
The Popularisation of Physics: Boundaries of Authority and

the Visual Culture of Science

Adam Nieman

A thesis submitted in partial fulfilment of the requirements of the University of the West of England, Bristol, for the degree of Doctor of Philosophy

December 2000

Faculty of Applied Sciences and Faculty of Humanities, University of the West of England, Bristol


Abstract

This thesis reformulates foundational issues in science studies by

developing an interdisciplinary framework for the analysis of popular science

texts. In particular, the ‘demarcation problem’ in the philosophy of science is

addressed by examining the role of popular science in determining what ‘science’

is taken to mean at any moment. The relation of scientific practice to the culture in

which it is embedded is explored through a study of scientific images. This results

in a comprehensive theory of scientific images with wide application. The thesis

also addresses communication between ‘science’ and ‘the public’. An emphasis

on foundational issues results in an improved understanding of the category

‘popular science’.

Questions about representation and the roles of popular science are

explored from two perspectives. Firstly, the thesis examines the role of context in

making scientific images meaningful. A model is proposed that relates imaging

techniques in science to both the visual culture in which they are embedded and to

human physiology. This in turn reveals the significance of ‘popularising’

scientific images. Secondly, the thesis examines popular texts as ‘forums for

negotiation’. The thesis argues that whether or not there are independent

(metaphysical) criteria for distinguishing science from non-science, questions

about which topics are ‘scientific’, who can legitimately call themselves

‘scientist’, and the limits of scientists’ authority and responsibility are determined

in practice by negotiation in popular contexts. Through textual analysis, the thesis

explores the role of popular science texts in maintaining and modifying the

boundaries between science and non-science and how the boundaries themselves

act as ‘creative pivots’ in popularisations.

Communication theory is central to questions of popular science and, the

thesis argues, a reflexive approach to communication is the foundation of any

meaningful conclusions drawn from analysis of popular science texts. The thesis

explores the ‘dialogical’ nature of communication in science with reference to

theoretical approaches drawn from the Bakhtin Circle. As a result of the emphasis

on foundational issues, the centrality of popular science to key issues in ‘science

studies’ research becomes evident.


Contents

Abstract  ii
Contents  iii
Figures and Tables  v
   List of Figures  v
   List of Tables  v
   Picture Credits  v
Preface  vi
   Acknowledgements  vii
1 Introduction  1
   What is Popular Science?  1
      The Public Understanding of Science  3
      Motivation for the Present Study  4
      ‘Popularisation’ as a Research Topic  6
   The ‘Dominant View’ of Popularisation and Some Alternatives  7
      The Problem with Popular Science (The Reflexive Imperative)  7
      The Deficit Model of Public Understanding of Science  9
      The ‘Dominant View’ of Popularisation  12
   Communication Theory and Popular Science  19
      The Communication Divide: Empirical Vs. Interpretative  21
      Media Effects and Linear Models of Communication (The Process School)  23
      ‘Critical’ Communication Theory (The Semiotic School)  25
      Communication Theory and Approaches to Popular Science  28
   A ‘Critical’ Account of Popular Science  33
      Contexts for Science: ‘Streams’, ‘Webs’ and ‘Forums’ (Alternatives to the Binary Division Between Popular and Professional Contexts for Science)  34
      Negotiation in Popular Science  38
      Interdisciplinary Science Studies  43
   The Following Chapters  48
      Images in Science  50
      Popularisation and the Boundaries of Science  51
2 Images and Imaging in Science and Popular Culture  53
   ‘Reading’ Visual Texts in Popular and Professional Science  53
      Preview  54
      Popular and Scientific Contexts for Scientific Images  57
      ‘Arbitrary’ and ‘Independent’ Elements: The Analysis of Scientific Images  59
   Historical and Disciplinary Contexts: ‘Metaphysical’, ‘Mechanical’ and ‘Interpreted’ Images  63
      The Evolution of Imaging in Science: Global Imperatives for Image Makers  63
      The Interpreted Image and the Significance of Context  67
      Disciplinary (Local) Imperatives for Image Makers  68
   Images and Interventions: Classifying Scientific Imaging Techniques  76
      Making Invisible Objects Visible  76
      Representation and Invisible Objects: Primary and Secondary Qualities  79
      Classifying Imaging Techniques: Scientific Imaging and Human Vision  84
3 ‘Pretty Pictures’ and Other Types of Scientific Image  91
   Towards Taxonomy  91
      ‘Pretty Pictures’: Scientists’ Own Perception of Popular and Professional Images  91
      Scientific Visualisation: A New Type of Image? A New Role for the Viewer?  97
      Scientific Images and Artistic Images: A New Look at an Old Distinction  108
   Examples  115
      ‘Reading’, ‘Viewing’ and the Context of Visual Texts: Pictures of Electron Waves  117
      Making Sense of Alien Landscapes: Pictures of Venus  134
   Conclusion  144
4 The Boundaries of Science: Polemical and Philosophical Approaches to the Demarcation Problem and the Rhetoric of Popular Science  146
   The Boundaries are not Fixed  154
      The Philosophy of Science  154
      Essentialist Accounts of Science  157
      The Question ‘What is Science?’: Polemical and Philosophical Uses  159
      ‘Pithy Definitions’ of Science: Summary  167
   Boundary Work  168
      Problems with the Cartographic Metaphor  168
      Beyond The Cartographic Metaphor: ‘Dependent Entities’  170
   ‘Science’ and ‘The Public’: Boundary Work and the Origin of Two Categories  176
      Two Categories: A Tempting Teleological Account  176
      Beyond Teleological Temptations: A Historical Account  177
5 Popular Science as a Forum for Negotiation  185
   Scientists and Non-Scientists, Science and Non-Science  185
      A Victorian Vision: Physicists as Torturers  187
      Nuclear Politics and Nuclear Physics: Robert Oppenheimer’s Reith Lectures  190
      Empowering and Disempowering Public Understanding of Science: Consensus Conferences  195
      Controversy in Popular Science: The Ghost in the Atom  197
      Negotiating Expertise: The Ghost in the Atom  205
      The Role of the Scientist in Popular Science: The Physicist as Shaman  209
   The Other Side of the Fence: ‘Appropriations’ of Science  210
      Art and Science: Inside Information  212
      Boundaries of Science in Literature: Gut Symmetries by Jeanette Winterson  218
      Boundaries and Collaborations Between Artists and Scientists: Talking of the Sex of Angels  223
      Boundaries of Science in Drama: Hapgood by Tom Stoppard  230
   What is Science?  234
6 Conclusion  235
   The Public Understanding of Science  241
   Towards a Dialogical Account  245
   Summary  247
   Images, Scientists, Culture and the World  249
Appendix 1: Imaging and Visualisation in the Science Citation Index  251
Appendix 2: Scanning Tunnelling Microscopy and Atomic ‘Corrals’  253
Appendix 3: David Phillip Anderson  255
Bibliography  262


Figures and Tables

List of Figures

Figure 1  Cartoon by Peter S. Mueller  1
Figure 2  Shannon and Weaver’s Model of Communication  24
Figure 3  Contexts in which scientific knowledge is communicated  35
Figure 4  Bruce Lewenstein’s ‘web’ model of science communication contexts  37
Figure 5  Engraving of a flea from Robert Hooke’s Micrographia, 1665  70
Figure 6  Photograph of a flea made with a microscope similar to the one used by Hooke  70
Figure 7  An engraving of Robert Boyle’s air pump  73
Figure 8  Arbitrary visual elements plot  89
Figure 9  ‘Visualisation’ and ‘imaging’ in the Science Citation Index (graphs)  103
Figure 10  Galileo’s watercolour sketches of the moon, 1610  112
Figure 11  Left: Thomas Harriot’s sketch of the moon. Right: A photograph of the moon  113
Figure 12  Detail from Lodovico Cigoli’s Assumption of the Virgin (1610–12)  115
Figure 13  Scanning tunnelling micrographs of 48 iron atoms placed on a copper surface  118
Figure 14  Two representations of an atomic orbital (4f0)  133
Figure 15  Four views of the surface of Venus by David P. Anderson  134
Figure 16  Two ‘photographs’ of Venus (Mariner 10 and Galileo)  135
Figure 17  ‘Photographs’ of the surface of Venus made by Russian landers  136
Figure 18  Two alternative renderings of the Magellan data (JPL and UCLA)  137
Figure 19  ‘Visualisation’ and ‘imaging’ in the title, abstract or keywords (graphs)  252

List of Tables

Table 1  Summary of Two Conceptions of Communication in Science  33
Table 2  Arbitrary vs. Externally Determined Aspects of Images  60
Table 3  Summary of three main attitudes towards images in science  68
Table 4  Imaging and Visualisation in the Science Citation Index  251

Picture Credits

Page 1 (cartoon) © Peter S. Mueller; page 70 (engraving of flea) © National Library of Scotland; page 70 (micrograph of flea) © Brian J. Ford; page 113 (Harriot’s sketch) © Lord Egremont; page 113 (photograph of moon) © António Cidadão; page 118 (STMs) © IBM; page 133 (orbitals) © Adam Nieman; page 134 (Venus) © David P. Anderson; page 137 (Maat Mons) © JPL; page 137 (Western Ishtar Terra) © UCLA.


Preface

The following account addresses, amongst other issues, how the authority

to speak about or on behalf of science is established. During the period the

research for this thesis was conducted, there was considerable controversy around

this very question. Should it be left to scientists themselves to comment on the

development of science and its place in wider culture or are historians,

philosophers, sociologists and cultural theorists – despite their lack of technical

knowledge – in a better position to approach the subject objectively? So fractious

has the debate become that in American universities it goes by the name of the

‘Science Wars’.

The main problem with conceiving the debate as a territory dispute is that

productive approaches to the subject of science become tainted merely by

association with ‘the other side’: ‘empirical – good’, ‘interpretative – bad’ or vice

versa. Wars have the effect of clarifying divisions but obscuring common ground.

There is much more common ground to be occupied than science warriors are

prepared to admit and I hope that becomes clear in what follows. Other than

finding the whole debate counterproductive, I do not have strong feelings for one

side or the other. Nevertheless, in common with all commentators in the present

academic climate, I am obliged to declare my interests from the outset.

I like science. I feel it is undervalued by government and by the population

at large and I would like to see it attract more enthusiasm and more funding. I

generally enjoy popular science and I am the ‘natural audience’ for many of the

texts I have analysed over the years. I have also enthusiastically taken part in

public understanding of science initiatives. If asked, I would claim to be ‘pro-

science’ and against ‘anti-science’ (but, out of context, these are empty and

meaningless claims). However, these opinions and activities do not affect the

following account. Somebody with an opposing attitude would still have been

able to come to the same conclusions that I have.

What may be more important to the shape of the following account is my

education and experience – in particular, the four years I spent as a student of

physics. Andrew Ross dedicates one of his books to “all of the science teachers I never had” because the book “could only have been written without them” (Ross 1991: iv). In contrast, the research on which this thesis is based would not have taken the shape it did without my formal study of science.

Despite this, I believe that a background in professional science is neither

necessary nor sufficient to comment meaningfully on science in popular culture.

Indeed, I have learnt more from commentators without a formal background in

science (including, on the odd occasion, Andrew Ross) than those with degrees in

natural sciences. However, this disparity merely reflects where scientists’ intellectual interests generally lie. It emphatically does not suggest that scientists have an intrinsic inability to make sense of the social context of their work.

With respect to the Science Wars, I do not advocate compromise. I want

instead to demonstrate that the common ground is far more productive than any

entrenched position. Making ‘science’ an object of study requires theoretical

flexibility. I advocate an eclectic, reflexive and imaginative approach to the

subject, which involves making sense of ‘science’ in many incompatible (but not

inchoate) ways at once.

The format of this thesis conforms, for the most part, to the Modern

Humanities Research Association (MHRA) Style Book (1991). However, to assist

the reader, I have departed from the MHRA format in two ways. Firstly, I have

used the Author-Date (Harvard) System of referencing and also made use of

footnotes. Simple references are provided in the main body of the text using the

Author-Date System. Only if more clarification is required is a footnote used. (For

convenience, I have occasionally added the title of a work to references that

appear in footnotes.) Secondly, I have used British spellings in all (modern)

quotations even when American spellings were used in the original source.

However, where titles are quoted, the original spelling has been retained.

Acknowledgements

The following account has benefited immeasurably from the support,

cooperation and encouragement of colleagues, friends and family. Thanks are due

in particular to my supervisors, Chris Philippidis and Peter Broks, who originally

conceived the interdisciplinary project, and Gillian Swanson who joined the

supervisory team in the final months. I am grateful to my colleagues in the School

of Interdisciplinary Sciences – particularly Felicity Mellor for her encouragement

and her insight. I was fortunate to share an office with Ben Johnson who

distracted me constructively over a period of some years. Of the many people at the University of the West of England to whom I owe a debt of gratitude, I am

particularly indebted to Phil Topham and Chris Shorland. My research into

scientific visualisation was greatly assisted by the kind cooperation of Donald

Eigler of the IBM Almaden Research Center, San Jose and David Phillip

Anderson of the Southern Methodist University, Dallas. I am also grateful to

everybody who supplied images and/or permission to use them.

I have benefited greatly from encouragement and interest from friends and

family (who know who they are) but I have been overwhelmed by the kindness of, and support from, Aileen Nieman, Pamela Nieman, Dave Southerden, Julian

Nieman, Sandy Horley, Ali Burrows and Liz Cadogan.


1 Introduction

Figure 1: Cartoon by Peter S. Mueller, from an American popular science magazine, circa 1985. Caption: ‘Spreading misinformation about the mass of neutrinos’.

What is Popular Science?

The cartoon above is ‘popular science’ – the subject of this thesis. It also

shows somebody in the process of popularising science. Neutrinos are weakly

interacting subatomic particles that exist in huge numbers throughout the universe.

If they have any mass at all then it is tiny. When this cartoon was first published,

in the 1980s, there was considerable interest in the exact mass of the neutrino

because they are prolific enough for any mass to have cosmological implications.

In particular, if neutrinos have mass then the density of the universe may be

sufficient to halt its expansion and cause it to recollapse in a ‘big crunch’.

This, presumably, is what the people in this cartoon are afraid of. The cartoon is

amusing because it portrays people reacting to scientific knowledge in an

inappropriate way – this is not how anybody actually would react to news about

the mass of neutrinos. Indeed, running away would be illogical anyway. (For

those unfamiliar with particle physics and cosmology, the cartoon may appear to

depict people in fear of huge neutrinos knocking them senseless but this

interpretation is not nearly so funny. The fact that the cartoon is an ‘in-joke’ is

also significant.)

The mass of the neutrino may be interesting and important but it is not

interesting and important in the way it is portrayed as being here – it is not

personal or immanent knowledge. What the cartoon reflects is that we orient ourselves to scientific knowledge – make it meaningful for ourselves – in

particular ways. It is not ‘neutral’ in that sense. However, what we ‘do’ with

scientific texts is not prescribed. They may come with a ‘preferred reading’ (see

page 39) but audiences have various powers to challenge this. The account of

popular science in this thesis is concerned with the social uses of popular science

texts, with how they interact with each other and with how science is embedded in

wider culture. These broad questions about the communication of science can be

distinguished from questions about the ‘effectiveness’ of popular science texts or

how much information is transmitted in the popularisation process (and how much

distortion takes place).

In addition to shifting emphasis away from effectiveness, the notion of

‘misinformation’ in science communication is also an inadequate concept for the

purposes of this thesis. The notion of misinformation implies a will to deceive.

Deliberate deviation from the truth for the purpose of deception will (inevitably)

be an occasional feature of science communication. In general though, we can

assume that scientific accounts are ‘truthful’ and their authors do not intend to

deceive their audiences. On the other hand, all communication in science, whether

in professional or popular contexts, attempts to persuade and is shaped by the

particular interests of its authors. In turn, science texts are interpreted by

audiences who bring their own interests to bear in making them meaningful.

Misinformation (telling lies) is a trivial issue in popular science. What is much

more interesting (and requires a much more subtle approach) is to ask how authors

achieve their goals by telling the truth in different ways. This leads on to questions

about how (or if) readers are empowered to challenge the (preferred) interpretation

on offer. The ideas of effectiveness, distortion and misinformation all implicitly

rely on the assumption that there is a perfectly neutral way to articulate truths

about the world against which alternatives (and the social baggage that comes with

them) can be measured. This is not the case.

It is not just the content of scientific texts that we should be concerned

with but also the way they invite their audience to orient themselves towards

science. This is not to suggest that the content of popular science texts is

unimportant to these broader questions. Indeed, this question of orientation is

explored by application of content analysis in an account by Peter Broks of

science in Victorian and Edwardian periodicals – for instance, where he notes,


Common to all the periodicals was the presentation of science not as an

approach to the world, but a collection of facts and feats. In the late

Victorian consumer revolution, science was another commodity, to be

received, not participated in. It was not a process but a product.

(Broks 1996: 37)

As with Peter Broks’s study, the analysis of popular science here

concentrates on ‘the text’ rather than on audiences. Thus, textual analyses rather

than audience surveys or ethnographic studies have been applied. Unlike Broks’s

study, the emphasis is firmly on contemporary texts and debates.

The Public Understanding of Science

The public understanding of science has, in recent years, been the cause of

concern from many quarters1. At the same time, there has been a change in the

pattern of exposure that science gets in the media and a change in the place of

science within popular culture. Many of these developments are the result of a

concerted effort to raise the public profile of science. This effort represents an

identifiable movement whose evangelical approach reflects its genesis in

professional institutions and public relations departments. In addition to its public

relations role, the movement aims to increase the democratic leverage of ordinary

people by empowering them to intervene in questions with a technical content.

There is an inherent tension between these twin objectives: an activity that

seeks to serve the interests of one specific group (scientists) is not necessarily

compatible with an activity that seeks to empower all groups in society (by

strengthening the effectiveness of the democratic process). The democratic

objective includes empowering groups whose interests diverge from those of

scientists, for instance those who compete with scientists for resources or who

object to scientific practices on moral grounds. The conflict between the twin

objectives of public relations and democratic empowerment has not been adequately addressed and in many places is not even recognised. Significantly, it is not recognised in recent science policy2.

1 Recent concern about the public understanding of science was galvanised in the mid-1980s by the report of a Royal Society committee chaired by Walter Bodmer (Royal Society Council 1985; see also Bodmer 1986). In the same year, the British Association for the Advancement of Science, the Royal Society and the Royal Institution founded the Committee for the Public Understanding of Science (COPUS) to improve public awareness of science and technology in the UK. A journal devoted to research in this area was started in January 1992 (Public Understanding of Science, IOP Publishing).

The main reason the conflict can be overlooked is that the public

understanding of science movement has, until recently, approached science in

moral terms. A basic tenet of the movement is (or has been) the idea that science

is like health or happiness: it is essentially good and therefore any lack of it is bad.

The interests of scientists (served by the public relations agenda) can thus be seen

as concomitant with those of ‘the people’ (served by the democratic

empowerment agenda) so there is no conflict between the goals. Contrary opinion

about the value of science can be dismissed as a ‘misunderstanding’. The idea that

a goal of the movement should be to empower people to halt the work of scientists

(through informed democratic organisation) is anathema to many advocates of the

public understanding of science but is, nevertheless, a consequence of the democratic empowerment objective.

2 See in particular the 1993 white paper on science and the report for the Office of Science and Technology of the Committee to Review the Contribution of Scientists and Engineers to the Public Understanding of Science, Engineering and Technology (Wolfendale 1996). The Wolfendale report summarised government policy on the public understanding of science thus:

   1.4 The objectives of the Government’s policy on public understanding are:

   1.4.1 to contribute to the economic wealth and quality of life of the Nation, particularly by drawing more of our best young people into careers in science, engineering and technology

   1.4.2 to strengthen the effectiveness of the democratic process through better informed public debate of issues of public concern arising in the fields of science, engineering and technology.

   1.5 The main obstacle to achieving these objectives was perceived by Government to be the relatively low status of science and engineering in the eyes of the general public relative to other competitor nations. The policy therefore is about changing public attitudes as a means to achieving the objectives. (Wolfendale 1996)

Further comment on the conflict between these twin objectives within the public understanding of science movement can be found in Doorman (Ed.) 1986 (Introduction) and Nieman 1996.

Motivation for the Present Study

Recent developments in research into the public understanding of science formed one of the factors that prompted this study of the popularisation of science. Others were the increasing profile of popularisers and their work (such as

Stephen Hawking and his A Brief History of Time, 1988)3 and the cultural turn

taken by science studies in recent years. In addition, the ‘Two Cultures’ debates

precipitated in the 1960s by C. P. Snow have, in recent years, taken two turns that

each impact on the study of popular science. On the one hand, discussions

concerning the relations between science and art have become increasingly

sophisticated and have led to interesting collaborations between artists and

scientists. On the other hand, increasing interest in cultural aspects of science

from non-scientists has led to an organised reaction from some parts of the

scientific community (known as the ‘Science Wars’ in the United States).

The research on which this thesis is based was directed towards the

popularisation of physics. In particular, the research involved textual analysis of

popularisations of quantum mechanics. Interventions in the ‘Two Cultures’ and

‘Science Wars’ debates were also subject to textual analysis. A separate (but

related) strand of the research was directed at modern imaging techniques in

physics and how these are embedded in visual culture. Quantum mechanics was

identified early on as a fruitful subject to pursue because of the sheer number of

popularisations available and its potency as a symbol of modern physics. Imaging

and the popularisation of scientific images were studied because, as Cooter and

Pumfrey point out (Cooter and Pumfrey 1994) non-verbal communication in

popular science has, so far, failed to attract the attention it deserves. The main

motivation for studying imaging was the pursuit of general conclusions about

popular science. In addition, interest in scientific imaging came from previous

experience in computer graphics and work in a scientific picture agency (The

Science Photo Library, London). Interest in the boundaries of science emerged

from both the work on quantum mechanics and the work on imaging and opened

up new avenues of research to pursue (and provoked interest in an even wider

selection of texts).

The results are presented in the following four chapters. Chapters 2 and 3 concentrate on scientific images while chapters 4 and 5 examine how boundaries between what counts as ‘science’ and what counts as ‘the popular’ are constructed in popular science texts. In the case of imaging, the research was not limited to the study of texts but extended to the dynamics of the research communities involved and the development of visualisation technologies. In contrast to the other strands of the research, imaging was studied in a ‘hands-on’ manner, making full use of the lived experience of the visualisation process as well as by applying techniques of textual analysis. That is, studying the visual culture of physics involved producing visualisations: rendering real data, simulations and mathematical objects visually in various different ways, using various different tools, to identify the choices available to image makers and the significance of those choices. It also involved following developments in visualisation journals and magazines and attending talks and conferences to better understand the visualisation community. In addition, the growth in imaging techniques in science was gauged numerically with citation analyses (see Appendix 1: Imaging and Visualisation in the Science Citation Index, page 251).

3 For an assessment of the impact of A Brief History of Time see Michael Rodgers 1992.

‘Popularisation’ as a Research Topic

‘Popularisation’, then, is a problematic term. In common usage, popularisation can signify ‘vulgarisation’, a process of translation, a democratic activity, a frivolous activity or a public relations exercise, or be understood as a mixture of all of these. The meaning of ‘popularisation’ and ‘popular science’ is determined largely by the questions we bring to the subject, but many accounts of popular science assume a coherence for the subject that does not bear scrutiny.

The apparent coherence of the term ‘popularisation’ comes from a pervasive ‘common-sense’ about the subject that has an important influence on the way it is addressed. The account of popular science in this thesis departs from this common-sense in several ways. The contrast is significant, so the commonplace assumptions that dominate in this field are discussed in the next section. What follows is partly an account of the diversity of approaches to the subject of popular science. Nevertheless, I attempt to show that distinct and diverse perspectives can be combined coherently. In particular, I look at how the concept of ideology can help us to understand the relation of science texts to their social contexts. This, I will argue, is essential both to understanding how science is made meaningful and to accounting for the ways it is understood differently by scientists and non-scientists.


The plurality of approaches to the subject can be seen as a resource rather than an impediment as long as the relation between them is understood. There are many distinct reasons for investigating popularisation. Clarity about exactly which questions are being brought to bear can help to alleviate the inevitable methodological dilemmas that this multiplicity creates.

‘Popular science’, then, is a collection of parallel subjects. Questions like ‘Why popularise physics?’ or ‘How is popularised physics understood?’ will have a range of parallel answers depending on exactly what we are interested in. Whichever way we look at them, though, questions like these require us to ask about science from the perspective of a theory of society. They are not abstract questions and cannot be understood without reference to our motivation for asking them.

The ‘Dominant View’ of Popularisation and Some Alternatives

The Problem with Popular Science (The Reflexive Imperative)

Before we can even begin to think about communication in science, we need to highlight a perennial cause of confusion, which occurs on (at least) two levels. The cause of confusion is encapsulated in the following question: does the universality and reliability of scientific knowledge make questions about the communication of science simpler or more complicated?

The answer, when the question is put as bluntly as this, is an unequivocal ‘both’. All commentators on communication in science could point to ways in which the special status of scientific knowledge simplifies problems and also identify other ways in which it compounds them. But the question itself is rarely, if ever, addressed explicitly. Instead, implicit assumptions about scientific knowledge and the purpose of communication in science frame the subsequent argument. Broadly, the literature offers us two alternatives. For some, in comparison with the messy social contingencies and ideological issues associated with other communication efforts (for example, news reporting), questions about the communication of science are simple. In other words, ‘because there are facts in science, we do not have to worry about all that social stuff’. For others, communication efforts such as, say, news reporting are less problematic than popular science because the social imperatives that drive them are more easily discernible. Popular science texts, in comparison, disguise the ideological work they do more effectively, making their analysis more difficult.

Both attitudes have some merit on a pragmatic level, except when it comes to communication between the two camps. The underlying issues are so fundamental that it is difficult for each side to see the alternative view as anything other than perverse. The solution to this dilemma is to take nothing for granted in any account of communication in science – to make every assumption explicit. When it comes to understanding communication in science, fools rush in where angels fear to tread.

On a slightly less fundamental level is the question of communication itself. All accounts of popularisation, of the public understanding of science or of communication within professional science invoke theories of communication. As with philosophical issues, ‘communication’ is often naïvely taken for granted in accounts of science. Theories of communication are invoked implicitly rather than explicitly, leading to profound confusion and unnecessary antagonism between rival approaches.

The research on which this account is based was aimed both at popular science texts and at the basic assumptions of current approaches to the public understanding of science. At public understanding of science conferences and events, I was as interested in observing how the debate was framed as I was in hearing reports of (often excellent) research and interventions. With a few notable exceptions, a characteristic of both research and interventions in the public understanding of science is a lack of reflexivity. That is, the approaches adopted and the ways in which research questions are framed are ‘naturalised’ rather than justified or explained. This is not necessarily a bad approach to adopt. Research in all fields has to start from somewhere. In some fields, the foundations on which methodologies are based can be utterly mysterious (and taken for granted) with no detrimental effect. Progress can be made within an arbitrary framework4. However, this is not so in the case of the public understanding of science.

4 Interestingly, quantum mechanics could serve as an example here. As we shall see below, foundational issues in quantum mechanics were all but ignored for a generation following the establishment of the ‘Copenhagen Interpretation’. It was good enough for most physicists to believe that foundational questions had been resolved and that the framework in which they produced results at such a prodigious rate was robust – they did not feel the need to address those issues themselves. Only relatively recently have physicists again adopted a more reflexive approach to their results in quantum mechanics.

One reason that the foundations of public understanding of science research cannot be taken for granted is that there is not remotely enough consensus to justify such a move. As John Durant noted in the inaugural issue of the journal Public Understanding of Science, it is not a conventional academic discipline but, rather, an emerging interdisciplinary research field. Public understanding of science is not what Thomas Kuhn might describe as ‘paradigmatic’: there is no generally acknowledged exemplar, no universally accepted model, no body of securely established theory (Durant 1992: 1). However, there is in public understanding of science research a pervasive ‘common-sense’ that structures the interdisciplinary field, which is explored in depth below.

As we have seen above in relation to the two attitudes reflected in the literature, researchers adopt competing perspectives but, without a degree of reflexivity in their approach, have no way to understand the relation of one perspective to another. That is, researchers can recognise differences between each other but not talk about them. This account of popular science weaves a careful path through both the philosophical and methodological issues at the heart of any questions about science. Readers who disagree with the conclusions reached and the methods adopted here should, at least, understand why.

The Deficit Model of Public Understanding of Science

In many ways, science is substantially more straightforward than other issues. It gives us straightforward, reliable knowledge about nature. For instance, ‘the charge of an electron is 1.602176462 × 10⁻¹⁹ ± 0.000000063 × 10⁻¹⁹ coulombs5’. Perhaps the way to judge science communication would be to measure the fidelity with which ideas like this are transmitted. For instance, we might deem communication of science to be successful when the pure, universal facts can be distinguished from the particular context in which they are presented. The important aspect is the pure knowledge – the idea that the charge on an electron is 1.602176462 × 10⁻¹⁹ C, say. If this fact could not be abstracted from the context in which it was imparted (that is, if the audience made it meaningful in a way peculiar to the context itself) then we might say that the communication had failed in some way. The job of research into the public understanding of science would then be conceived in negative terms – working out what obstructs the flow of pure scientific knowledge to the audience.

5 1998 CODATA recommended value. See International Aspects of Establishing Recommended Values, http://physics.nist.gov/cuu/Constants/international.html (August 2000).

The argument above invokes a (very crude) ‘deficit model’ of the public understanding of science. According to deficit models, ‘science’ is conceived as an unproblematic body of knowledge. That is, science is equated with the collection of facts that (it is believed) comprise ‘scientific knowledge’. Public understanding, then, is a measure of the quantity of these facts that individual members of the public are acquainted with. Another way to put this is to note that public understanding of science is the inverse of public ignorance of science6.

Deficit models have come under sustained criticism since the inception of public understanding of science research in the late 1980s. When Durant, Evans and Thomas invoked such a model in their analysis of survey data, they were forced to defend it at length, carefully justifying its relevance with reference to the particular aims of their research. It was, however, rejected as a general approach to the public understanding of science because it is “not suited to handling all aspects of the relationship between science and the public” (Durant, Evans and Thomas 1992: 163).

Durant et al.’s defence of the limited applicability of a deficit model also serves as a good summary of the various objections to deficit models in general. They identify three principal objections: 1) it is simply not appropriate to equate science with a collection of facts; 2) deficit models overlook the fact that much if not most scientific knowledge is remote from and irrelevant to everyday life; 3) deficit models embody the specific value judgement that scientific understanding is inherently good (Durant, Evans and Thomas 1992: 162-163).

In the following account of popular science, a deficit model of the public understanding of science is inappropriate for two main reasons. Firstly, as with objection 1 above, we shall find that science cannot be reduced to a collection of unproblematic facts. Secondly, we shall find that the communication of science (indeed, communication in general) cannot be reduced to the (mere) transmission of messages. Rather, messages are made meaningful through an interaction between a reader and a text. Thus, the facts (which en masse we are tempted to equate with ‘science’) are always made meaningful with reference to more sophisticated ideas about the relation of science to its ‘others’.

6 Actually, PUS = (1 − PIS), where PIS is public ignorance of science and both PUS and PIS are normalised to the sum of scientific knowledge.

Returning to the original ‘unit’ of scientific knowledge, what does knowing the charge of an electron entail? There are many ways in which such knowledge can be made meaningful. Most of these will be empowering to an individual, giving her or him a slightly fuller and more robust picture of the world she or he inhabits. Importantly though, there is a variety of ways in which the knowledge would be empowering – acquiring ‘understanding’ is not purely cumulative. In addition, there are ways in which the knowledge can be actually disempowering (perverse as this may seem for such a straightforward example). For example, in a context in which no distinction was made between the various sources of scientists’ authority, the precision with which such a quantity has been determined could be mobilised rhetorically to add weight to any intervention by a scientist. Although we learn that the charge is 1.602176462 × 10⁻¹⁹ C, the only meaning this takes on in this hypothetical context is related to the authority of scientists. The knowledge by itself is not ‘available’ for any other use.

The hypothetical example above (though admittedly rather strained) reveals that facts such as the charge of the electron are not simply exchanged. We are encouraged to orient ourselves to them in particular ways. For instance, in the context of this thesis, the charge of the electron is of no intrinsic interest at all. It stands as an example of a ‘unit of scientific knowledge’. Both reader and author recognise it as an arbitrarily chosen example and orient themselves to the knowledge accordingly. Importantly, this context is not one in which a reader is empowered to make the knowledge meaningful for himself or herself. There is only one way of making the charge of an electron meaningful on offer here: it is an example, and so the reader’s understanding of science is not augmented. We may, of course, have a context of our own in which we can make such knowledge meaningful. For instance, to somebody who (hypothetically) has painstakingly measured the charge of the electron as part of their (aborted) training as a scientist and obtained the wrong answer (by several orders of magnitude), the number of significant figures in the quoted value may be particularly noteworthy.

To some extent, all communication of science (including all public understanding of science initiatives) asks us to adopt a particular attitude to the subject rather than merely passing on facts. Sometimes the example of Prometheus is enough to temper our enthusiasm for genuine and comprehensive public understanding of science. For instance, public discussion about drugs is one place where we might expect to learn some science; yet, public ‘information’ initiatives tend to be about prohibition rather than pharmacology or physiology. To encourage public understanding of the technical details of drugs might be seen as condoning drug taking. Even amongst scientists, some scientific knowledge is not considered neutral and great care is taken to frame its explication to ensure that it carries the correct moral values.

The deficit model has ceased to be influential in research into popular science texts and the public understanding of science. Approaches that adopt a deficit model uncritically are dismissed as naïve. However, it still has influence over interventions in the public understanding of science. Science popularisers and public understanding of science activists may still conceive their task as ‘filling a gap’ or getting an (unproblematic) message across. The deficit model is thus part of the pervasive common-sense about science and popularisation that structures both public understanding of science practice and public understanding of science research.

The ‘Dominant View’ of Popularisation

The pervasive common-sense that structures discussion of popular science is dubbed the ‘dominant view’ of popularisation by Stephen Hilgartner (1990)7. The term refers more to a hazy collection of attitudes towards the subject than to a clearly articulated programme. Nevertheless, Hilgartner manages to characterise the dominant view succinctly in a way that will serve this discussion also:

The culturally-dominant view of the popularisation of science is rooted in the idealised notion of pure, genuine scientific knowledge against which popularised knowledge is contrasted. A two-stage model is assumed: first, scientists develop genuine scientific knowledge; subsequently, popularisers disseminate simplified accounts to the public. Moreover, the dominant view holds that any differences between genuine and popularised science must be caused by ‘distortion’ or ‘degradation’ of the original truths. (Hilgartner 1990: 519)

7 See also Peter Broks 1996: 128-133.

The dominant view provides an inadequate framework for addressing many of the questions we can ask of popular science. However, because the dominant view is dominant, alternatives need explicit justification. Space for an alternative, ‘critical’ account of popular science is found here by problematising aspects of popular science that the dominant view takes for granted. Hilgartner’s definition is augmented to include a wider constellation of assumptions that underpin many accounts of popular science. The assumptions inherent in the dominant view are examined along several axes: ‘science’, ‘scientists’, knowledge, authority, audience, ‘communication’ and textual analysis. We shall start with the last of these axes and examine how the dominant view frames the analysis of popular science texts.

The first criticism we can level against the dominant view is that, by assuming boundaries between cultures, it presupposes that the only object for analysis is the transfer of cultural items across the boundaries. This is a concern that is dealt with at length by Whitley (1985). The emphasis on ‘diffusion’ of knowledge in the dominant view is discussed by Cooter and Pumfrey (1994). Diffusion may be perceived as a passive ‘trickle-down’ or as osmotic, but in either case it is not construed as dynamic. They contrast these ‘watery’ metaphors, and the related ones of contamination, contagion, seduction or colonisation, with those of grafting, appropriation and transformation to undermine the naturalness of the diffusionist model.

The diffusion model blinds us to several aspects of popular science. In particular, it fails to make clear that popular culture can generate its own natural knowledge. Cooter and Pumfrey point out (1994) that this natural knowledge need not be limited to popular lore and magic, nor to ‘radical appropriations’ like phrenology. Instead, they make reference to Anne Secord’s (1994) paper on working-class botany that appears in the same issue of History of Science. Another gap in the diffusionist account is the mechanism by which ‘successfully popularised’ natural knowledge may take on very different meanings within popular culture from those intended by its popularisers. Peter Broks takes this theme further, rejecting the idea that the public consumes scientific messages passively. According to Broks (adopting terms coined by Neil Ryder and Logie Barrow respectively), the study of popular science should concentrate instead on how the public “actively produces its own ‘vernacular science’ and its own ‘democratic epistemologies’” (Broks 1996: 133).

The dominant view also places emphasis on the pedagogical role of popularisation. It is assumed that the primary purpose of popular science is to educate its audience. However, this assumption cannot be supported – popular science takes on a wide range of roles, as we shall see. When it comes to the analysis of texts, the dominant view’s emphasis on pedagogy restricts what counts as popular science. The critical account of popular science that we are seeking here should be applicable to all manifestations of science in popular culture, whether they are explicit ‘popularisations’ that conform to the dominant model or ‘appropriations’ of scientific ideas.

We turn now to the notion of ‘scientist’. The difference between the dominant view of popularisation and the ‘critical’ account offered here is that the former assumes that there is a metaphysical distinction between the categories ‘scientist’ and ‘non-scientist’ while the latter seeks a social explanation for the categories. According to the dominant view, scientists are the source of the knowledge that is ultimately popularised. The categories ‘scientist’ and ‘non-scientist’ appear natural and inevitable and seem to emerge from the intrinsic properties of the scientific process itself. In other words, according to the dominant view, the means by which we distinguish scientists from the rest of society – their authority, their responsibilities and their privileges – are all assumed to stem from scientists’ unique relationship to nature. Whatever the source of scientists’ authority (and it may well be the case that the intuition of the dominant view has some foundation), a critical account of popular science needs to address the constitution of the categories explicitly rather than taking them for granted.

In this critical account, emphasis is placed on how the distinction between the categories ‘scientist’ and ‘non-scientist’ is signified in popular science texts themselves. The account does not offer metaphysical distinctions between the categories but instead asks how distinguishing them is approached as a practical problem for both scientists and various groups of non-scientists. Various approaches to distinguishing the categories are discussed in greater depth in chapter 4.

The differing approaches to the categories ‘scientist’ and ‘non-scientist’ have implications for the authenticity of scientific knowledge. The dominant view recognises two categories of scientific knowledge – authentic and popularised. The critical account does not make the distinction and does not posit any a priori criteria by which such a distinction can be made. In the critical account, scientific knowledge is understood with reference to its context (including its intended audience, its implied audience, the genre it conforms to, the author’s goals, the uses to which it is put, etc.). From this point of view, scientific knowledge forms a spectrum from laboratory ‘shop-talk’ and colloquia at one end to popular expositions in the mass media at the other (Hilgartner 1990: 528). Thus, the boundary between authentic and popularised knowledge is arbitrary, and the act of defining a boundary is understood in terms of the social interests it serves.

Put another way, the critical account does not distinguish between categories of knowledge (it makes no epistemological claims) but does distinguish between the uses to which knowledge is put. In contrast, the dominant view of popularisation sees authentic scientific knowledge as an independent category. For the dominant view, the use to which the knowledge is put is a peripheral issue – the important thing is to increase the amount of knowledge. For the critical account offered here, the epistemological status of the knowledge is a peripheral issue – the important thing is why it is being invoked in a particular context. Therefore, the dominant view of popularisation and the critical view outlined here address very different questions.

According to the critical account, the meaning of scientific knowledge depends on its context. Sometimes conscious and explicit recontextualisation is involved, as in the case of popularisation. But élite articulations of scientific knowledge (talks, papers, textbooks, visualisations) can also find themselves, without any alteration of the original text itself, in new contexts where they take on new meanings. The effect of context on meaning is explored, with particular emphasis on the popularisation of scientific pictures, in chapters 2 and 3.

All articulations of scientific knowledge are prone to error, but the relevance of the errors depends on the use to which the knowledge is being put. For example, the solution to an equation written out on a blackboard may be wrong, but if its context is a film about scientists then the error is not relevant to the use to which it is being put – signifying ‘scientificity’. Nevertheless, the critical account recognises the perception of a clear distinction between authentic and popularised scientific knowledge. It is the perception of two coherent categories, rather than the categories themselves, that is the object of study by the critical account. It is the perception, rather than the categories themselves, that plays a significant role in how science is made meaningful in different contexts.

The question of authenticity leads on to another set of assumptions that underpin the dominant view. These relate to the categories ‘science’ and ‘non-science’. As with the categories ‘scientist’ and ‘non-scientist’, the dominant view of popularisation does not recognise the distinction as a problematic one. The distinction is assumed to be entirely natural and timeless and is explained with reference to the intrinsic qualities of scientific knowledge. That is, the dominant view includes the assumption that there are objective, a priori criteria by which to distinguish the two categories. In contrast, the critical account developed here pragmatically avoids any such assumption. It does this to reveal more clearly the effort that goes into establishing the boundaries between the two and the imperatives for establishing them. (The distinction is explored in much greater detail in chapters 4 and 5.)

The critical account places emphasis firmly on the construction of the categories ‘science’ and ‘non-science’ in popular culture. The motivation of an audience to attribute knowledge to either category is explained with reference to how knowledge is represented rather than with reference to external imperatives. That is, for the critical account of popular science, popular representations and the ‘negotiation’ that goes on in popular forums are more important than a) how the word ‘science’ is defined in élite forums or b) what science ‘must’ be in order to give us reliable knowledge of the world.

Another essential distinction between the dominant view of ‘science’ and the view adopted here concerns the coherence of the term. The dominant view assumes that ‘science’ is a coherent category that can, in principle, be defined objectively and uniquely. The critical account, in contrast, understands science as an incoherent category. The institution of science is polysemic – it can take on significance in many different ways. The meaning of ‘science’ depends on the context in which it is invoked. Thus, in this account, no definition of science is offered. Instead, the account provides a way of analysing the various ways science is defined in popular texts8.

The coherence of the term ‘science’ as it is invoked and deployed within the dominant view of popularisation should not, however, be overstated. The dominant view understands ‘science’ in several ways. It is: a) a body of knowledge; b) a collection of practices and attitudes; c) a community; d) a vague mixture of all of these. More important to the dominant view than any particular definition of science is the assumption that there is a straightforward and coherent, ‘pithy’ definition that will capture the essence of the term. This assumption plays a rhetorical role in popular science that is discussed in chapter 4 (page 159).

It appears that we are presented with a dilemma. On the one hand, there is a legitimate desire to define science according to a priori criteria. On the other hand, we realise that doing so blinds us to how science is actually defined in a social context. How do we proceed? We have to be able to understand ideas about science in two ways simultaneously and, crucially, be aware that we are doing so. The tempting alternative to adopting this complex approach is to adopt a view of science either as a body of knowledge or as an instrument of power in society. The nature of science makes this hard distinction apparently viable, but scrutiny reveals a different story. Neither purely metaphysical nor purely cultural accounts of science can claim to be comprehensive. However, an account that ‘brackets off’ either metaphysical or cultural issues can help to illuminate both perspectives so long as it is clear about the types of questions it addresses. The emphasis on the social context of science in this account is not a rejection of metaphysical accounts. The philosophy of science and its relation to sociology and cultural studies of science is discussed further in chapter 4 (page 154). The problem of forging a coherent account from diverse theoretical perspectives in science studies is dealt with below (page 38).

8 Thomas Gieryn speaks of ‘ideologies of science’ to understand the different ways science is defined in a social context (Gieryn 1983). However, this account adopts Gunther Kress’s more subtle distinction between ideology and discourse because of the handle it gives us on the manifestation of ideology in texts.

The relation between language and ideology depends on the category of discourse. Any linguistic form considered in isolation has no specifically determinate meaning as such, nor does it possess any ideological significance or function. It is because linguistic forms always appear in a text and therefore in systematic form as the sign of a system of meaning embodied in specific discourse that we can attribute ideological significance to them. The defined and delimited set of statements that constitute a discourse are themselves expressive of and organized by a specific ideology. That is, ideology and discourse are aspects of the same phenomenon, regarded from two different standpoints. (Kress 1985: 30)

Another problem with the dominant view of popularisation is the way it conceives the audience for popular science. Within the dominant view, the audience for popularisations – ‘the public’ – is understood as an undifferentiated mass. This conception suffers from the same problem that Raymond Williams notes in relation to ‘mass culture’:

There are in fact no masses; there are only ways of seeing people as masses. (Williams 1958: 300)9

Williams goes on to note that our society presents many opportunities for

such ways of seeing. What we actually see are other people. In practice, though,

we mass them and interpret them according to some convenient formula, which

allows us to think of them as a single entity. The dominant account of

popularisation adopts such formulas for identifying audiences (distinguishing ‘us’

from ‘them’) uncritically. The way ‘the masses’ are defined and identified appears

entirely natural and unproblematic in most accounts of popular science.

For a critical account of popular science, the formula by which audiences

are grouped by others is at least as interesting as the masses themselves. That is,

critical accounts concern themselves with the social function of these ‘ways of

seeing other people’. The critical account outlined here recognises many different

audiences for popular science (including scientists themselves). Different

audiences are likely to want different things from popular science and may also be

in conflict with each other. Just as with the concept of ‘scientist’, the audience for

popular science is not decided by a priori criteria. Instead, popular science texts

themselves orient their readers – encouraging them to become the ‘natural’

9 For a detailed account of how the concept of ‘the masses’ affects approaches to popular culture see Williams 1958: 297-312.


audience for such accounts. Further, the distinction between ‘scientists’ and ‘the

public’ is often established by popular science texts rather than popular science

texts orienting themselves to pre-existing categories. These points are also

discussed in greater detail below (page 34).

Audiences for popular science, then, are much more than passive

recipients of (diluted or modified) knowledge. They play an active role

themselves in making scientific knowledge meaningful. The meaning of scientific

knowledge is not an immanent property of the knowledge itself but emerges

through an interaction between readers, texts and contexts. Knowledge of human

genetics or cosmology, for instance, can be made meaningful in different ways at

different moments. Galileo’s cosmology was perceived as a threat to the authority

of the Vatican, yet a much more comprehensive (and awesome) cosmology is

made silly in the Muller cartoon (Figure 1, page 1).

The final critique of the dominant view of popularisation concerns the

theory of communication that it draws upon. Peter Broks points out that

approaches such as those characterised here as the dominant view (especially

those that adopt a ‘diffusionist’ account) presuppose a ‘behavioural’ model of

communication (Broks 1996: 128). The questions that are brought to bear on

popular science focus on how it modifies ‘science literacy’ or some other

behavioural characteristic of individuals. As we shall see below, the emphasis on

‘media effects’ (how media modify behaviour) has been discredited by recent

work in communication theory and cultural studies. The emphasis in this account

is on how meaning is constructed within an ideological context. The dominant

view, by contrast, adopts a ‘linear’ account of communication in which ‘the

media’ are conceived as conduits for messages. Models of communication

determine, to a large extent, the types of questions we can ask of popular science

and so we turn to these next.

Communication Theory and Popular Science

So central are models of communication theory to our conception of

popular science texts and the public understanding of science, and so rarely are

they discussed explicitly in research, that it is time to ask, why? As Stuart Hall


notes of communication in general, when it comes to research into the public

understanding of science we should be,

...deeply suspicious of and hostile to empirical work that has no ideas

because that just simply means that it does not know the ideas it has.

(Hall 1989: 52)

Much research into the public understanding of science has followed

Harold D. Lasswell’s prescription for communications enquiry. According to

Lasswell, understanding communication means answering the following question:

Who says what in which channel to whom with what effect?

(Lasswell 1948 quoted by Fiske 1982: 30-31)10

However, many conflicting theories of communication emerged in the

twentieth century, and no single model dominates today. Instead, the study of

communication is characterised by a lack of consensus. Communication scholars

even disagree about whether they should strive for consensus or embrace plurality

in their field11. The prescription above – to place emphasis on questions about

questions – thus applies to questions about communication also.

The extent to which studies of popular science and the public

understanding of science form into two camps can be explained in large part by

basic assumptions about ‘communication’. Studies of popular science draw

heavily on theories of communication that are sometimes articulated explicitly but

are often taken for granted. So full of ferment and controversy is the field of

communication that failure to explicitly locate an account of popular science with

respect to a theoretical position often leads to confusion.

Another question that divides communication scholars is whether

communication should be a distinct academic discipline at all. The problem

perceived by many is that the study of ‘messages’ in isolation from the culture

they inhabit is meaningless. This is the view adopted in this account of popular

10 Lasswell’s prescription is also discussed by McQuail and Windahl 1981: 10-11. The source quoted by both commentators is Lasswell, Harold D. 1948, ‘The Structure and Function of Communication in Society’, in Bryson, L. (Ed.) The Communication of Ideas (New York: Institute for Religious and Social Studies) 11 For instance, Bostrom and Donahew 1992 adopt the view that only empirically based behavioural theories should be considered ‘communication theory’ and ‘interpretative’ approaches should be excluded from the field. Griffin 1994 and Fiske 1990 adopt a much more pluralistic approach in their overviews of the subject (see, in particular Fiske 1990: 1-2).


science. Nevertheless, distinct traditions in communication theory run through all

accounts of popularisation. This section, then, is an attempt to orient the following

account of popular science with respect to communication theory in general. It is

(in all senses) a partial account of the field12.

The Communication Divide: Empirical vs. Interpretative

The turbulent field of communication research is frequently divided into

just two competing perspectives. (Articulating one’s own position is that much

easier when it is just a matter of telling ‘us’ from ‘them’.) John Fiske, for instance,

distinguishes the ‘semiotic school’ from the ‘process school’ of communication

studies (1990: 2-5). Em Griffin distinguishes ‘scientific’ from ‘humanistic’

theories (1994: 480-481). In this account, I have already distinguished the

‘dominant view’ of popular science from a ‘critical’ alternative. I now want to go

further and, like Fiske and Griffin, divide the entire field of communication

research into two broad approaches. This will shed light on some of the debates

within public understanding of science research. Like Fiske and Griffin, the

distinction I make serves an analytic purpose but should not be interpreted as

reflecting an actual boundary. Communication studies are too theoretically

sophisticated to allow us to assign them straightforwardly to one category or the

other. In this sense, distinguishing two schools of communication theory is like

distinguishing professional science from popular science (see page 34) –

ultimately arbitrary and never as straightforward as it at first appears. With that

caveat, we can follow Fiske and call one camp the ‘semiotic school’ and the other

the ‘process school’. The theoretical landscape in which communication studies

are located is discussed here in some detail because it is crucial to understanding

the tensions between and within accounts of popular science.

The process school (which roughly corresponds to Griffin’s ‘scientific’

category) conceives of communication as ‘transmission’ or ‘transport’. It draws

12 For alternative overviews of the field, see Cobley (Ed.) 1996 The Communication Theory Reader; Dervin et al 1989 Rethinking Communication; Griffin 1994 A First Look at Communication Theory; Fiske 1990 An Introduction to Communication Studies. A more compact account of the development of communication theory is available in Curran 1996 Rethinking Mass Communications. For the relation of communication theory to popular science in particular see Lewenstein 1995 From Fax to Facts: Communication in the Cold Fusion Saga; Gregory and Miller 1998 Science in Public (particularly pp 86-88); Collier and Toomey (Eds.) 1997 Scientific and Technical Communication: Theory, Practice, and Policy; etc.


heavily on models of communication developed in electrical engineering for

signal processing. It is also informed by behavioural research and the

methodologies it adopts tend to be empirical. The research agenda of the process

school includes tracking the ‘effect’ of the communication process and

understanding how ‘communication breakdown’ occurs.

The semiotic school (which roughly corresponds to Griffin’s ‘humanistic’

category) tends to view communication as an interaction between a ‘reader’ and a

‘text’ rather than as the transmission of information. It is concerned with the role

of the context in how messages are understood. Rather than emerging from signal

processing and behavioural science, the semiotic school has grown out of a critical

tradition that is distrustful of positivistic approaches13. It generally employs

‘interpretative’ methodologies rather than empirical ones. The semiotic school is

(generally) more concerned with how communication acts to establish consensus

than with measuring the ‘effects’ or ‘effectiveness’ of media messages.

Perhaps the most important distinction between the two is how ‘meaning’

is conceived from each perspective. To the extent that the process school deals

with meaning at all, it tends to be equated with ‘content’. Meaning is placed in a

message at one end, and extracted (or not) at the other. The semiotic school, on

the other hand conceives of meaning as arising from interpretation rather than

being an independent property of the text. By interacting with a text, a reader

actively constructs its meaning rather than simply discovering it. (Though

semiotics also concentrates on how meaning is generated by textual processes.

The important distinction is the idea of meaning as an interaction rather than as a

transportable property). Finer distinctions emerge in the following discussion and

are summarised in Table 1 (page 33). For research in science communication (as

in communication research in general) the distinction represents not only

methodological differences but cultural ones too. Thus, we should address the

origin of the bifurcation to understand it.

The theoretical approaches to communication with (by far) the longest

pedigree are rhetoric and philology (rhetoric being the study of how language is

used to persuade and philology being the study of the structure, historical

13 Although semiotics itself was heralded as a ‘scientific’ approach to linguistics, it has since been reformulated and its scientistic pretensions played down. See page 23.


development, and relationships of a language or languages)14. In the first decades

of the twentieth century, a radically different approach to communication

emerged in the form of semiotics15. In its original, 1916 formulation

(Saussure 1983), semiotics was perceived as a scientific approach to linguistics.

The tendency noted earlier to isolate the study of communication from the

culture in which it is embedded thus applies to this early formulation. However,

semiotics has since been formulated in ways that locate the field firmly in the

cultural realm.

Media Effects and Linear Models of Communication (The Process School)

The bifurcation between ‘empirical’ and ‘interpretative’ approaches to

communication theory that has informed debate around popular science can be

traced to parallel developments in the 1930s and 1940s. On one hand, there was

the growth of ‘media effects’ research and, on the other, a different emphasis

placed on media by ‘post-Marxist’ thinkers.

In response to growing concern about new media such as film and radio,

emphasis was placed on the social study of ‘media effects’. In the 1930s, The

Payne Fund (a charitable trust) funded the first significant studies into media

effects (Griffin 1994: 22). These involved correlating the media use of children

with a series of variables that might indicate ‘ill effects’ it may be having, such as

(amongst others) lower exam grades, negative emotions, loss of prosocial

attitudes, and delinquency. Such research has generally been inconclusive16.

However, the behavioural science approach to communication has had

considerable influence.

The study of media effects gained even more impetus with the introduction

of models of communication emerging in engineering. Principal amongst these is

14 For an account of the role of rhetoric in the history of science, see Golinski 1989. A more broad-ranging review of rhetoric in science studies can be found in Ashmore et al 1994. See also Offer Gal 1994 for a detailed account of the relation of rhetoric and philosophy of science with reference to Galileo’s ‘paradigm-shifting’ reformulation of Aristotelian mechanics. For an account of the role of rhetoric in popular science, see Gross 1994. Philology has not made much impact on contemporary science studies. According to F. Tuohy, “The professor of Comparative Philology thinks that no one should learn English without having mastered Anglo-Saxon” (New Shorter Oxford English Dictionary, 1997). 15 For accounts of semiotics, see Hawkes 1988 and Guiraud 1975. 16 For a comprehensive collection of critiques of media effects research see Martin Barker and Julian Petley (Eds.) 1997 Ill Effects: The Media/Violence Debate. See, in particular, Ian Vine, The Dangerous Psycho-Logic of Media ‘Effects’, pp 125-146; and Willard D. Rowland, Jr, Television Violence Redux: The Continuing Mythology of Effects, pp 102-124.


the work of Claude Shannon who, while working for the Bell Laboratories in the

1940s, developed a theory of information and communication that has been the

foundation of the ‘information revolution’17. In 1949 Warren Weaver popularised

Shannon’s work amongst communication theorists (Shannon and Weaver 1949)

and the model has dominated the field ever since. (The quotation from Lasswell

with which this discussion of communication theory began – page 20 – is a

verbalisation of Shannon’s model.) It has the advantage of being intuitive at the

same time as providing a clear framework for detailed quantitative analysis.

As we have seen, the process school sees communication as the

transmission of messages and communication as a process by which one person

affects the state of mind of another by transmitting information to them. It is

concerned with the mechanics of the process – the encoding of the information by

the transmitter, what happens to it during the transmission (whether it gets

corrupted or distorted) and then how the receiver decodes it.

Figure 2: The principal components of Shannon and Weaver’s model of communication (after Shannon and Weaver 1998/1949: 7). In the case of telephony, the channel would be the wire and the exchanges, the signal would be the alternating current it carries, and the transmitter and receiver would be the telephone handsets. Noise would include crackling from the wire and delays on the line in long-distance calls. In conversation, a mouth would be the transmitter, air would be the channel, sound waves would be the signal, and an ear would be the receiver. Noise would include any distraction one might experience whilst listening to the speaker.

17 So important have Claude Shannon’s theories been to the development of information technology that it is frequently suggested that, despite his relative anonymity today, he will be remembered as one of the most influential figures of the twentieth century (more enduringly than politicians). For instance, Charles A. Gimon writes,

Names like Einstein, Heisenberg, or even Kurt Goedel [are] better known among the people

who have led twentieth-century science into the limits of knowledge, but Shannon's

information theory makes him at least as important as they were, if not as famous. Yet

when he started out, his simple goal was just to find a way to clear up noisy telephone

connections. (Gimon 2000).

[Figure 2 diagram: information source → transmitter → signal → (noise source) → received signal → receiver → destination; a message enters at the information source and is delivered to the destination]


In this model, media are conceived as ‘conduits’ for messages. The

meaning of messages is understood to reside within them – imbued by the

information source. The meaning of a message may be only partially understood

by the receiver if the message did not contain enough ‘redundancy’ to overcome

the problems caused by ‘noise’ in the transmission process.
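The mechanics of this conduit view can be made concrete with a small illustrative sketch (a hypothetical example, not part of the thesis itself): an assumed triple-repetition code shows how building ‘redundancy’ into a message lets a receiver recover it despite ‘noise’ in the channel.

```python
import random

# Illustrative sketch of the Shannon-Weaver 'conduit' model: a message is
# encoded with redundancy by the transmitter, corrupted by a noise source
# in the channel, and reconstructed by the receiver.

def encode(bits):
    """Transmitter: add redundancy by repeating each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def transmit(signal, noise_prob, rng):
    """Channel: the 'noise source' flips each bit with some probability."""
    return [b ^ 1 if rng.random() < noise_prob else b for b in signal]

def decode(received):
    """Receiver: a majority vote over each triple recovers the message."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

rng = random.Random(42)               # fixed seed so the run is repeatable
message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode(transmit(encode(message), 0.1, rng))
# Although noise corrupts several of the 24 transmitted bits, the redundancy
# allows the receiver to recover the original message intact.
```

Note that the sketch treats meaning exactly as the process school does: as ‘content’ placed in the message at one end and extracted at the other, with success or failure a purely quantitative matter.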

From this basis, Shannon and Weaver identify three levels of analysis by

which communication can be studied. Firstly, there are ‘technical problems’: How

accurately can the symbols of communication be transmitted? Secondly, there are

‘semantic problems’: How precisely do the transmitted symbols convey the

desired meaning? Thirdly there are ‘effectiveness problems’: How effectively

does the received meaning affect conduct in the desired way? (Shannon and

Weaver 1998: 4-6)18.

‘Critical’ Communication Theory (The Semiotic School)

As discussed above, semiotics was originally conceived as a ‘scientific’

approach to communication but has subsequently proved its value to what Griffin

dubs ‘interpretative’ approaches. Thus, semiotics is today located firmly in the

cultural realm. Two responses to Saussure’s formulation of the subject that have

been particularly influential to the present study are those of Valentin Vološinov

(1973/1929) and Roland Barthes (1973a, 1973b). The main difference between

each of these approaches and that of Saussure is how they conceive the purpose of

language. For Saussure the primary purpose of language is to describe the world

and attach values to its parts. For both Vološinov and Barthes on the other hand,

language is first and foremost a means by which social relations are established.

This results in conceptions of communication in which the participants and their

social context are essential components. Vološinov’s work in particular is

characterised by open hostility to Saussure’s structuralism, which he sees as a

mathematical form of scholasticism.

What interests the mathematically minded rationalists is not the

relationship of the sign to the actual reality it reflects or to the individual

who is its originator, but the relationship of sign to sign within a closed

18 For a more detailed summary of Shannon and Weaver’s model see Fiske 1990: 6-23. Daniel Chandler (1994/2000) also provides an insightful (but unsympathetic) appraisal of its influence on communications research.


system already accepted and authorised. In other words, they are

interested only in the inner logic of the system of signs itself, taken, as in

algebra, completely independently of the meanings that give signs their

content. (Vološinov 1973: 2)19

Vološinov’s conception of language (principally the emphasis on dialogue

and the ideological context of utterances) is particularly appropriate to a model of

popularisation and popular science developed in this thesis. Chapters 4 and 5

address the role played by popular science in establishing and modifying the

boundaries of science. From this perspective, the relations between individual

texts are as significant as the content of the texts themselves. Texts speak to each

other through their audience. Contexts for popular science are conceived as

forums for the process of negotiating what is to count as science and what is to be

excluded as well as the responsibilities and privileges of a ‘scientist’. The process

of negotiation between texts that we see going on in popular science contexts is

the reason popular science lends itself to the type of analysis pioneered by

Vološinov. For Vološinov, to understand is to begin to respond

(Barker 1989: 263).

To understand another person’s utterance means to orient oneself with

respect to it, to find the proper place for it in the corresponding context.

For each word in the utterance that we are in the process of

understanding we, as it were, lay down a set of our own answering

words. (Vološinov 1973: 102, quoted by Barker 1989: 263)

This conception of language and understanding provides considerable

insight into the process by which popular science is understood by various

audiences as well as providing insight into the structure of texts themselves.

Where there is conflict between approaches to popular science, this is

often caused by the incompatibility between the belief that communication can be

studied in isolation from its cultural context, and the opposing view that the

cultural context of communication is essential to understanding it. Vološinov

would be as intolerant of the deficit model of the public understanding of science

as he is of structuralism. This is because the deficit model conceives the subject of

19 The charge of mathematical scholasticism applies even more to C. S. Peirce, the other pioneer of semiotics, than it does to Saussure.


scientific discourse as independent of the actual use to which communication in

science is put. In the deficit model, emphasis is put on (in Vološinov’s words) the

“inner logic of the system of signs itself” (the ‘facts’ of science and the

relationship between them) rather than on the meanings those facts take on in

different contexts.

In addition to the developments in behavioural and mathematical

approaches to communication, the 1940s also saw the consolidation of the

intellectual tradition that informs modern ‘interpretative’ approaches. The

questions that prompted these rival approaches to communication are generally

more political in nature, placing emphasis on the role media play in class conflict.

In the 1930s and 1940s, a conception of mass or popular culture as a

‘culture industry’ emerged with the Frankfurt School’s (Marxist) critique of

Fascism in Europe and mass culture in America. Theodor Adorno and Max

Horkheimer (1986) coined the phrase ‘the culture industry’ to refer to the

operation of the media. The School generally conceived mass culture in negative

terms20. However, this has not been the case with all approaches to

communication that have evolved from the agenda set by the critical school21.

One of the main distinctions between such approaches to communication and the

process school approaches relates to the assumed purpose of communication. For

the semiotic school, communication is a means to establish and contest

conceptions of social reality. (This is rather like the distinction between Vološinov

and Saussure’s conceptions of the purpose of language.) Thus, meaning is not

transmitted unproblematically as a ‘property’ of a message but is constructed by a

(situated) reader within an ideological context. Communication is not just how

individuals pass information between each other; it is also how groups in society

reach consensus about the relations between themselves. Thus, class, race, gender

and a whole set of other interests and affiliations play a role in how messages are

made meaningful. Even apparently straightforward communication (such as

accounts of scientific research) plays its role in the continual struggle over

meaning and its relation to ‘social reality’.

20 See, for instance, Marcuse 1972, One Dimensional Man (particularly pp 16-28). 21 For accounts of the development of critical approaches to communication, see note 12, page 21. In particular, see Curran 1996, Rethinking Mass Communication.


Communication Theory and Approaches to Popular Science

Discussions of the public understanding of science and the process of

popularisation overwhelmingly adopt a ‘process school’ approach. That is, they

invoke ‘linear’ (transmission) models of communication such as Shannon and

Weaver’s22. (Sometimes a model of communication is articulated explicitly in an

account of the public understanding of science but more usually, a linear model is

adopted implicitly.)

A process school model of communication renders popular science as little

more than a dilute version of professional science. The main question to be asked

of popular science in this case is: how has the popularisation process distorted

‘proper’ scientific knowledge? (How has the message become scrambled in

transmission?). One effect of adopting a transmission model of the popularisation

process is that it predisposes us to thinking of the aims of popularisation in terms

of pedagogy. While it is true that an important role of popular science is teaching

lay-people science, this is by no means the only, or even the primary, role it takes

on. For instance, we shall see how popular science acts as a forum for negotiating

the boundaries of scientists’ responsibility and authority and how scientific

images are used in public relations. The dominant view of popularisation provides

no insight into these roles.

There are various reasons for the dominance of linear models but perhaps

the most important one is simply that they are models that scientifically trained

commentators are comfortable with. This is because they offer ways to isolate

independent variables in the communication process and analyse their significance

numerically (allowing well developed techniques from natural science to be

employed). Shannon and Weaver’s model is intuitive – the underlying metaphor

of transmission concurs with common-sense notions of communication. It is also

quantitative and offers apparently objective criteria with which to assess what is

happening in the communication process.

In contrast, the ‘critical’ tradition (at least in the form of the Frankfurt

School’s analysis) is characterised by open hostility to science and scientism.

Why would researchers interested in the public understanding of science want to

22 A comparison of many models of the communication process (most of which are direct descendents of Shannon’s model) can be found in McQuail and Windahl 1981.


give up powerful, intuitive, objective methodologies in favour of counterintuitive,

‘interpretative’ (hermeneutic) ones? To answer this, we examine the motivation

for addressing issues in science communication – the ‘questions about questions’

raised earlier (page 7).

For researchers interested in measuring the effect of particular texts on

particular audiences, linear models of communication provide an appropriate

framework for pursuing such issues. As the ‘effect’ of science texts is often

assumed to be positive, many researchers would place a slightly different

emphasis on the question by thinking instead of the ‘effectiveness’ of a

communication strategy, but linear models are still appropriate. A straightforward,

prescriptive, linear model is, apparently, what Walter Bodmer and Janice Wilkins

want from research into science communication,

Those of us who are involved actively in trying to improve the public

understanding of science need some research that points us in the right

directions. We need to know the most effective methods to use to get

messages across to a wide variety of target audiences. (Bodmer and

Wilkins 1992: 7)

An example of a problem that might lend itself to a linear model of

communication would be a comparison of the pedagogic strategies in two

textbooks that relies on examination marks achieved by students to provide a

measure of effectiveness. Shannon and Weaver’s engineering metaphor and

Shannon’s concept of information would then provide an effective basis for

modelling the process. Daniel Chandler, however, is less sympathetic towards

such approaches,

In short, the transmissive model is of little direct value to social science

research into human communication, and its endurance in popular

discussion is a real liability. Its reductive influence has implications not

only for the commonsense understanding of communication in general,

but also for specific forms of communication such as speaking and

listening, writing and reading, watching television and so on. In

education, it represents a similarly transmissive model of teaching and

learning. And in perception in general, it reflects the naïve ‘realist’ notion

that meanings exist in the world awaiting only decoding by the passive


spectator. In all these contexts, such a model underestimates the

creativity of the act of interpretation. (Chandler 1994: Conclusion)

There are many more questions raised by popular science texts than the

pedagogical ones mentioned above but the dominance of media effects research

means that alternatives have a problem establishing their relevance. Thus,

questions about the public understanding of science that cannot be phrased in

terms of media effects have to strive harder to establish their legitimacy. Since

media messages evidently do influence audiences, it seems natural and

unproblematic to ask questions about media ‘effects’. Looking for alternative

approaches to media research not only appears perverse to many in the public

understanding of science community, it also raises basic philosophical

problems23. But although these issues (such as questions about causality) only

become apparent when we look for alternatives, this does not mean that effects

research itself is immune from them. Indeed, the opposite is the case.

The questions raised in the following chapters are not concerned with how

much information ‘gets across’ so much as with how the same information can

take on different meanings in different contexts. The questions raised in chapter 4

are concerned with the social function of popular science texts and how texts

interact with each other rather than with the actual information that is

communicated. A linear/behavioural model of communication is an entirely

inappropriate framework for addressing such questions. Peter Broks highlights

some of the consequences for ‘communication’ of addressing cultural questions,

Cultural analysis... brings us to a different understanding of what we

mean by ‘popular science’. We should no longer see the media as a

means of communication with popular science as its end product, but

rather as a system of representation encompassing what [is] both popular

and scientific. It becomes a forum for negotiations, and not a conduit for

messages. (Broks 1996: 131)

Christopher Dornan picks up the question of how critical traditions

influence investigations of science in the media in Science and Scientism in the

Media (1989). Dornan makes a distinction that is similar to Fiske’s in an account

23 Vine 1997 provides an interesting discussion of some of the philosophical questions raised by effects research.


of what he calls a ‘critical’ approach to communication in science. This approach

is exemplified by specific reference to two early examples: Garner &

Young (1981) and Dunn (1979). Dornan argues that most commentary is

preoccupied by the media’s inadequacy in communicating technical information.

Science is not perceived as a problematic and the communication process is

assumed to be rigidly linear.

Scientists are the source of information, the media are the conduit, and

the public is the ultimate destination. The goal is to minimise media

interference so as to transmit as much information as possible with the

maximum fidelity. The bulk of work on science and the media is

therefore conducted firmly within the mainstream tradition of North

American communication inquiry. (Dornan 1989: 102)

Failure to address questions of motivation (for instance questions about the

motivation for ‘science literacy’: public relations or democratic leverage?) has led

to exactly the same turmoil in public understanding of science research as the

wider field of communication theory has lived with for some time. To a certain

extent, the field is divided similarly into a behaviourist school on one side and a

critical school on the other. Conferences on the subject split into two camps until

one or other side is ejected permanently24. For instance, the Science

Communicators’ Forums organised by the British Association initially attracted a

mixture of theoreticians and practitioners actively involved in public

understanding activities such as museum initiatives. The non-practitioners tended

to adopt a ‘critical’ approach to communication in science and the practitioners

tended to adhere to the ‘dominant view’ of popularisation. Whilst the dialogue

between the two groups was productive in the early meetings, the relationship

became antagonistic as time went on and positions became entrenched25. The

series continued but the theoretical framework in which science communication

24 Some notable examples of public understanding of science conferences that exhibited ‘two camps’ (but nevertheless proved to be constructive and enlightening meetings) are Education for Scientific Literacy (Science Museum, London, November 1994); Building Bridges to Science (COPUS, Royal College of Surgeons, Edinburgh, April 1995); Science Communicators’ Forum (BA Annual Festival, Newcastle, September 1995); European Science Communication Teachers Conference (Wellcome Centre, London, March 1996); and Science Communicators’ Forum (BA Annual Festival, Birmingham, September 1996). 25 For a first-hand account of the Newcastle meeting (September 1995) see Richard Bond 1995.

was discussed remained rigidly behaviourist. ‘Critical’ scholars gradually ceased attending.

The situation in communication research in general is not dissimilar to the

special case of public understanding of science research. A special issue of The

Journal of Communication26 reflected the “ferment in the field” in the 1980s. In

May 1985, the International Communication Association’s annual conference27

was dedicated to the thorny question of the disciplinary identity of communication

studies. From this meeting a two-volume publication emerged with the aim of

“encouraging the dialogues to continue” (Dervin, et al 1989: 10). Whilst the

“paradigm debates” of the 1980s did not achieve coherence in communication

studies, they have resulted in a clearer distinction between ‘scientific’ and

‘humanistic’ approaches – the former tending to adopt more quantitative empirical

methodologies and the latter more interpretative ones. In addition, there has been

a bifurcation of attitudes towards the lack of consensus: on the one hand, there are communication scholars who celebrate pluralism within the field; on the other, there are those who are intolerant of the situation28.

The principal differences between the two broad conceptions of

communication found in research into public understanding of science and

popular science texts are summarised in Table 1.

26 ‘Ferment in the Field’, Journal of Communication Vol. 33, No. 3, June 1983 27 Beyond Polemics: Paradigm Dialogues, ICA annual conference, Honolulu, Hawaii; May 1985 28 Em Griffin (1994: 481-486) further divides each of these positions but the most significant choice is this one between pluralism and consensus. See also note 11, page 20 (Bostrom and Donahew’s intolerance of interpretative accounts).

Process School                                  Semiotic School

Science as a body of knowledge                  Science as a social process

Communication is the transmission of            Communication is the production and
messages                                        exchange of meanings

Meaning is a property of the message            Message acquires meaning by
                                                interaction with the receiver

The media are conduits for messages             The media are forums for negotiation

Concerned with:                                 Concerned with:
  Effectiveness                                   Signification
  Noise / Corruption                              Interpretation
  Efficiency                                      Negotiation
  Accuracy                                        Ideology

Table 1 Summary of Two Conceptions of Communication in Science

A ‘Critical’ Account of Popular Science

The following is a summary of the principal distinctions between the

approach adopted in this account and the dominant approach to popularisation:

• This account explores social explanations for the constitution of the

categories ‘scientist’ and ‘science’ rather than assuming the existence of

an a priori metaphysical explanation.

• Rather than defining ‘popularisation’ in terms of pedagogy, this account

addresses the various uses of popularisation explicitly.

• This account rejects ‘diffusionist accounts’ of popular science and ‘linear’

models of communication. Instead, it sees the meaning of popular science

texts emerging from an interaction between readers, texts and contexts.

• Greater emphasis is placed on the effect of the context for popular science

than on the effect of the content of individual texts.

• Rather than concentrating on the fidelity with which facts and concepts are

‘transmitted’ by individual texts, this account explores how popular

science texts interact with each other. That is, this account does not dwell on how the popularisation process ‘distorts’ the facts but instead studies the debates in which popular science texts intervene.

Contexts for Science: ‘Streams’, ‘Webs’ and ‘Forums’ (Alternatives to the Binary Division Between Popular and Professional Contexts for Science)

The question ‘what is popular science?’ is answered in this account but

provisionally and equivocally. On each occasion the question is addressed, the

answer depends on why the question has been asked. Rather than (arbitrarily)

identifying some criterion or other that would facilitate a fixed definition, this

account examines how the term is applied in a variety of contexts. The fact that

people disagree about the criteria by which popular science is distinguished from

official science is more interesting than any particular definition. However, two

distinct but complementary conceptions of ‘the popular’ as it relates to science are

employed here. These are the idea of a spectrum of contexts for scientific

discourse and the idea of a boundary between what counts as science and what is

not science. Each has a place in understanding the functions of popularisation and

the significance of the transformation involved when science is popularised.

The first problem that faces us in distinguishing genuine science from

popular science is that, given the variety of contexts in which scientific knowledge

is manifest, the precise location of the boundary between the two is not easy to

identify (Hilgartner 1990: 524-529). Most people would agree that a television

news report is ‘popular science’ whilst a paper in a scientific journal is ‘genuine

science’ but the decision to locate, say, journal editorials or grant proposals in one

category or the other seems more arbitrary. Rather than two clearly defined

categories, we find a spectrum between two extremes. We can characterise this spectrum with reference to the audience involved in each context. At one end,

scientific discourse is directed at very narrow, specialised audiences and at the

other, it is directed at much wider audiences. Thus at one extreme we have

conversations between colleagues in the laboratory and at the other, mass media

such as television or tabloid newspapers. Hilgartner adopts a stream metaphor to

describe the spectrum of contexts for science (see Figure 3) and discusses various

strategies adopted elsewhere for dividing the spectrum into binary categories

(1990: 524-528).

[Figure 3 depicts a ‘stream’ of contexts running from upstream to downstream, including lab shop talk, technical seminars, meetings, grant proposals, scientific papers, literature reviews, journal editorials, policy reports, research news, textbooks, books and the mass media.]

Figure 3 Contexts in which scientific knowledge is communicated (after Hilgartner 1990: 528)

The conclusion that Hilgartner reaches is that popularisation is a matter of

degree and the ambiguity about where the boundary lies leads to flexibility over

what to label ‘popularisation’ (1990: 528). This flexibility leads on to the second

set of problems involved in distinguishing genuine science from popular science:

What is the exact nature of the boundary between them? What is the significance

of crossing the boundary? How are authority and responsibility distributed between

the realm of genuine science and the realm of popular science?

This account of popular science looks at the realm of the popular from

each perspective: For some questions, the emphasis is placed on the effect of

changing a text’s context – that is, on the significance of moving scientific texts

upstream or downstream. For other questions the emphasis is placed firmly on the

boundary between science and non-science that emerges from the popularisation

process: how the boundary emerges, who it empowers, etc. Rather than

identifying the boundary between popular and professional, our task is to

recognise the interests at work and study how the boundaries emerge through

discourse. The various boundaries that emerge give rise to another model of

popular science considered here: that of popularisations as forums for debate. This

is addressed in the following section and in chapters 4 and 5. We shall see how

both scientists and non-scientists use popularisations to shift boundaries and to modify how they operate, and how established boundaries themselves act as creative ‘pivots’ in popular science.

Before moving on to consider popularisations as forums for debate, we

consider an important problem with the stream metaphor and an alternative model


that addresses this. The problem lies with the linearity (or otherwise) of the stream

metaphor. As Jane Gregory points out,

If a microbiologist reads a newspaper feature article about food-

poisoning, it could be argued that the popular article has become a

technical communication. If a sick layperson were to read up on the latest

research into her condition, then those academic papers become

popularisations. (Gregory 1998: 35)

Thus, the ‘spectrum’ of contexts can be misleading as the relation between

them is not fixed. Another way to put this is to note that the context for science

communication comprises not simply the genre it occupies or the medium it employs but also the particular concerns of the audience.

Bruce Lewenstein found Hilgartner’s stream metaphor inadequate to

explain the role played by the mass media in an account of the ‘cold fusion’

episode (Lewenstein 1995). In March 1989, B. Stanley Pons and Martin

Fleischmann announced at the University of Utah that they had found a way to

produce nuclear fusion at room temperature (normally this requires temperatures

like those found in the centre of the sun or those produced by an atom bomb). This

led to several months of intense speculation and commentary as various teams

around the world tried to repeat the experiments. More intense controversy

developed in the years that followed. The consensus today is that Pons and

Fleischmann did not observe cold fusion.

In the confusion that followed Pons and Fleischmann’s announcement, all

sources of information played a role. Any hierarchy in contexts broke down as, for

instance, some teams relied on television reports to recreate experiments

(Lewenstein 1995: 412). Lewenstein concludes that linear models (which for

Lewenstein include Hilgartner’s) fail to account for the complexity of scientific

communication. Rather than a spectrum, he suggests that we think of a ‘web’ of

contexts for scientific communication (see Figure 4). In the case of cold fusion

science, the exceptional complexity in the relationships between different media

led to instability. In addition, mass media moved towards a central position in the

web.

[Figure 4 depicts a ‘web’ of interconnected contexts: grant proposals, journals, textbooks, mass media, policy reports, meetings, e-mail, lab shop-talk, technical news, preprints and talks.]

Figure 4 Bruce Lewenstein’s ‘web’ model of science communication contexts. (After Lewenstein 1995: 426)

Eventually, Lewenstein is in a position to address the wider question of

how to understand the role of mass media in science communication generally.

His answer is,

Don’t try – or at least, don’t try without also examining the full

communication context at work. (Lewenstein 1995: 427)

Whilst this account takes fully on board the lessons Lewenstein draws from his analysis of cold fusion, it nevertheless applies Hilgartner’s stream metaphor. The metaphor is too useful to abandon altogether and the two models are

‘orthogonal’ rather than mutually exclusive – they tell us different things about

the context of scientific knowledge. The stream metaphor has the advantage of

organising the contexts for science texts in terms of audience (see page 34). It also

corresponds to the notion of ‘preferred reading’ developed below (page 39).

However, the following provisos apply:

• Texts can move upstream or downstream without modification. (This is

especially true of pictures, as we shall see in chapter 2.)

• The location of a text in the spectrum does not necessarily reflect its

importance to scientific endeavour.

• The location of a text in the spectrum does not necessarily tell us how

closely it reflects scientists’ actual thinking on the matter in hand.


As we have already seen, models of communication are crucial to debates

around the public understanding of science. Most of the disputes in the field can

be traced back to basic assumptions about communication.

Negotiation in Popular Science

In much of what follows, the role of the context in making science texts meaningful is understood by thinking of the texts themselves as mediating a process

can see science texts as interventions (or ironic asides) in an ongoing debate. They

speak to, and answer each other. This is why Vološinov’s conception of language

as essentially dialogical is useful to an understanding of popular science.

There are several reasons why we generally do not think of popular

science as a process of negotiation (why its dialogical nature is disguised):

• It is rare for authors to explicitly address a specific proposal (and so make

it clear that they are negotiating).

• Popular science texts are generally pedagogical rather than polemical in

their tone.

• The ostensible intention of the author is rarely to take part in a debate

(rather it is education or entertainment)

• The ostensible subject of popular science texts is usually nature itself

rather than the social relations of science.

The previous discussion of communication theory and popular science

distinguished ‘the text’ from the ‘available readings’ of it. That is, emphasis was

placed on the way meaning is actively constructed by the reader. Meaning, we

have seen, is not simply ‘distilled’ from the text nor is it ‘carried’ to the reader by

the text. On the contrary, there is no necessary correspondence between the

intentions of an author and how a reader makes sense of a text. Stuart Hall makes

this point in terms of ‘encoding’ and ‘decoding’ noting that there is no necessary

correspondence between them. “[T]he former can attempt to ‘pre-fer’ but cannot

prescribe or guarantee the latter, which has its own conditions of existence.”

(Hall 1980: 135).

This view of meaning in popular science was contrasted with a view of

popular science that placed emphasis on questions of ‘effectiveness’ and how the


popularisation process ‘distorts’ the science itself and that speaks of

‘communication breakdown’ between scientists and the public. The latter view

(the process school approach) has one significant advantage over the former view

(the semiotic school approach): the way it can account for misunderstandings and

mistakes. So far, we have avoided this problem by concentrating on questions for

which ‘misunderstanding’ is not an appropriate concept to adopt. But how does

our model of communication address the problem faced by popularisers who find

their message ‘failing to get across’?

The first step we can take is to distinguish two (often conflated) ways in

which such a phrase is used. On one hand, ‘failing to get something across’ can

refer to misunderstandings of a literal kind – the audience does not understand the

terms used or the concepts are too alien or too difficult. More often, though, what is meant is that the audience has failed to take the meaning that the populariser

intended. That is, the audience is not operating in the ‘dominant’ or ‘preferred’

code (Hall 1980: 135).

In Encoding/Decoding (1980) Stuart Hall disrupts the symmetry of

transmission models of the communication process by noting that there is no

necessary correspondence between the encoding of a message by a ‘transmitter’

and its ‘decoding’ by a ‘receiver’. That is, the way a message is encoded cannot

guarantee or prescribe how it will be decoded. It can, however, have the effect of

delimiting and constraining the parameters by which decoding will operate,

If there were no limits, audiences could simply read whatever they liked

into any message. No doubt some total misunderstandings of this kind do

exist. But the vast range must contain some degree of reciprocity between

encoding and decoding moments, otherwise we could not speak of an

effective communicative exchange at all. Nevertheless, this

‘correspondence’ is not given but constructed. It is not ‘natural’ but the

product of an articulation between two distinct moments.

(Hall 1980: 135-136)

The situation is even more complex when scientific discourse is in question, because of the ‘naturalness’ of the subject. But Hall addresses this

issue also,

Reality exists outside language, but it is constantly mediated by and

through language: and what we can know and say has to be produced in


and through discourse. Discursive ‘knowledge’ is the product not of the

transparent representation of the ‘real’ in language but of the articulation

of language on real relations and conditions. Thus there is no intelligible

discourse without the operation of a code… There is no degree zero in

language. Naturalism and ‘realism’ – the apparent fidelity of the

representation to the thing or concept represented – is the result, the

effect, of a certain specific articulation of language on the ‘real’. It is the

result of a discursive practice. (Hall 1980: 132)

Having disrupted the symmetry between encoding and decoding, Hall

identifies three positions from which decodings can be constructed. One possible

position to adopt is that of the ‘dominant-hegemonic position’ – the reader

decodes the message with reference to the code with which it was encoded. That

is, the reader is ‘operating inside the dominant code’.

The definition of a hegemonic viewpoint is (a) that it defines within its

terms the mental horizon, the universe of possible meanings, of a whole

sector of relations in a society or culture; and (b) that it carries with it the

stamp of legitimacy – it appears coterminous with what is ‘natural’,

‘inevitable’, ‘taken for granted’ about the social order. (Hall 1980: 137)

A radically alternative position to adopt is that of an ‘oppositional code’.

In this case, the reader understands both the literal and connotative inflection

within a discourse but chooses to decode it in a contrary way,

He/she detotalises the message in the preferred code in order to retotalise

the message within some alternative framework of reference. This is the

case of the [television] viewer who listens to a debate on the need to limit

wages but ‘reads’ every mention of the ‘national interest’ as ‘class

interest’. (Hall 1980: 138)

In between these two positions is one that Hall calls the ‘negotiated code’,

which is altogether subtler. It involves a mixture of both positions: granting a

privileged position to the hegemonic viewpoint (basically accepting its

legitimacy) whilst retaining the right to modify its application in specific

circumstances according to ‘local conditions’. Essentially, a ‘negotiated code’

according to Hall is one that, at a situated (local) level, “makes its own ground

rules” or “operates with exceptions to the rule” (Hall 1980: 137).


The following account of popular science concentrates on how the

meaning of popular science texts emerges from a process of negotiation between

an ‘interested’ reader and the ‘preferred reading’ offered by the text. It also

considers the way popular science offers a forum for negotiating the ‘boundaries

of science’ – the limits of scientists’ authority and responsibility and who can

legitimately claim that authority. Concentrating on negotiation involves a

fundamental shift in the way we consider texts – a shift that places emphasis on

the context. It also requires a broader understanding of ‘negotiation’ than we find

in everyday usage. Usually, negotiation implies a specialised activity that occurs

at clearly identified places and times: it is how managers and unions settle

disputes, how business people draw up contracts, what diplomats do, etc. In this

account, the term is applied not only to clearly identified negotiations like these

but also to any place where the essential features of negotiation apply.

Negotiation need not involve one of the specialised activities normally

associated with the term such as the role played by advocates, diplomats or

politicians. The term can be applied to any activity that involves interests and

proposals and counter-proposals. This includes producing popular science texts.

These explicitly or implicitly include proposals about the constitution of science,

the limits of scientists’ authority, the relative importance of science compared

with other entities, etc. There are often specific, material and personal interests at

stake and so proposals in popular science texts result in counter-proposals in other

popular science texts. For instance, most discussions of the big bang theory

implicitly include the notion that pronouncements on the origin of the universe are

the sole preserve of physicists. This could have serious implications for clerics,

who have an interest in questions about authority in such matters. People with

religious affiliations produce texts that specifically address the explicit and

implicit proposals in accounts of the big bang. These counter-proposals are in turn

addressed in subsequent accounts (though usually they do not make reference to

any specific intervention)29.

29 See for instance Weinberg 1993: Chap. 11; Davies 1984 and 1993; Carl Sagan in his introduction to A Brief History of Time speaks of there being, “nothing for a Creator to do” (Hawking 1988: x). Hawking himself claims that to discover a complete theory of physics, “would be the ultimate triumph of human reason – for then we would know the mind of God.” (1988: 175).


To speak of these interventions as counter-proposals in a process of

negotiation is not to argue that they serve only to promote the interests of

scientists. Rather, it is to argue that we must understand them in two ways: by concentrating either on the individual text itself or on its context. The

proposals and counter-proposals in popular science are oriented with respect to the

state of the negotiation – to what is currently taken for granted – just as diplomats

or trade-unionists orient their interventions. Put another way, the current state of

the negotiation shapes the proposals themselves. Although a claim within a text

may be autonomous and self-consistent, we cannot fully account for it without

reference to its wider context – that is, without reference to its role in a

negotiation concerning material or other interests.

In the case of cosmology, the negotiation eventually did become like a

traditional negotiation with interested parties meeting face to face. This is Stephen

Hawking’s version of events,

The Catholic Church had made a bad mistake with Galileo when it tried

to lay down the law on a question of science, declaring that the sun went

round the earth. Now, centuries later, it had decided to invite a number of

experts to advise it on cosmology. At the end of the conference the

participants were granted an audience with the Pope. He told us that it

was all right to study the evolution of the universe after the big bang, but

we should not inquire into the big bang itself because that was the

moment of Creation and therefore the work of God.

(Hawking 1988: 116)

This, of course, is an isolated incident. In general, negotiation over issues

such as ‘who owns the Creation’ progresses through proposals, critiques and

counter-proposals made in popular science texts. The way Hawking frames his

account of the conference is itself a ‘counter-proposal’ in the negotiation – one

that makes sense only in relation to the current state of the process. On Galileo,

we are today all agreed: ‘of course’ (with hindsight) it was wrong of the Church to

attempt to intervene on questions that are so clearly scientific. The question for us

here is not whether such questions are necessarily scientific but how we come to

agree that they are scientific.

The discussion so far has identified axes by which accounts of

popularisation may be distinguished from each other. We can compare approaches


by looking at a) the way they distinguish the ‘science / non-science’ categories;

b) the mechanics and importance of their model of communication; c) their

assumptions about the purposes of popularisation. These points should be made

explicit in any account of popular science. Only then can its value in more general

debates about science be discerned.

Interdisciplinary Science Studies

An important motivation for studying popular science is the extent to which it can

tell us about the scientific endeavour itself. (Work in this area is generally dubbed

‘science studies’.) Like academic papers and laboratory notebooks,

popularisations provide considerable insight into the social structure of science

and the consolidation of scientific knowledge. Historians and sociologists of science have, however, only recently turned to popularisations as sources30. There are many reasons for this. One, noted by

Cooter and Pumfrey, is the absence of an intellectual tradition in which such a

study could be located,

[N]either intellectual history, nor the sociology of (professional) science

equip us for thinking about the history of the process of scientific

exchange, interaction, translation and resistance. Once we leave the well-

charted areas of learned science for popular science we are without a

map. (Cooter and Pumfrey 1994: 248)

The proviso above concerning Hilgartner’s spectrum and what it does or

does not tell us about the actual thinking of scientists (page 37) is a reminder to

30 Some recent examples include Cooter 1984, The Cultural Meaning of Popular Science: Phrenology and the Organisation of Consent in Nineteenth-Century Britain; Shinn and Whitley (Eds.) 1985, Expository Science: Forms and Functions of Popularisation; Myers 1985, Nineteenth-Century Popularisers of Thermodynamics and the Rhetoric of Social Prophecy; Block Jnr 1986, T. H. Huxley’s Rhetoric and the Popularisation of Victorian Scientific Ideas; Callon 1986, Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay; Broks 1988, Science and the Popular Press: A Cultural Analysis of British Family Magazines 1890-1914; Ring 1988, The Popularisation of Elementary Science Through Popular Science Books c. 1870 to c. 1939; Shapin 1989, Science and the Public; La Follette 1990, Making Science Our Own: Public Images of Science 1910-1955; Kemp 1990, The Science of Art: Optical Themes in Western Art from Brunelleschi to Seurat; McRae (Ed.) 1993, The Literature of Science: Perspectives on Popular Science Writing; Stafford 1994, Artful Science: Enlightenment Entertainment and the Eclipse of Visual Education; Broks 1996, Media Science Before the Great War; Gregory 1998, Fred Hoyle and the Popularisation of Cosmology. For further examples and a discussion of popularisation in the history of science see Cooter and Pumfrey 1994, Separate Spheres and Public Places: Reflections on the History of Science Popularisation and Science in Popular Culture.


avoid thinking of sources of information about scientists’ thoughts as divisible

into distinct categories of private and public. As Martin J. S. Rudwick counsels,

we should instead think of a ‘continuum of relative privacy’,

covering an array of activities ranging from the most deeply private

speculation to the fully public formulation of conclusions, and leaving as

its historical traces a corresponding array of documents ranging from

strictly private notebooks to published articles and books. No single

activity and no single class of documents gives privileged or even

preferential access to the scientist’s ‘real’ thoughts or beliefs since all

alike are the embodiments of modes of thinking and doing that form

essential components of the whole process of scientific work.

(Rudwick 1985: 430)31

This point about how to employ sources in the history of science applies equally to the use of sources elsewhere in science studies. Rarely, if ever, can we draw

legitimate conclusions from just a single type of source or even a single mode of

argument. So, what does it mean to describe a study such as the present one as

belonging to the field ‘science studies’? It is such an interdisciplinary,

heterogeneous field that designating an account as science studies tells us little

about it.

In the past, the situation was more straightforward. What today is studied

under the umbrella heading ‘science studies’ would at one time have been divided among the history, sociology and philosophy of science. Each distinct field would have had its own canon of literature and there would have been broad consensus as to what would have counted as progress in the field. The situation is now more complicated for two main reasons. On the one hand, the boundaries

between the history, sociology and philosophy of science have become more

permeable as approaches from one field are understood with reference to

another32. On the other hand, the humanistic study of science33 has broadened


31 Rudwick’s notion of a continuum of relative privacy is also discussed by Gregory 1998: 36. 32 This is due in particular to the influence of Thomas Kuhn’s 1970 The Structure of Scientific Revolutions, which placed a sociological emphasis on epistemological questions in science and divided different historical periods into two sociologically and philosophically distinct categories: ‘normal science’ and ‘revolutionary science’. Also influential are interdisciplinary groups that have emerged since the 1970s – especially the ‘Edinburgh School’ with its emphasis on the sociology of scientific knowledge (see Bloor 1991). Members of the school responsible for blurring the distinction between philosophy, sociology and history of science include the sociologists Barry


considerably as researchers from adjacent fields such as literary studies, social

anthropology and women’s studies have turned their attention to science34.

David J. Hess simplifies the situation by pointing to the “gulf of

understanding among the constituent disciplines” of science and technology

studies (STS) (Hess 1997: 6). It is still true today that science studies comprises various incompatible approaches to the subject. Concentrating on the

differences allows Hess to discuss the disciplines in separate chapters on the

philosophy of science, the institutional sociology of science, the sociology of

scientific knowledge, and critical and cultural studies of science and technology.

Within this framework, however, Hess continually stresses the increasingly interdisciplinary nature of science studies. The boundaries between disciplines in science studies are reorganised on a regular basis. Further, Hess concludes with a four-point analytical framework for science studies that is wholly

interdisciplinary. This was developed partly in response to his unhappiness with

the ‘Strong Programme’ in the sociology of scientific knowledge (see

Bloor 1991: 3-23). What follows is a summary of his prescription for analysis of

scientific episodes:

1. The analysis is political; it explores the operation of power in the

history of a field of knowledge that becomes constituted by a

consensus and attendant heterodoxies.

2. The analysis is cultural in the sense that it develops a sophisticated,

noninstrumentalist explanation and explication of the dynamics of

power that have been described in the first step.

contd… Barnes and David Bloor and the historians Steven Shapin and Donald MacKenzie. A sense of how the history of science has broadened to address sociological and philosophical questions can be gained from Olby et al (Eds.) 1989 Companion to the History of Modern Science. Within this collection, the article by J.R.R. Christie 1989, The Development of the Historiography of Science provides a good overview.

33 For a discussion of the distinction between ‘humanities’ and ‘social science’ in science studies see Hess 1997: 6-7. Here, the term ‘humanities’ is used broadly to include fields often designated ‘social science’, such as sociology.

34 A sense of the diversity in science studies today can be gained from Jasanoff et al (Eds.) 1994 Handbook of Science and Technology Studies. Within this collection, the review article by Ashmore et al 1994, Discourse, Rhetoric, Reflexivity: Seven Days in the Library provides a snapshot of the field. It should be noted, however, that novel approaches to popular culture that have emerged in recent times have not yet been exploited to the extent that they could be with respect to popular science. This is due in part to the dominance of particular problematics and genres for analysis and in part to what Christopher Dornan (1989) calls a ‘lay intimidation’ with respect to science.


3. The analysis is evaluative; it draws on the philosophy of science to

weigh the accuracy, consistency, pragmatic value, and potential

social biases of the knowledge claims of the consensus and

alternative research traditions.

4. The analysis is positioned; it provides an evaluation of alternative

policy and political goals that could result in beneficial institutional

and research programme changes… In the terminology of the STS

field, this level of analysis can be described as a type of reflexivity,

but one that is more profoundly sociological or anthropological than

previously discussed forms. (David J. Hess 1997: 153-155)

Of these principles, the most provocative is number three – the

requirement that the analysis is evaluative. This challenges two of the four central

tenets of the Strong Programme in the sociology of scientific knowledge, namely

that the analysis should a) “be impartial with respect to truth and falsity,

rationality or irrationality, success or failure. Both sides of these dichotomies will

require explanation”; and b) “be symmetrical in its style of explanation. The same

types of cause would explain, say, true and false beliefs” (Bloor 1991: 7). Each of

these tenets of the Strong Programme has caused significant controversy in recent

years. In the so-called ‘Science Wars’ the principle of symmetry has come under

considerable criticism35. Often the problem lies in assumptions that are made

about the purpose to which the analysis is directed. Clarity at this level (which for

Hess would come under the umbrella of his principle number four – ‘the analysis

is positioned’) can alleviate the confusion caused by incompatible methodological

approaches within the same intellectual project. Symmetry is a necessary

methodological principle to adopt to understand particular aspects of a historical

episode. It would be a mistake, however, to make it a requirement of all historical

accounts just as it is a mistake to assume that only scientists’ own histories of

science can claim legitimacy. Harry Collins distinguishes five different types of

history of science36:

• Official: The post hoc assignment of credit.

35 For instance, Steven Weinberg argues, “What Herbert Butterfield called the Whig interpretation of history is legitimate in the history of science in a way that it is not in the history of politics or culture, because science is cumulative, and permits definite judgements of success or failure.” (1996: 15). For further discussion of scientists’ relation to the history of science, see Brush 1995.

36 Speaking at the Science Peace Conference (University of Southampton, 28 July 1997)


• Reviewers’: Designed to organise a field.

• Reflective: Designed to provide morals and lessons (what Collins calls

‘war stories’).

• Analytic: As above but designed to provide morals and lessons for

sociologists and philosophers.

• Interpretative: Puts the audience into the shoes of scientists with all the

doubts and uncertainties.

Collins claims that ‘interpretative’ histories are vital to empower citizens

as well as being vital for new recruits to a scientific field. This classification of

histories of science also provides a handle on popular science (which, incidentally,

is the most important forum for interpretative histories). The histories of science

in popular science books are mainly of the first and third kinds (official and

reflective). Occasionally, popular science provides the forum for ‘reviewers’

histories’ – those with the aim of organising a field. James Gleick’s Chaos:

Making a New Science (1988) is an example of such a history. All these various

histories place different weight on various forms of evidence and argument. The

question of their legitimacy can only be addressed with respect to their purpose

and context.

The broader purpose of this discussion of histories of science is to stress

the importance of orienting any discussion of science. This is the point of David

Hess’s fourth principle (see page 45) – that analysis of science should be

positioned. We should be distrustful of, and hostile to, any account of science or

the public understanding of science that fails to orient the reader (that is, one that

apparently has no ‘agenda’) because this simply means that it does not know what

agenda it is working to37.

Whether the various methodological approaches within science studies are

appropriate or not depends on the questions that they are brought to bear upon.

This point, banal and platitudinous as it may be, is worth stressing repeatedly

because incoherence at the level of motivation is a singular characteristic of

contemporary debate within science studies and the public understanding of

37 This is a similar point to that made above (page 20) about empirical work in communication theory.


science. Forging a coherent account involves understanding the boundaries

between different theoretical approaches to science as much as it involves

understanding how to apply any one approach. In this account, for instance, the

distinction between metaphysical and cultural approaches to the question ‘what is

science’ is discussed at length to orient the subsequent discussion of the

‘boundaries of science’. The whole field of science studies would benefit from

greater emphasis placed on ‘questions about questions’.

The Following Chapters

The following chapters explore popularisation from two perspectives.

Chapters 2 and 3 address the popularisation of scientific images and the dialogue

between science and the visual culture in which it is embedded. Chapters 4 and 5

explore the boundaries between ‘science’ and ‘non-science’ and address the role

popularisations take on as ‘forums for negotiation’. Before outlining these

chapters in more detail, we consider how the themes developed in the preceding

discussion shape the account.

Bruce Lewenstein’s account of the cold fusion saga (1995) and Jane

Gregory’s analysis of Fred Hoyle’s career (1998) each reveal the complex role

that popular science plays in the research process itself. Pete Broks (1988, 1993

and 1996) demonstrates how popular science played a role in the wider struggle to

gain assent for social change before the First World War. To this list, we could

add many other places where the role of popular science is not primarily

pedagogical but where an inadequate model of communication would force (or at

least predispose) us to understand and assess popular science texts in terms of

pedagogy.

One of the principal areas where this pedagogical assumption is manifest

is the public understanding of science movement. It is this assumption (and the

conception of communication that underpins it) that explains the conflict between

the objectives of the public understanding of science movement (see page 3 and

note 2, page 4). The conflict between public relations on the one hand and

democratic empowerment on the other could be resolved with a more ingenuous

attitude towards the subject. This would be facilitated by a more sophisticated

approach to communication.


The emphasis in the present account on a ‘critical’ approach to

communication, one that addresses the context of messages to the same or a greater degree than their content, is not the only position to adopt. The subjects addressed

here, such as the relations of scientific imaging and visual culture and the

construction and maintenance of boundaries, have been explored fruitfully but

other approaches (including those that adopt a transmission model of

communication) would provide insight unavailable within this framework. In

general though, critical/interpretative approaches to communication address

broader questions than behavioural/empirical ones and the latter are frequently

unhelpful when applied to the types of questions addressed in public

understanding of science research.

A necessary component of any account of communication in science,

however, is a reflexive approach to whatever theoretical position is adopted – an

awareness of choices being made and an explicit assessment of the implications of

those choices. This is, perhaps, the most important conclusion of the present

study. Communication theory is central to questions of popular science and a

reflexive approach to communication is the foundation of any meaningful

conclusions drawn from either audience surveys or analysis of popular science

texts.

We need theoretical flexibility to speak about science coherently. We have

seen the type of flexibility needed with the sociological turn in the philosophy of

science since the 1970s. Understanding popular science is yet more complicated, though this is often overlooked. The first step to a mature account of popular

science is the recognition that language and communication are essentially

problematic.

Communication is not rendered ‘transparent’ or ‘neutral’ simply because

science is the topic communicated. Indeed, the tension between the

epistemological claims of science and its rhetoric – between facts and the social

conditions that allow us to recognise their pertinence – compounds the question of

signification in popular science texts. Science Wars discourse often presents the

problem in binary terms: either science is ‘socially constructed knowledge’ or it is

‘genuine knowledge of the real world’. The dichotomy is actually rather

disingenuous – the two positions are, by no means, mutually exclusive – but the

way the choice is presented is significant. Emphasising the latter choice (science


is ‘truth’) has the effect of naturalising all scientific discourse and disguising any

ideological role it takes on – whatever context it finds itself in. On the other hand,

emphasising the former choice (science is ‘socially constructed’) actually

provokes a search for ideology.

Images in Science

Achievements in science are often presented to the public in the form of a

picture. However, pictures in popular science have, so far, failed to attract serious

critical attention38. Research into popularisation has instead concentrated on

written texts, but this emphasis does not reflect the central place of nonverbal

communication in both professional and popular science. The approaches outlined

above – particularly the emphasis on the role of context in meaning – make

previously intractable questions about representation in science accessible.

In a public context, a picture can take on very different roles from those it

may have had in a scientific context. Unlike with text-based popularisations, however, the significance of this recontextualisation is often not apparent. This is a symptom of

what Gordon Fyfe and John Law (1988) call “the invisibility of the visual” in

science studies. They advocate much greater attention to the use of images in

science.

A depiction is never just an illustration. It is the material representation,

the apparently stabilised product of a process of work. And it is the site

for the construction and depiction of social difference. To understand a

visualisation is thus to inquire into its provenance and into the social

work that it does. It is to note its principles of exclusion and inclusion, to

detect the roles that it makes available, to understand the way in which

they are distributed, and to decode the hierarchies and differences that it

naturalises. And it is also to analyse the ways in which authorship is

38 For one of the few exceptions see Daniel Jacobi 1989, Scientific Imagery and Popularized Imagery: Differences and Similarities in the Photographic Portraits of Scientists. There is a body of literature on science on television and at the cinema, but even these accounts do not concentrate much on images. (See, for instance, Gardner and Young 1981, Science on TV: A Critique; Nathan Reingold 1985, Metro-Goldwyn-Mayer Meets the Atom Bomb; Aart Gisolf 1993, Science on Television; Bernard Dixon 1986; Roger Silverstone 1984, Narrative Strategies in Television Science: A Case for Study; Roger Silverstone 1985, Framing Science: The Making of a BBC Documentary; Roger Silverstone 1989, Science and the Media: The Case of Television; Ian Connell 1980, Making Sense of Science; Neil Ryder 1982, Science, Television and the Adolescent: A Case Study and Theoretical Model; and Robert G. Dunn 1979, Science, Technology and Bureaucratic Domination: Television and the Ideology of Scientism)


constructed or concealed and the sense of audience is realised. (Fyfe and

Law 1988: 1)

The main aims of chapter 2 are 1) to identify where there is ‘room’ in

scientific images for them to take on ‘extra-scientific’ meanings in popular

contexts and 2) to develop a scheme for determining how significant changing a

picture’s context will be (for instance, predicting what sense will be made of a

popularised image). The changing status of the scientific image since the

seventeenth century is also examined. The analysis draws a distinction between

‘arbitrary’ and ‘externally determined’ (or independent) aspects of images. It also

invokes John Locke’s articulation of the distinction between ‘primary’ and

‘secondary’ qualities of objects. The analysis leads to a wide range of conclusions

about, a) the role of imaging in science; b) the distinction between popular and

professional uses of images in science; c) the interaction between the visual

culture of science and wider visual culture; and d) more philosophical questions

about representation.

In chapter 3 these themes are developed with reference to concrete

examples. Two modern examples (scanning tunnelling micrographs of atoms and

synthetic aperture radar images of Venus) are dealt with in depth.

Popularisation and the Boundaries of Science

What counts as science; who counts as a scientist; where to place the

boundaries of scientists’ responsibility and authority (for instance where science

stops and politics or theology begins)? These are all matters of continual

negotiation between scientists and the rest of society. One, often overlooked,

function of popular science is to provide a forum in which the negotiation can take

place – perhaps the most important forum. The concept of boundaries is well

established in the history and sociology of science39. In chapter 4, the notion of

‘boundary work’ is developed as a critical tool for the analysis of popular science

texts. In chapter 5, this theoretical framework is applied to a variety of examples

of boundary work in popular science texts. In addition, chapter 5 explores the way

39 See for instance Barnes 1974: Chap. 5; Barnes 1982: 90-93; Gieryn 1983, 1994 and 1999; Cooter 1984; Shapin and Schaffer 1985 (particularly Chap. 8); Shapin 1989; Callon 1986; Barnes, Bloor and Henry 1996: Chap. 6. For earlier accounts of what has come to be known as ‘boundary work’ see Wallis (Ed.) 1979.


boundaries are employed creatively in popular science (especially in dramatic art

and literature).

By looking for the boundaries on which texts impinge, we can recognise

the interests that shape the texts themselves. This in turn gives us insight into how

such texts are made meaningful. In concentrating on boundary work, emphasis is

placed firmly on contexts for popular science. The analysis of texts themselves is

prompted by considering their place in a constellation of other texts.

As we have seen, making sense of boundaries and understanding how they

are employed in a literary context requires considerable theoretical flexibility. The

two chapters, then, tread a careful path through the philosophy of science and

various models of communication in science. That is, we need webs, streams and

boundaries as well as an understanding of the epistemological issues these models

throw up.

At the heart of many Science Wars disputes over these issues is clumsy

metaphysics. This account of popular science overcomes this obstacle, not by

solving all the philosophical problems in science, but by carefully avoiding them –

preventing metaphysics from complicating the methods of textual analysis applied

to metaphysical disputes. As we shall see in chapter 4 (page 150), the question is

not ‘what is science?’ but, ‘given that people disagree about what science is, how

is the answer decided and what is the source of the conflict in the first place?’


2 Images and Imaging in Science and Popular Culture

‘Reading’ Visual Texts in Popular and Professional Science

Achievements in science are often presented to the public in the form of

pictures. Two particularly dramatic examples are pictures of the surface of Venus

(Figure 15, page 134) from the Magellan mission (May 1989 to October 1994)

and pictures of individual iron atoms arranged on the surface of a piece of copper

(Figure 13, page 118). Earlier examples include Galileo’s 1610 drawings of the

surface of the moon (Figure 10, page 112) and Robert Hooke’s 1665 engraving of

a flea (Figure 5, page 70). These texts form the principal examples through which

questions arising from the popularisation of scientific images are explored here.

The difference between scientific images in popular and scientific contexts

is explained here with reference to the problem of representing objects that are

ordinarily invisible. The increasing importance of imaging and visualisation to

science is revealed through an analysis of the Science Citation Index. A scheme

for classifying scientific imaging techniques according to their relation to human

vision is outlined. The account reveals a dialogue between scientific image

makers and the visual culture in which they are embedded. Eventually, the binary

opposition of ‘scientific’ and ‘popular’ contexts is rejected in favour of the

‘stream’ model outlined by Stephen Hilgartner (1990; see page 34).

To properly account for the relation between scientific images and popular

images, several issues need to be addressed. Broadly, we need to be aware of the

choices available to image makers and how these relate to their ‘visualisation

goals’. In particular, we need to understand the status of visual evidence in each

discipline we are concerned with, the mechanism of the imaging technique, the

nature of the data represented, and the visualisation options available to the image

maker. An account of popularised images should begin with the particular

imaging technique at issue – the freedom it allows and the constraints it imposes on the image maker, and the scientific tradition in which it is embedded. Only then can we

properly address questions about power, ideology, ‘boundary work’ and any other


issues related to questions of representation. For the most part, the research that

led to the conclusions outlined here relied on close examination of images

themselves and the auxiliary information provided by their context rather than on

surveys or ethnographic studies of viewers. That is, the following is an account of

‘available readings’ of scientific images rather than actual readings. In general,

distinctions made between ‘scientific’ and ‘non-scientific’ viewers are not based

on actual constituencies of scientists and non-scientists but on the framework that

various contexts provide for making sense of images. This textual analysis is,

however, supported and in some cases precipitated by (non-systematic) empirical

observations of different audiences for scientific images: through study of the visualisation community, through my own visualisation practices, and through the reactions of students and colleagues to the texts examined.

Preview

With the cultural turn in science studies, it is perhaps not surprising to find

that turning to questions of popularisation provides insight into fundamental

issues related to representation in science. The conclusions drawn in the following

account came from three distinct (but often simultaneous) approaches to visual

texts. One approach involves applying lessons from historical accounts of imaging

in science to questions arising from modern popularisation practices. Another

approach involves a lengthy study of modern techniques of scientific visualisation

and the development of visualisation as a discipline in its own right. A third

approach involves a detailed analysis of particular visual texts and the various

contexts they are to be found in.

From these studies, a model emerges that reveals (and explains) how

scientific images are transformed through the process of popularisation. It also

reveals an inherent organising principle in scientific imaging based on the relation

of imaging techniques to human vision. In particular, a modified version of John

Locke’s articulation of the distinction between ‘primary’ and ‘secondary’ qualities

of objects is invoked to explore this relation.

Some of the conclusions that arise here and in the chapter that follows are

summarised here as a ‘preview’. In some cases, the full significance of the

conclusions must await further explication but they are, nevertheless, flagged to

make the direction of the argument clearer from the outset.


The first set of conclusions relates to the visual culture of modern science

and the role of pictures in research. (The first two conclusions stem from analysis

of scientific images and the second two from scientists’ own accounts).

• In modern scientific contexts, pictures form part of a claim about nature

(rather than being the independent evidence on which a claim is made).

• Scientific images often represent a synthesis of two or more distinct

physical theories.

• The distinction between using a picture as a tool and using it for other

purposes is an important one for scientists.

• Scientists will often group all uses of pictures that are not directly related

to research in a single category: they are dismissed as mere ‘pretty

pictures’.

The second set of conclusions relates to the nature of the distinction

between ‘popular’ and ‘professional’ science. The analysis of scientific pictures

reveals the attribution of a text to one category or another to be arbitrary.

However, it also provides an important handle on the distinction itself. (These

conclusions all stem from analysis of images rather than actual viewers).

• Often the only difference between scientific images and popularised

images is their context (the pictures themselves can be identical).

• Viewers implicitly distinguish ‘arbitrary’ from ‘externally determined’

aspects of images. The former comprises those aspects that require a

choice to be made by an image maker. This distinction gives us a handle

on the difference between popular and professional contexts for images. In

professional contexts, viewers are empowered and encouraged to

recognise choices that have been made.

• Scientific images are more ‘about’ what they denote and popular images

are more about the connotations they carry.

• In scientific contexts, pictures are approached critically and sceptically

like every other part of the claim and supporting evidence. Popular

contexts invite a more passive gaze when it comes to claims about what

the image denotes. However, popular contexts also liberate a wider

repertoire of interpretative schema with which to make sense of the image.


• Another difference between scientific and popular contexts is the extent to

which the synthesis of distinct physical theories is apparent in a single

image. In a popular context, the synthesis is ‘naturalised’ and not open to

scrutiny. In a scientific context, the distinct theories are ‘prioritised’ and

thus the image is ‘read’ in an ordered way.

• In popular contexts, the ‘scientificity’ of an image is not necessarily

related to its scientific use.

The third set of conclusions relates to questions of representation in

science. The conclusions emerge, principally, from the model of scientific

imaging mentioned above that relates visualisation techniques to human vision.

• The difference between scientific imaging techniques and human vision

results in a deficit of optical qualities that arise spontaneously from the

technique itself.

• This ‘super-sensory gap’ in turn, results in choices that have to be made by

image makers and flexibility about what counts as an acceptable

‘visualisation idiom’.

• Imaging in science is often a process of ‘lending’ invisible objects or

phenomena the optical properties of familiar objects – making them fit in

to our experience of sight.

• Imaging is a process of analogy. The objects that are referenced in the

analogy are not only familiar, they are also likely to be things we have

strong feelings about – they are culturally and emotionally charged. In

short, they are things that we are good at looking at for some reason or

other.

The final set of conclusions relates to the interaction of the visual culture

of science with the wider cultures in which it is embedded. In particular, the

conclusions relate to modern scientific visualisation techniques and the sense that

is made of visualisations in scientific and non-scientific contexts. Scientific

visualisation emerges as a significantly novel and singularly important departure

in the development of scientific texts. Again, these conclusions are based on

analysis of scientific images themselves rather than on actual viewers of the

images.


• Image makers draw on the conventions of visual culture to make data

accessible to human senses. (The visual culture of science is, in this sense,

not objective, universal nor autonomous.)

• Image makers are constrained both by the visual culture in which they

operate and by the data the images denote. They are, however, ingenious

when it comes to bridging the ‘super-sensory gap’ between scientific

instruments and human vision.

• Modern scientific visualisation marks a departure in visual representation

by explicitly encoding the motivation of the image maker.

• Even in popular contexts, modern visualisations denote the process of

science rather than just a natural phenomenon and are therefore

particularly interesting from the point of view of the public understanding

of science.

Popular and Scientific Contexts for Scientific Images

The main difference between a picture in a popular context and a picture in

a scientific context concerns their rhetorical roles. Images in a scientific paper

form part of a claim about the world (the same claim that is articulated verbally in

the paper’s abstract)1 whereas in a popular context the same image stands as an

iconic representation of the whole claim.

A way of conceiving the difference between the two rhetorics is to note its

resemblance to the distinction Greg Myers (1990: 142) makes between a

‘narrative of science’ and a ‘narrative of nature’. In scientific papers, he notes, the

‘actors’ are the scientists and the instruments whereas in popular accounts of the

same work the actors are the objects of study. This distinction holds as a general

(but not exclusive) rule for professional and popular contexts for scientific images

also. The rhetoric of a scientific paper places accompanying pictures firmly within

the narrative of the science itself – its role in that narrative is explicit. In a popular

account, the picture qua picture does not have a role in the narrative. Instead, the

actor in the narrative is the object or phenomenon that the picture denotes.

1 On the subject of claims in science, Greg Myers makes the point that, “The claim of an article, its main contribution to knowledge, is usually taken to be unambiguous, so that an unqualified reference like “Crews and Fitzgerald, Proc. Natl. Acad. Sci. USA 77(1980) 499-502” can convey it.” (1990: 114)


In a scientific paper, images can be approached sceptically like every other

element of the paper. This means that the image itself can be judged in relation to

the methods, results, discussion, etc. to the same extent that the results and

discussion are judged in relation to the image. In a popular context, the

relationship is not reciprocal. The claim about the world may be judged against

the image but the image itself is not subject to the same type of scrutiny as the

accompanying text. We can think of the reasons why images in popular contexts

are not subject to the same scrutiny in two ways. On the one hand, the

accompanying text does not provide enough detail to empower the viewer to

scrutinise the methods by which the image was created. On the other hand, there

is no expectation on the part of the viewer that he or she should be so empowered

as this is not a role that a popular context takes on. Instead, an image in a popular

context acts as an apparently independent source of evidence, which tends to

naturalise the claim with which it is associated. As Ingrid Kallick-Wakker notes,

when the nature and limitations of artificial objects are forgotten, they become idols, and few objects are more artificial than a three-dimensional object

constructed from scientific measurements (1994: 311).

The difference, then, between scientific and popular contexts is that, in a

popular context, scientific images appear artless whereas in a scientific context,

the motivation of the author is taken into account when making sense of an image.

Understanding how scientific images in popular contexts achieve this sense of

artlessness is related to the epistemological claims of photography in general2. In

scientific contexts, questions of truth are yet more problematic because of the

tension between the objectivity of the process by which an image is produced and

the motivation for invoking it in a particular context.

In popular contexts, scientific images are often more ‘about’ the

connotations they carry or the claim they iconically represent than the phenomena

they denote. For instance, a dramatic MRI scan of a brain on the cover of a

(hypothetical) book about consciousness would not serve to inform readers about

the structure of the brain itself. Neither would it serve its original purpose of

diagnosis. Instead, the picture would act to connote ‘scientificity’ and to suggest

2 On the origin of the epistemological claims of photography, see Holland, Spence and Watney (Eds.) 1986: 3.

59

that answers to questions about consciousness lie principally in the study of brain-

structure (rather than any rival discipline). However, the unambiguous reference

to an external and independent reality in scientific pictures provides an ‘alibi’ for

the more ambiguous roles they may take on in various contexts. That is, the

objectivity of the process by which the image was produced effectively naturalises

the motivation of anyone who subsequently invokes the image.

In popular contexts, readers of scientific images do not always look for the

social interests served by a particular representation (whether the interests are

related to the scientific arguments being made or are unrelated to the phenomenon

denoted and instead relate to, say, political interests). In contrast, many other

visual representations are understood with reference to the social function they

perform. For instance, viewers of advertising images generally make sense of

them with reference to the interests of the advertiser. (A photograph of a car, for

example, is not viewed ‘innocently’. Instead, the viewer understands that part of

the explanation for why the car looks the way it does lies in the manufacturer’s

desire for viewers of the picture to buy the car.) The social interests served by

scientific images are not disguised by their use but are not apparent either. That is,

they are ‘naturalised’3. It is the tension between, on the one hand, the natural

phenomena represented and, on the other, naturalisation (of social interests) that

makes the study of scientific images in popular culture particularly interesting.

‘Arbitrary’ and ‘Independent’ Elements: The Analysis of Scientific Images

There are various aspects of an image that we distinguish – elements that

go together to make up our whole experience. We notice, for instance, the size of

the reproduction, whether it is colour or black and white, any annotations on the

image itself, etc. Consciously or not, we classify these elements into two

categories by distinguishing aspects of the image that seem arbitrary from aspects

that are determined by an external reality. Arbitrary aspects are elements such as

3 This process of naturalisation is not unlike the operation of ‘mythical speech’ defined by Roland Barthes in the 1950s. For instance, Barthes’ point that “what causes mythical speech to be uttered is perfectly explicit, but it is immediately frozen into something natural; it is not read as a motive, but as a reason” (1973: 140) applies equally to scientific images. Barthes’ concept of myth is explored further and clarified by Marina Camargo Heck (1980). She points out that myth “differs from connotation at the moment at which it attempts to universalise for the whole society meanings which are special to [the particular lexicon of an identifiable sub-group]. In the process of universalisation, these meanings… assume the amplitude of reality itself and are therefore ‘naturalised’” (Heck 1980: 125).

60


annotations, the size of reproduction, the colours used in a ‘false colour’ image,

the scale of a graph, etc. Conversely, aspects that we feel are determined by

external reality are those that do not seem to involve a choice made by the image

maker such as the colours in an ‘ordinary’ photograph, the co-ordinates of the

points plotted on a graph, etc.

Arbitrary Aspects of Images (aspects of an image that require a choice to be
made by the image maker or publisher):

• Size of reproduction
• Camera settings (exposure, angle of view, depth of field, etc.)
• Cropping or vignetting
• Annotations and captions
• Printing and display
• Colours in ‘false-colour’ images
• The scale of a graph
• Etc.

Externally Determined (Independent) Aspects of Images:

• The co-ordinates of the points on a graph
• The colours in an ‘ordinary’ photograph
• The perspective and shading in an ordinary photograph
• Optical artefacts (such as chromatic aberration or ‘Airy disc’ diffraction
  patterns from point sources in astrophotography)
• Etc.

Table 2 Arbitrary vs. Externally Determined Aspects of Images

As viewers, we are aware that a picture maker can alter the sense we make

of a picture with judicious control of the ‘arbitrary’ aspects of an image. For

instance, we understand as viewers that the choice of scale on a graph can affect

our reaction to the data4. Thus, in scientific images there is a distinction to be

made between what we are looking at and what we are encouraged to see.

As viewers, we orient ourselves to the various elements of images in two

main ways. ‘Critical’ viewing is generally reserved for those elements where we

detect an element of choice. Elements that seem to be determined not by the

image maker but by the external world can be treated as independent evidence

allowing the viewer to adopt the role of a ‘virtual witness’ (see below, page 71).

Thus, externally determined elements we can call ‘independent elements’ in

contrast to the ‘arbitrary elements’.

4 For an account of how our reaction to data depends on choices such as this see Tufte 1983, The Visual Display of Quantitative Information; Tufte 1997, Visual Explanations; Briscoe 1996, Preparing Scientific Illustrations; and Jones 1995, How to Lie With Charts.
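Tufte (1983), cited in the footnote above, quantifies this kind of distortion with a ‘lie factor’: the size of the effect shown in a graphic divided by the size of the effect in the data. The following is a minimal sketch of that ratio; the numbers (a bar chart drawn on a truncated axis) are hypothetical and chosen purely for illustration:

```python
def relative_change(start, end):
    # Proportional change in a quantity.
    return (end - start) / start

def lie_factor(data_start, data_end, graphic_start, graphic_end):
    """Tufte's 'lie factor': the size of the effect shown in a graphic
    divided by the size of the effect in the data. A faithful graphic
    scores close to 1; values far from 1 signal visual distortion."""
    return (relative_change(graphic_start, graphic_end)
            / relative_change(data_start, data_end))

# Hypothetical case: the data rise from 100 to 110 (a 10% increase), but a
# truncated axis makes the corresponding bar grow from 5 mm to 30 mm (500%).
print(lie_factor(100, 110, 5, 30))  # → 50.0
```

The same data plotted on an axis starting at zero would give bars of 100 mm and 110 mm, and a lie factor of exactly 1.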

61

In a scientific paper, we are expected to approach images sceptically. This

expectation arises from the function of scientific papers themselves. The process

of making sense of an image involves a degree of self-consciousness. We are

usually aware that what we make of the image is not necessarily what the image

maker was hoping we would take from it. This is understood as a question of

interpretation. If the interpretation of the image is ambiguous then the onus is on

the author of the paper to provide the correct interpretation and to justify it. The

reader is invited to judge the exegesis for him or herself, hence the acute

awareness by both the author and the reader of the distinction between the actual

information in the picture and how the author hopes it will be viewed. Hence also

the attention paid to the arbitrary aspects of the image, including candidness on the

part of the author in explaining the decisions.

Making sense of a scientific image in a popular context is still treated as a

problem of interpretation but this time there is no sceptical process involved.

Again, this arises from the function of popular texts as opposed to scientific

papers. The viewer is not invited to make the image meaningful for him or herself

but merely to understand the interpretation offered by the author. The distinction

between the information in the picture and how the author orients us to the picture

dissolves. In general, viewers cannot challenge the significance placed on the

image by the author because they are not provided with sufficient explanation

about how the image was produced and may not have the technical skill to make

sense of such information anyway. Challenging the primary interpretation of a

scientific image in a popular context is simply perverse. Alternative meanings and

associations can be brought out as long as they do not contradict the claim about

the world with which the image is primarily associated. (For instance, we may

attach cultural notions of identity to images of DNA but we have no basis on

which to question the chemistry of DNA or even the idea that it is central to

questions of heredity)5. Ultimately, the author of the image determines its primary

interpretation, which is the claim about the world with which it is associated. This

primary meaning may not be the most important meaning it takes on but however

the image is used subsequently, the ‘correct’, scientific interpretation of it is fixed.

5 For a detailed account of how notions of identity are modified with relation to DNA see Dorothy Nelkin and M. Susan Lindee 1995, The DNA Mystique.

62

This gives us a way of distinguishing ‘scientific’ images from

‘popularised’ ones. If we are invited and empowered to question the interpretation

attached to an image it is scientific; if not, it has become popularised. We see that

there need be no difference at the level of the picture itself (indeed, a scientific

picture and its popularised counterpart can be identical). The difference lies at the

level of the picture’s context and how the viewer is expected to engage with it. In

particular, we can make a distinction between the assumed role of the viewer: Is

he or she a reader or a spectator? This distinction forms the basis of the discussion

of scientific and popular contexts below.

At this point we should be clear about our use of the word ‘arbitrary’. The

choices are arbitrary in the sense that there are many alternatives that would be

equally acceptable but they are, nevertheless, determined by the specific purpose

to which the images will be put. That is, to call aspects of an image ‘arbitrary’ is

not to suggest that there are no good reasons for making particular choices. Not

all choices are equally appropriate.

The most important difference between a popular context and a scientific

one is how we engage with the picture’s arbitrary elements. A popular context

neither encourages nor empowers the viewer to distinguish arbitrary elements of a

picture from elements determined by an external reality. For instance, we have no

insight into the decision to choose a particular colour-map6 for an astronomical

image. This is the main reason we are unable to make it meaningful for ourselves

and must rely on somebody else’s interpretation of its features. That is, arbitrary

elements of scientific images become transparent in popular contexts (we do not

recognise them as arbitrary elements). Outside of the rhetorical context of a

research environment, the decisions made in the production of the image appear

‘natural’ – their explanation is assumed to lie in the external reality to which the

image refers rather than the rhetoric of research. This is why the popularised

image can ‘stand in’ for the claim being made – an icon of it – rather than being

seen as just part of the claim.

6 A colour-map is a way of displaying numerical data in a way that makes it easy to recognise patterns visually. A range of values is assigned to a particular colour in a series of colours (or ‘palette’). A familiar example is displaying temperatures on a weather map using white and blue to indicate low temperatures, moving through the light spectrum to orange and red to indicate high temperatures. In astronomical images, colour-maps can be used to increase the contrast between different areas on a black and white image (a map of intensity) allowing subtle features to stand out.
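The operation described in this footnote can be sketched as a simple binning function. The palette and temperature range below are hypothetical, chosen to echo the weather-map example:

```python
def colour_map(value, vmin, vmax, palette):
    """Assign a colour to a numeric value by slicing the range
    [vmin, vmax] into equal-width bins, one per palette entry --
    the basic operation behind 'false-colour' images."""
    # Clamp out-of-range values, then normalise to [0, 1].
    t = (min(max(value, vmin), vmax) - vmin) / (vmax - vmin)
    # Pick the bin; the top of the range falls into the last bin.
    return palette[min(int(t * len(palette)), len(palette) - 1)]

# Hypothetical weather-map palette running from cold to hot, applied
# to temperatures on a -20 to 40 degree scale.
palette = ['white', 'blue', 'green', 'orange', 'red']
for temperature in (-10, 5, 35):
    print(temperature, colour_map(temperature, -20, 40, palette))
# prints: -10 white, 5 green, 35 red
```

Enlarging the palette narrows each bin, increasing the contrast between adjacent intensity levels; this is how subtle features in a black and white intensity map can be made to stand out.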

63

Images have always been central to scientific research and scientific

communication7. However, the rhetoric of scientific images has not been constant

over the modern period. There are a wide variety of ways in which scientists

incorporate images into a claim about nature.

The distinction between scientific and popular contexts for scientific

images should not be interpreted as a distinction between two homogeneous and
straightforward attitudes to images. We cannot simply argue that scientists make

sense of pictures one way and everybody else makes sense of pictures in another

way. Pictures have many uses in both science and in visual culture generally.

Historical and Disciplinary Contexts: ‘Metaphysical’, ‘Mechanical’ and ‘Interpreted’ Images

The distinction between popular and scientific contexts for scientific

images is both subtler and more general than a difference in attitude towards the

role of images. The distinction lies in the level to which a viewer is empowered to

adopt a critical stance towards the image. Nevertheless, understanding the

conceptual shift that occurs when an image is popularised requires us to

understand the range of attitudes towards visual evidence in science. First we

need to recognise that attitudes towards visual evidence are a) historically

contingent and b) depend on the instrumental practices of a research community.

This gives us two levels on which to consider the difference between scientific

and popular contexts – one global and one local.

The Evolution of Imaging in Science: Global Imperatives for Image Makers

Lorraine Daston and Peter Galison (1992, and Galison 1998) have

revealed patterns in the evolution of pictorial representation by tracing changes in

the scientific atlas. Atlases have, throughout the history of science, been a central

and defining element of many disciplines. There are atlases of anatomy, stellar

spectroscopy, x-ray atlases of the head, atlases of cells, clouds, the paths of

7 Imaging is so central to science that I could not hope to cover the full range of roles here. For overviews of this subject, see Kemp 1990, The Science of Art; Baigrie (Ed.) 1996, Picturing Knowledge; Lynch and Woolgar (Eds.) 1990, Representation in Scientific Practice; Jones and Galison (Eds.) 1998, Picturing Science, Producing Art; Fyfe and Law (Eds.) 1988, Picturing Power.

64

elementary particles in detectors, and almost anything else one can think of.

Daston and Galison identify three distinct attitudes towards the use of visual

evidence in science – three conceptions of what it means to provide a ‘true picture

of nature’.

In the seventeenth-century, the ideal that image makers aspired to was

‘truth to nature’. Images were intended to represent the essential (platonic)

qualities that lay behind appearances. The process of idealisation required acute

sensitivity on the part of the image maker, which meant that the truth of the image

was predicated on the genius of the natural philosopher-cum-artist. The role of the

atlas maker was to discern the ‘normal’ from the bewildering ‘noise’ produced by

the individual circumstances of an object’s history and its observation. By the

nineteenth-century, particularly after about 1830, this had changed dramatically

(Galison 1998: 328). In a quest for objectivity, scientists made (literally)

superhuman efforts to eliminate any hint of judgement or intervention from their

images. Technologies such as photography were co-opted to remove the hand

(and mind) of the scientist from the process of recording phenomena. Galison

counsels against jumping to a technological determinist explanation for this8. It

was not the availability of cameras that altered the priorities of scientists. The

programme of ‘mechanical objectivity’ and the aesthetics of nonintervention were

evident long before use of the camera was widespread. The photograph was an

innovation of degree, not of kind (Galison 1998: 354, Daston and

Galison 1992: 94)9. Rather than striving for ‘truth to nature’, nineteenth-century

scientists aimed to ‘let nature speak for herself’. Far from craving the sensitivity

to divine the essential nature of things, they endeavoured to remove themselves

from the process of making images altogether.

The problem for nineteenth-century atlas makers was not a mismatch

between world and mind, as it had been for seventeenth-century

epistemologists, but rather a struggle with inward temptation. The moral

remedies sought were those of self-restraint: images mechanically

8 For a discussion of technological determinism in the history of technology see MacKenzie and Wajcman (Eds.) 1985, Introduction. 9 A similar argument is made by Charles Rosen and Henri Zerner about attributing changes in aesthetics to the invention of photography. “…the significance of this has been considerably undermined by the realisation that artists had been making paintings that looked like photographs for more than a half-century before the invention of photography.” (Rosen and Zerner 1984: 101).

65

reproduced and published warts and all; texts so laconic that they

threaten to disappear entirely. Seventeenth-century epistemology aspired

to the viewpoint of angels; nineteenth-century objectivity aspired to the

self-discipline of saints. (Daston and Galison 1992: 82).

The morality of nonintervention was not to dominate for long into the

twentieth century. This is especially clear since the advent of modern scientific

visualisation (see below, page 97). The following comment by Matthew Arrott of

the National Centre for Supercomputing Applications demonstrates how far visual

representation in science has moved away from the concerns of mechanical

objectivity:

[We decided to look with] a broader scope beyond just the issue of

accuracy. We started seeing that there are multiple ways to look at a

problem, there are multiple ways to represent any particular piece of

information. So the issue becomes, ‘what is the appropriate way?’ And

then, to be able to answer the issue of appropriateness, one has to ask

themselves, ‘who is it that I’m showing it to? Who am I trying to

communicate [with]?’ (CD-ROM accompanying Wolff and

Yaeger 1993)

In the twentieth-century, scientists’ judgement and experience have been

celebrated. The scientist is no longer seen as being a weak link in the process of

representation but as having an essential role in interpreting the data collected

automatically with instruments. The clumsiness of ‘objectivity’, (which, to

twentieth-century scientists, usually meant a purely algorithmic way to interpret

data), meant that subjective criteria were legitimised. Frederick and Erna Gibbs

characterise this attitude in their Atlas of Encephalography.

It would be wrong […] to disparage the use of indices and objective

measurements; they are useful and should be employed wherever

possible. But a “seeing eye” which comes from complete familiarity with

the material is the most valuable instrument which an

electroencephalographer can possess; no one can be truly competent until

he has acquired it. (Gibbs and Gibbs, Atlas of Encephalography 1941.

preface. Quoted by Galison 1998: 335)

In a later edition they counsel,

66

Accuracy should not be sacrificed to objectivity; except for special

purposes analysis should be carried on as an intellectual rather than an

electromechanical function. (Gibbs and Gibbs, Atlas of

Encephalography, Vol. 1, 1958: 112-113. Quoted by Galison 1998: 335)

However, the subjective intellectual analysis required of twentieth century

scientists was not the same as the ‘sensitivity’ required of seventeenth-century

natural philosophers. The goal of seventeenth-century representation in science

was a true image of nature, which required the subtle sensibilities of a genius. The

goal of twentieth-century representation was the means to classify and manipulate,

which required training and experience but, in principle, was attainable by

anyone.

Galison’s description of the interpreted image in science is closely related

to Thomas Kuhn’s account of the socialisation of scientists10. In The Structure of

Scientific Revolutions (1970) the interpreted image stands in metonymically for

the process of socialisation itself,

Looking at a contour map, the student sees lines on paper, the

cartographer a picture of a terrain. Looking at a bubble-chamber

photograph, the student sees confused and broken lines, the physicist a

record of familiar subnuclear events. Only after a number of such

transformations of vision does the student become an inhabitant of the

scientist’s world, seeing what the scientist sees and responding as the

scientist does. (Kuhn 1970: 111)

Such examples support the notion of images in modern science being fully

integrated into particular practices in science rather than acting as independent

records.

With his historical scheme, Galison is able to condense the evolution of

image makers and pictorial representation in science into the following epigram:

“Genius to manufacturer to trained expert; metaphysical image to mechanical

image to interpreted image” (Galison 1998: 354). An important distinction is

made between the role of the audience in the twentieth-century and its role in

earlier times. For different reasons, the metaphysical image and the mechanical

image assume passivity on the part of the viewer. The former because it is

10 For a commentary on Kuhn’s account of training in science see Barnes 1982: Chap. 2.

67

(simply) true and the latter because it is nature speaking for herself. On the other

hand, the interpreted image is intrinsically demanding. Viewers must learn to

‘read’ images and bring judgement to bear on them. Thus the assumed spectator

becomes an assumed reader (Galison 1998: 354) and the distinction between

scientific and popular contexts for images noted earlier (page 62) is placed in an

historical context.

The Interpreted Image and the Significance of Context

It is only with the emergence of the interpreted image in the twentieth

century that the difference between scientific and popular contexts for images

takes on significance. In the eras of the metaphysical and mechanical image there

may have been a difference between how members of a research community made

sense of their images and how an outsider might have done so. But there was no

essential difference between the rhetoric of an image in a scientific context and its

rhetoric in a popular one11. The implicit claim made by each type of image is that

attending to it is like observing the real world by proxy and this is true in both

popular and scientific contexts. The interpreted image, however, explicitly

encodes theoretical assumptions and experimental practice; it is not just a record

but a tool also. One consequence of this shift is that, with the interpreted image,

the conventional becomes visible.

If an image signifies ‘truth to nature’ as in the case of the metaphysical

image or ‘nature speaking for herself’ as in the case of the mechanical image then

the ‘conventional’ nature of signifiers in the image will be ignored. This is

because conventional aspects of the image have no role other than establishing the

correspondence between the image and the world. The situation is different,

however, if an image is understood as a tool for classifying the world and

intervening in it. In the case of the interpreted image, representational conventions

are understood with respect to the function the image performs in the scientific

process.

Conventional and arbitrary elements are, of necessity, integral aspects of

all images but we do not always choose to pay them specific attention.

Nevertheless, there is much that is conventional about the process of creating an

11 The basis for a rhetorical analysis of images is explored in greater depth in Roland Barthes’ essay, The Rhetoric of the Image (in Barthes 1977).

68

image from observation with, say, a microscope. For instance, the resulting image

will likely conform to conventions of linear perspective and chiaroscuro. Because

such rendering techniques are conventional (indeed, they are so conventional that

they seem natural), the shape of microscopic objects may be easier to discern

from an image than from direct observation12. This is clearly the case with Robert

Hooke’s observations and engravings of a flea discussed below. Observation with

a microscope and ordinary perception are very different processes13 and we have

more experience of making sense of images than of making sense of our

assisted perceptions.

Image makers are faced with a series of choices, from whether to use

colour to what type of cross-hatching the engraver should use. These represent the

arbitrary elements of the picture. The decisions together constitute a ‘visualisation

idiom’. For seventeenth-century natural philosophers, naturalism was the goal of

the visualisation idiom. For nineteenth-century atlas makers, it was important that

the choices they were forced to make were seen not to interfere with the status of

the image as an objective record of reality – the visualisation idiom was realistic.

For twentieth-century image makers, the choices must be seen to assist the image

user in the particular purpose to which he or she puts the image. Neither

naturalism nor realism is an important imperative.

                      17th & 18th Century       19th Century                 20th Century
Image Maker:          Genius                    Manufacturer                 Trained Expert
Viewer:               Viewer                    Viewer                       Reader or User
Images:               Metaphysical              Mechanical                   Interpreted
‘Genre’/philosophy:   Naturalism                Realism                      Pragmatism or Functionalism
What is represented:  A true picture of nature  Nature speaking for herself  A summary of results following
                                                                             an intervention with nature

Table 3 Summary of three main attitudes towards images in science

Disciplinary (Local) Imperatives for Image Makers

Naturalism, realism and functionality are the ‘global’ imperatives that

determine the visualisation idiom for metaphysical, mechanical and interpreted

images. But, within each era, there is a wide range of practices and interests that

12 For an account of the development of conventional perspective see Kubovy 1986. 13 For a philosophical discussion of just how different (or similar) they are see Hacking 1985 and Seager 1995.

69

also influence the choices involved with the arbitrary aspects of images. These we

can call ‘local’ imperatives. The complex relation of local and global imperatives

is explored below with a few examples.

In 1665, Robert Hooke published Micrographia, a collection of engravings

from observations with a microscope. One of his subjects was a flea (Figure 5

page 70). The engraving (Hooke, 1665, scheme 34) was reproduced as a folded

plate that, when extended, measures almost half a metre in length. For Hooke,

microscopy was a process of seeing beyond the confusing views of alien forms

that the microscope presented to the eye.

I indeavoured (as far as I was able) first to discover the true appearance

and next to make a plain representation of it… I never began to make any

draught [Sic.] before by many examinations in several lights, and in

several positions to these lights, I had discover’d the true form. For it is

exceeding difficult in some Objects to distinguish between a prominency

and a depression, between a shadow and a black stain, or a reflection and

a whiteness in the colour. (Robert Hooke, 1665, Preface).

Figure 6 is a photograph of a flea made by Brian J. Ford with a compound

microscope constructed by Christopher Cock similar to the one that Hooke used.

Hooke also used a simple (single lens) microscope that allowed more detail to be

resolved but was less convenient to use. Both the field of view and the depth of

field were very narrow in the simple microscope, which meant that Hooke would

have been unable to see all the features on the flea simultaneously. Instead, he

would have ‘built up’ a picture of the flea bit-by-bit by moving it beneath the

eyepiece and by continually adjusting the focus up and down.

For Hooke, a ‘true’ image was one that reproduced essential aspects of the

object, not one that reproduced the scientist’s view of the object. This was an

‘aperspectival’ objectivity (a view from nowhere) rather than a mechanical

objectivity.

70

Figure 5 Engraving of a flea from Robert Hooke’s Micrographia, 1665.

Figure 6 Photograph of a flea made with a compound microscope similar to the one used by Hooke (Brian J. Ford 1992: 182)

71

An opposing attitude is revealed in photographs and comments by Erwin

Christeller in an atlas on histology14 published in 1927. The images in the atlas

were produced directly from the original anatomical preparations. Thus,

Christeller believed he had removed a stage in the process that would have

allowed his images to be polluted by his own preconceptions. The samples in the

photographs exhibited several artefacts including tattered edges and an absence of

soft tissue components. Christeller raises the absence of corrections in the

photographs to a virtue, which prompts Daston and Galison to compare the

tattered edges of the samples to the deliberate and humbling fault in a Persian

carpet (1992: 114). For Christeller, an image should be as free from human

intervention as possible. This way it can act as evidence in its own right.

It is obvious that drawings and schemata have, in many cases, many

virtues over those of photograms. But as means of proof and objective

documentation to ground argumentation [Beweismittel und objektive

Belege für Begrunde] photographs are far superior. (Erwin Christeller,

quoted and translated by Daston and Galison 1992: 114)

Christeller and Hooke’s attitudes to visual records and the mediation of

scientific instruments are very different but they are just two strategies for

satisfying one of the principal motivations driving scientific publishing. This is

what Steven Shapin and Simon Schaffer call ‘virtual witnessing’ (1985: 60) – the

idea that a scientific experiment might be properly observed via the agency of a

publication rather than by being physically present15. Since the scientific

revolution16 various alternative attitudes to the role of images in science have

evolved and coexisted. Robert Boyle was a contemporary of Hooke. His New

14 Erwin Christeller, Atlas der Histopographie gesunder und erkrankter Organe, Leipzig 1927. 15 Shapin and Schaffer explain the development of virtual witnessing in the context of the conflict between rationalist philosophers and the emerging community of experimentalists in the late seventeenth-century. Robert Boyle sought to secure assent by way of the experimentally generated matter of fact. Witnessing played a central role in establishing fact and for experimentalists of the early Royal Society, witnessing was a collective act since, as in criminal law, the reliability of testimony depends on its multiplicity. Multiplying witnesses thus becomes a basic goal of science and the “technology of virtual witnessing” is the most powerful way to achieve the multiplication (more efficient than facilitating the replication of experiments or performing experiments in public). 16 The coherence of the term ‘scientific revolution’, which for so long was invoked uncritically to signify the historical moment from Galileo to Newton that saw the introduction of experimental science and the establishment of institutions such as the Royal Society has been questioned by Steven Shapin (1996). It has the advantage, however, when used in an imprecise way (as here) that it is broadly understood.

72

Experiments Physico-Mechanical, Touching the Spring of the Air (1660) included

an engraving of an air pump (Figure 7 page 73). As Shapin and Schaffer point out

(1985: 61-62), the engraving is naturalistic and the level of detail indicates that it

is meant to represent a particular air pump rather than, say, the principal features

of air pumps in general. The intention was to place the reader/viewer at the scene

of the particular experiments described. Doing so signalled that “this was really

done as stipulated” and thus facilitated virtual witnessing (Shapin and

Schaffer 1985: 62).

Boyle’s use of visual evidence in New Experiments contrasts with that of

Hooke in his Micrographia. Both are naturalistic, but they co-opt virtual

witnesses in contrasting ways. Hooke summarises a series of observations to

present a representation that is (plausibly) ‘true to nature’. Boyle provides the

means for virtual witnesses to project themselves to the scene of a particular

intervention with nature much as a playwright provides enough detail for audience

members to place themselves at the scene of the action. In addition to his

engravings of microscopic subjects, Hooke also assists his virtual witnesses in

precisely this way. His first plate (1665, scheme 1) is a collection of engravings of

his instruments including the actual microscope he used. This is accompanied by a

detailed verbal description of how they were used and the problems he had to

overcome.


Figure 7 An engraving of Robert Boyle’s air pump that appeared in his New Experiments Physico-Mechanical (1660). The attention to detail in the image facilitates ‘virtual witnessing’.

There are many ways of satisfying the demands of a virtual witness. At all

moments in its history, science is characterised by flexibility and sophistication

with respect to the issues involved in representation. Scientists are not only inventive image makers but also imaginative about the epistemological issues raised by visual evidence and about the rhetoric of images.

This gives us another axis along which to divide popular and scientific

contexts for scientific images. The scientific context of an image includes the

prevalent attitude within a discipline to the use of visual evidence. Sometimes this

attitude is contested; sometimes it is contrasted with alternatives. The image itself


is made meaningful with reference to these tensions. When it is transferred to a

popular context, the standards by which it is judged are those associated with its

new environment, not the original standards that guided its creation. The attitude

to visual evidence that a viewer will bring to a popularised image is likely to be

inappropriate and impoverished by comparison with its original context.

In visual culture generally, there is a spectrum of ways in which viewers

may orient themselves to images. At one end of the spectrum is a passive,

uncritical gaze. At the other, there is a critical, reflexive and flexible gaze. The

way scientists orient themselves to visual evidence is generally at the latter end of

this spectrum – scientists are active interpreters of visual evidence. However,

sophisticated as it may be, the visual culture of science is not divorced from wider

culture. Attitudes that prevail in science represent a dialogue with the culture

within which it is embedded. Daston and Galison’s account of atlas makers tracks

the moralisation of objectivity. The emergence of mechanical objectivity is

explained in this context. For the Victorians, the metaphor of the machine was

important for three reasons. Firstly, it suggested perfection and standardisation

(akin to the ideal of repeatability in experiments). Secondly, it “embodied a

positive ideal of the observer: patient, indefatigable, ever alert, probing beyond

the limits of the human senses” (Daston and Galison 1992: 120). Daston and

Galison draw a comparison between the way scientists admonished themselves

with the example of the more attentive, more hardworking, more honest

instrument and the way factory bosses measured their workers against more

productive machines. Thirdly, the machine “held out the promise of images

uncontaminated by interpretation” (Daston and Galison 1992: 120). This is the

main reason photography quickly acquired its status as an independent and

reliable witness.

Nonintervention, not verisimilitude, lay at the heart of mechanical

objectivity, and this is why mechanically produced images captured its

message best. Images had always been considered more direct than

words, and mechanical images that could be touted as nature’s self-portrait were more immediate still. Thus images were not just the

products of mechanical objectivity; they were also its prime exemplars.

(Daston and Galison 1992: 120).


Understanding the conceptual shift represented by popularised images

requires sensitivity to the way scientists orient themselves to visual evidence, as

this is a key difference between popular and scientific contexts. However, there is

no one way in which images are used in science. The problem of characterising

the way scientists orient themselves to visual evidence is akin to the problem of

identifying a definitive ‘scientific method’17. The use of images in science

presents us with a bewildering variety of standards and priorities. The attitude

adopted by scientists is intimately related to the visual culture in which they are

embedded; yet, scientists often articulate their standards explicitly. In this way,

they are not unlike other groups of image makers such as reportage and

advertising photographers. The standards of honesty and objectivity are discussed

explicitly in these groups too. For photographers, however, the audiences for the

images generally do not contribute to the debate (about regulation of the industry)

to the same extent as the (primary) audiences for scientific images.

For scientific uses of images, there is no real difference between the image

makers and their audience, and this is an important distinction between scientists

and many other categories of image maker. Image making is (and always has

been) an important part of scientific practice. There is no significant difference

between image makers and image viewers in science just as there is no significant

difference between a scientist who designs an experiment and a scientist who

scrutinises the design when it is written up in a journal.

In addition to understanding the difference between scientific and popular

contexts (and thus the distinction between scientific images and their popularised

counterparts), we also need to understand how scientific images signify in each

context. In particular, we need to be able to account for the choices made by

image makers – the ‘arbitrary elements’ discussed above. Below, some general

issues of representation in scientific images are discussed. This leads on to a

discussion of imaging techniques. Eventually we will end up with a scheme for

classifying scientific images and accounting for differences in the sense made of

popular images.

17 This problem, and a range of putative methods, are discussed in Gower 1997. Essentialist philosophies of science are discussed in greater depth in the next chapter. See also Feyerabend 1988 and Barnes 1974: Chap. 3 on the futility of trying to isolate a definitive method.


Images and Interventions: Classifying Scientific Imaging Techniques

Making Invisible Objects Visible

Much scientific effort is directed to making phenomena comprehensible by

making them visible. Recognising and representing features of ‘invisible’ objects

is a two-stage process. First, we are aware of codes that relate representations of

objects to our perceptions of the objects themselves. (That is, we are familiar with

seeing trees and seeing pictures of trees and understanding what pictures can tell

us about actual trees.) Second, we borrow these established codes to render the

information gleaned from devices such as microscopes or radio telescopes

intelligible. The way the images relate to the actual objects they represent is

inferred from our familiarity with the code adopted. In other words, images of

objects beyond human perception always involve reasoning of the type: the

relation of this image of a flea to an actual flea is the same as the relation of an

image of a tree to an actual tree. We only have direct knowledge of the latter

relation; our knowledge of the former relation is an inference18.

Codes of representation are generally motivated by the optical properties

of visible objects. Thus, for instance, we signify shininess by making a specular

highlight a feature of a representation and this has a direct relation to how light

actually interacts with shiny objects. A highlight is not itself ‘shininess’ (which is

a much more complex idea) but a signifier of the idea of shininess. Optical

properties that form the basis of codes of representation are arbitrarily attached to

representations of invisible objects (objects beyond human vision). For example,

the shape (determined with the scanning tunnelling microscope) of the corral of

iron atoms on the surface of copper metal (Figure 13, page 118) is made visible by

arbitrarily attaching properties such as colour, shininess, etc. that properly belong

only to visible objects.

Recognition requires familiarity. To recognise properties of invisible

objects, we associate them with a coherent set of optical properties. These are

chosen to aid recognition by analogy. For instance, we can conclude from

Hooke’s engraving that components of a flea’s exoskeleton are curved like the

18 William Seager’s discussion of ‘ground truth’ (in Seager 1995) explores this theme in much greater philosophical detail.


metal plates in a suit of armour because they are shown reflecting light in a way

we are already familiar with from armour (and a limited set of other familiar

objects). Hooke makes this analogy with armour explicitly. Verbally, he describes

the flea as being,

all over adorn’d with a curiously polish’d suit of sable Armour, neatly

jointed, and beset with multitudes of sharp pinns, shap’d almost like

Porcupine’s Quills or bright conical Steel-bodkins. (Hooke, 1665,

Observation 53 Of a Flea: 210)19

In addition to borrowing the optical properties of familiar objects, imaging

invisible objects also involves borrowing familiar lighting regimes. Human vision

and experience predisposes us to particular ways of lighting scenes. We are best at

gleaning information from objects that are lit from above by a mixture of direct

and ambient light. The optimum ratio of direct to ambient light is determined

more by Earth’s atmosphere than anything else. On the moon, there is very little

ambient light and the shadows are very hard. Because of this, Apollo astronauts

had difficulty judging the size and distance of rocks and mountains. On the

surface of Venus the opposite is the case. The planet is coated in thick cloud,

which means there is no direct sunlight. Any astronauts on Venus would, like the

Apollo astronauts, have problems getting information from their visual sensations,

though because flat lighting conditions are more familiar to us than very direct

lighting, the problems would be less severe.

In imaging invisible objects, scientists choose (where they can) to

‘illuminate’ them in ways that are familiar to us however alien the landscape they

are representing. This usually means that iso-surfaces, molecular models, and

other objects that feature in scientific images appear to be lit as if outside in

sunlight. Choosing a familiar lighting regime ensures that use can be made of the

image – it allows us to discern intrinsic properties of the object being represented

(that is to distinguish its intrinsic properties from the fortuitous aspects of the

representation).
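The mixture of direct and ambient light described above can be sketched as a simple shading rule. This is a minimal illustration, not any particular rendering system; the function name and the 0.3/0.7 split are illustrative assumptions.

```python
import math

def shade(normal, light_dir, ambient=0.3, direct=0.7):
    """Brightness of a matte surface point under a familiar lighting
    regime: ambient light plus direct light from a single source.
    The ambient/direct ratio is a choice; setting ambient near zero
    gives the hard 'lunar' shadows discussed in the text."""
    def unit(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n, l = unit(normal), unit(light_dir)
    # Lambert's cosine law for the direct component; ambient is constant.
    return ambient + direct * max(0.0, sum(a * b for a, b in zip(n, l)))

top = shade((0, 0, 1), (0, 0, 1))      # surface facing the light
shadow = shade((0, 0, -1), (0, 0, 1))  # facing away: ambient only
```

Raising the ambient term flattens the scene towards Venusian lighting; setting it to zero reproduces the hard shadows that troubled the Apollo astronauts.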

In conclusion, imaging invisible objects is a process of lending them the

optical properties of familiar objects. Imaging or visualisation involves making an

19 For further observations on Hooke’s Micrographia see Kemp 1998, Hooke’s Housefly (One of Kemp’s weekly series of observations on scientific images in Nature).


object fit into our experience of sight (which includes fitting it into our

experience of representation). Thus, a visualised object is one that we could

almost expect to come across in our daily life. Imaging is a process of analogy.

The objects that are referenced in the analogy are not only familiar, they are also

likely to be things we have strong feelings about – they are culturally and

emotionally charged. In short, they are things that we are good at looking at (for

whatever reason that may be). One class of object, for example, that fits this

criterion is landscapes; another is architecture. The visual properties of each can

be employed piecemeal in scientific imaging and visualisation. Shared experience

of landscape; cultural notions that determine their significance; and codes of

landscape representation are all assumed, employed and appropriated in rendering

invisible objects.

This idea of employing familiarity is incorporated into explicit

methodologies in ‘scientific visualisation’ (see page 97) in what Philip Robertson

calls the ‘natural scene paradigm’, which he explains with reference to the aims of

visualisation.

Underlying the concept of visualisation is the idea that an observer can

build a mental model, the visual attributes of which represent data

attributes in a definable manner. (Robertson 1991: 59)

This raises questions about what kinds of mental models carry what kinds

of information effectively and how to induce a chosen mental model in the mind

of a viewer. One approach to such questions is the natural scene paradigm. This is

“based on our ability to glance at a scene and gain an immediate appreciation of

its 3D surface structure, what the surface is covered with and even the condition

or state of that surface covering” (Robertson 1991: 59). The natural scene

approach involves:

1. using clearly and easily understood models such as 3D structures or

scenes,

2. representing data variables by the recognisable properties of the

objects or scenes, and

3. inducing them in the observer’s mind by using graphics scene

simulation techniques. (Robertson 1991: 59)
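Robertson’s three steps can be illustrated with a minimal sketch. The variable names are hypothetical, and the mapping of data variables onto scene attributes is precisely the arbitrary choice at issue.

```python
import numpy as np

def natural_scene_mapping(temperature, pressure):
    """Map two data variables onto attributes of a natural scene:
    the first drives surface height (3D structure), the second a
    grey level standing in for the 'surface covering'."""
    def normalise(a):
        a = np.asarray(a, dtype=float)
        return (a - a.min()) / (a.max() - a.min())

    height = normalise(temperature)  # data variable 1 -> terrain shape
    covering = normalise(pressure)   # data variable 2 -> surface shade
    return height, covering

# A toy 2x2 grid of readings.
height, covering = natural_scene_mapping(
    temperature=[[0.0, 10.0], [20.0, 30.0]],
    pressure=[[1000.0, 990.0], [980.0, 1013.0]],
)
```

The final step of the paradigm, inducing the model in the observer’s mind, would hand these arrays to whatever graphics scene-simulation software is available.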


The importance of familiarity is demonstrated by the experience of

astronomers at the Australia Telescope National Facility struggling to represent

radio astronomy data. Volume rendering20 techniques had been co-opted from

medical applications (see Gooch 1995) but for the astronomy data it was difficult

to discern the shape of objects on the screen.

Objects that the user is familiar with, like the skull of a human being, can

be reproduced in a very rough way, and the brain will still recognise the

scene. We have found that for radio data cubes this is not the case. The

calculated image often appears as just a coloured blob on the screen.

Even if we try surface shading to enhance the three-dimensional

perception, the effect is sometimes still poor. The main problem is that

the brain does not perceive the structure of an object just by looking at a

three-dimensional representation of it, because it does not recognise it.

This may get better with time, as astronomers gain experience, but part of

the problem will remain since one of the reasons to observe objects is

that we do not know what they look like! (Oosterloo 1996)

The attempts to reveal the form through shading are attempts to lend an

unfamiliar object the properties of familiar ones but, in this case, the strategy was

inadequate. Instead, the problem was solved by producing animations in which the

object is rocked from side to side. This allows the viewer to discern the form

through motion parallax, which is one of the most powerful depth cues in human

perception (Kaufman 1974: 234-239).
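The rocking strategy can be sketched as a schedule of rotation angles for an animation. This is a minimal illustration; the frame count and amplitude are assumed values, not those used at the Australia Telescope.

```python
import math

def rocking_angles(n_frames=60, amplitude_deg=15.0):
    """Rotation (in degrees, about the vertical axis) for each frame
    of a 'rocking' animation: the rendered object swings sinusoidally
    from side to side, so motion parallax reveals its 3D form."""
    return [amplitude_deg * math.sin(2 * math.pi * i / n_frames)
            for i in range(n_frames)]

angles = rocking_angles()
```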

Representation and Invisible Objects: Primary and Secondary Qualities

Appropriating the visual properties of things we are good at looking at

allows us to bring visual skills acquired in other contexts to bear on problems of

science. However, it also has other consequences for the popularisation of scientific images. Hooke’s flea is startling and monstrous not because of its detail but

because of its size. It does not seem to us to be a close-up picture of a flea so much

as a picture of a big flea21. This is because the detail has been represented using

20 ‘Volume rendering’ is a computer graphics term that refers to representing objects that are fully three-dimensional, that is, with internal structure (e.g. human bodies, smoke, etc.) rather than objects that can be represented merely as surfaces. See Kaufman, et al 1994.

21 This was pointed out to me by a group of art students to whom I showed the picture. They all agreed that it did not look like a close-up view.


conventions associated with large objects – we learn about the flea by analogy. As

Hooke himself puts it,

Little Objects are to be compar’d to the greater and more beautiful Works

of Nature. A Flea, a Mite, a Gnat, to a Horse, an Elephant, or a Lyon.

(Hooke, 1665, preface)

In contrast to Hooke’s engraving, the photograph by Brian J. Ford (Figure

6, page 70) looks much more like a close-up view of a small flea than a picture of

a big flea. The difference is even clearer in micrographs made with simple

microscopes (see, for example, the micrograph of a louse’s head in Brian J. Ford,

1992: 183). This is due in large part to the artefacts associated with microscopy –

the coloured fringes and the very limited depth of field. The ‘look’ of micrographs

(the subtle distinguishing visual quality that, with experience, we come to

associate with them) places us in a particular relation to the represented object.

This is true of any imaging technique and the visualisation idiom associated with

it.

Scientific imaging techniques will almost always result in a deficit that

needs to be filled (particularly when they involve representing invisible objects).

Thus, the image maker is faced with choices about how to fill it. The deficit can

be explained with reference to the distinction John Locke makes between

‘primary’ and ‘secondary’ qualities. In considering how we gain knowledge of

objects and what we know of them, Locke distinguishes qualities that seem to

intrinsically belong to the object itself from qualities that seem to depend on the

union of the object and an observer. His list of the former includes solidity,

extension, figure, texture, number and mobility and these he called primary

qualities. (Today we would probably express these qualities differently and the list

would include mass, volume, surface area, shape, acceleration, velocity, etc.)

Qualities thus considered in Bodies are, First such as are utterly

inseparable from the Body, in what estate so ever it be.

(Locke 1979/1689, Chap. 8, §9: 134)

The crucial point about Locke’s primary qualities is that we never have

direct sensory access to them. Instead, we learn about the shape, mass, etc. of

objects through the colour of their surface or the force we detect with our sense of

touch, etc.


Such Qualities which in truth are nothing in the Objects themselves, but

Powers to produce various Sensations in us by their primary Qualities i.e.

by the Bulk, Figure, Texture and Motion of their insensible parts, as

Colours, Sounds, Tasts, etc. These I call secondary Qualities.

(Locke 1979/1689, Chap. 8, §10: 135)

Thus, we end up with two interdependent but epistemologically distinct

categories. Primary qualities are intrinsic to bodies. “Ideas of Primary Qualities of

Bodies, are Resemblances of them, and their Patterns do really exist in the Bodies

themselves” (Locke 1979/1689, Chap. 8, §15: 137). Secondary qualities such as

colours and smells are brought about by primary qualities and they are the means

by which we learn about primary qualities but they do not belong to bodies

themselves. “[T]he Ideas, produced in us by these Secondary Qualities, have no

resemblance of them at all. There is nothing like our ideas existing in the Bodies

themselves.” (Locke 1979/1689, Chap. 8, §15: 137).

Whatever the epistemological significance of such a distinction22, it

provides a useful way to approach questions of representation. In particular, it

gives us a way of breaking down the process of representing invisible objects. The

practice of arbitrarily associating optical properties with some aspect of an object

takes the distinction of primary and secondary qualities for granted. We cannot

know primary qualities other than in association with secondary qualities. For

instance, although we can think of ‘shape’ in an idealised or Platonic way, we

cannot imagine an object’s shape other than being defined by its colour or feel.

Thus, we can easily express the shape of a surface mathematically but to visualise

the surface we need to make a whole series of arbitrary decisions about how the

surface reflects light, etc. In addition, we need to decide how to ‘illuminate’ it

(that is, where to place the ‘virtual’ lights in the scene) and where to look at it

from. In a real surface, the secondary qualities (how it reflects light) would, of

course, arise spontaneously. They would arise from primary qualities of the

surface itself – for instance, they would depend on the molecular or atomic

structure of the material the surface was constructed from. Our mathematical

surface, in contrast, has just one primary quality (shape).
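The point can be made concrete with a sketch: the surface below has exactly one primary quality, its shape, while everything required to visualise it is a separate, arbitrary decision. All the names and values here are illustrative assumptions.

```python
# The surface's single primary quality: its shape.
def surface_height(x, y):
    return x * x - y * y  # a saddle, expressed purely mathematically

# Everything needed to *see* the saddle is a decision by the image
# maker, not a property of the surface itself.
rendering_choices = {
    "material": "matte grey plastic",    # how the surface reflects light
    "light_position": (0.0, 0.0, 10.0),  # 'sunlight from above'
    "ambient_fraction": 0.3,             # soft fill light
    "camera_position": (3.0, 3.0, 2.0),  # where we look from
}
```

Swapping ‘matte grey plastic’ for glass or water would change none of the mathematics, but it would change the connotations the image carries.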

22 For a critique of Locke’s epistemology see Berkeley (1975/1734) especially Part 1, paras. 7-20 (pp 79-83). For a discussion of Locke and Berkeley’s use of primary and secondary qualities see Hospers 1973: 496-502.


We form strong associations between primary qualities and secondary

qualities. Surfaces that look like plastic carry with them ideas about plastic. The

choice of secondary qualities, then, has the power to imbue a surface with primary

qualities that it does not possess. This makes it very difficult to represent primary

qualities in isolation. In borrowing secondary qualities to make an object visible,

we inevitably end up carrying ideas about their associated primary qualities along

with them. If we make the mathematical surface look like glass, it may also seem

brittle. If we make it look like water, the shape of the surface may seem dynamic

as if it was a moment caught with a high-speed camera; it may also carry other

connotations associated with water such as coolness or relaxation. These

connotations will affect the sense we make of the visualisation even though there

is only one primary quality being represented. Thus, image makers need to be

aware of the connotations and be in control of them.

The strong relation between primary and secondary qualities is desirable

when representing visible objects. It means that materials can be recognised with

very few clues. Strong ideas about scenes can be invoked with just a few brushstrokes in paintings. Also, video games can create a sense of realism without

having to display and animate high levels of detail (thus minimising the

computational overhead). In addition to there being a pre-existing association

between a particular secondary quality and the set of primary qualities that give

rise to it, there are also established codes for representing the secondary qualities.

These codes are also invoked when secondary qualities are borrowed to represent

invisible objects. If we choose the optical properties of plastic to make a

mathematical surface visible then not only do we end up with a surface that

carries connotations of plastic but also with a picture that carries connotations of pictures of

plastic. That is, the image is understood in relation to all pictures where plastic is

represented. Its location in this set of images also affects the way we orient

ourselves to it. If a different code were invoked then a different set of images

would act as reference.

The inevitable reference to genres and codes embeds all visual

representation firmly in a cultural context. There is no scope for ‘pure’

representation. Neither the “plain representation” sought by Hooke (see page 69)

nor the unmediated record sought by Christeller (see page 71) were ever actually

attainable ideals. While this does not mean that image makers cannot achieve


their scientific goals it does mean that they may lose control of the meaning of

their image if it moves outside the context in which it was created. This issue is

addressed later. Next, we need to adapt the notion of primary and secondary

qualities to include invisible objects.

Locke’s own list of primary qualities includes only those qualities that we

can know through our senses (mediated by secondary qualities). For instance, we

can see motion and feel solidity. If we had other senses then there would be other

primary qualities that we would have access to, so the list is not exhaustive. In

addition to the ‘sensible’ primary qualities, we could augment Locke’s list with a

whole series of other qualities such as dielectric constant, temperature, heat

capacity, electric charge, magnetic moment, Young’s modulus, etc. In principle,

there is no difference between these and Locke’s original list. The deficit

mentioned above in relation to representing invisible phenomena is the lack of

secondary qualities to allow a feature (such as one of these insensible primary

qualities) to be subject to our sense of vision. As we have seen, much imaging in

science involves representing these ‘insensible’ primary qualities by arbitrarily

associating them with secondary qualities normally associated with sensible

primary qualities of familiar objects.

The significance of associating primary qualities arbitrarily with secondary

qualities is a matter of degree. Let us compare, for example, infrared photography

and magnetic resonance imaging. The way an object reflects light is a primary

quality (or set of primary qualities) of it. The way it reflects near-infrared light

will be fairly similar23. An image maker can use an instrument to collect data

about the reflection of infrared light in a scene and then display these data by

treating the values as if they represented the reflection of visible light. A scene

illuminated with infrared light is so close in kind to one illuminated by visible

light that we usually do not even think of the visualisation process as having two

steps. Rather, we say simply that we can see in infrared by using an infrared

camera. The secondary qualities that are ‘borrowed’ correspond quite closely with

the primary qualities that are the origin of the data. Comparatively few arbitrary

choices need to be made to create a comprehensible image. On the other hand,

23 Light is a form of electromagnetic radiation. Human eyes are sensitive to just a small part of the electromagnetic spectrum. Infrared light is physically very similar to visible light but its wavelength is slightly longer so it does not stimulate the light sensitive cells in our retina.


magnetic resonance imaging (MRI) is so removed from human vision that many

more secondary qualities need to be borrowed and their relation to the primary

qualities that are the origin of the data is much more arbitrary.

Another way to look at the difference between the two is to observe that

MRI provides an image maker with more scope when it comes to representing

physical features than infrared photography. Because the association of primary

and secondary qualities is so much more arbitrary, there is a greater choice of

secondary qualities available. (The MRI community itself determines the actual

limits of the legitimate practices.) The variety of ways of displaying infrared data

is more limited.
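The comparatively constrained case of infrared can be sketched in a few lines: the readings are simply scaled to grey levels, and that scaling is almost the only arbitrary choice. This is a minimal illustration with made-up readings, not the behaviour of any real camera.

```python
def infrared_to_greyscale(intensities, levels=256):
    """Display infrared reflectance readings by treating them as
    visible brightness: each value is scaled linearly to a grey
    level between 0 (darkest) and levels - 1 (brightest)."""
    lo, hi = min(intensities), max(intensities)
    return [round((v - lo) / (hi - lo) * (levels - 1)) for v in intensities]

greys = infrared_to_greyscale([0.2, 0.5, 0.8])
```

An MRI visualisation, by contrast, would need many more such decisions: which quantity to display, what geometry, material, lighting and viewpoint to lend it, and so on.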

Classifying Imaging Techniques: Scientific Imaging and Human Vision

What is it about infrared photography that constrains representation in an

infrared photograph in a way that it is not constrained in MRI? Why does an

image maker have more arbitrary choices to make in constructing an MRI image

than an infrared photograph? The answer in both cases is related to the physical

similarity of the imaging technique to the operation of human vision. This gives

us a way to classify imaging techniques: we can sort them according to the degree

of similarity to human vision and this will indicate how significant the arbitrary

elements of a picture may be.

The similarity of an imaging technique to human vision can be measured

along several axes. For instance, one axis is the similarity between the radiation used and visible light: infrared is very similar to visible light and

electrons are very dissimilar. However, the way the radiation is employed must

also be taken into account and this gives us another axis on which to measure the

similarity to vision. In human vision, a scene is flooded with light that, on hitting

objects, is scattered in all directions. The portion of reflected light that reaches one’s

eye is focused by a lens and forms an image on one’s retina. On the other hand,

radar employs electromagnetic radiation in a very different way.

Microwaves, being a form of electromagnetic radiation, are physically

similar to visible light and scanning electron microscopy uses a stream of

electrons – apparently a very different form of illumination. However, the data

that are collected using radar are the time of flight of a pulse (the time between

emitting a microwave pulse and detecting its reflection). From this time, the


distance to the reflecting object is calculated and an image displayed that

corresponds to the positions of objects that reflect microwaves. This is very unlike

the way electromagnetic radiation is used in human vision. On the other hand,

many aspects of the way electrons are focused onto a sample and scattered from it

in an electron microscope bear a strong resemblance to the behaviour of light in

human vision and light microscopes24.
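The radar calculation mentioned above is simple: the pulse travels to the reflector and back, so the range is half the round-trip distance at the speed of light. This is a sketch of the principle only, not of any real radar system.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radar_range(time_of_flight_s):
    """Distance to a reflector from the round-trip time of a radar
    pulse. The pulse goes out and comes back, so halve the path."""
    return SPEED_OF_LIGHT * time_of_flight_s / 2.0

distance = radar_range(1e-6)  # a one-microsecond echo: about 150 metres
```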

Another axis is the similarity to the mechanism of human vision in the

detector. Many aspects of a camera, for example, bear a strong resemblance to the

operation of eyes. For example, light is focused with a lens onto an area at the

back. The layers of chemicals in the film that each react to a different part of the

visible spectrum can be compared to the three types of ‘cone’ cell found in the

fovea (the colour sensitive central part of the retina). A bubble chamber (used for

creating images of the paths of subatomic particles) works completely

differently25. There is no focusing or illumination. The image and the object

occupy the same space. Completely different physical principles give rise to the

image. Thus, a bubble chamber detector is far removed from the mechanism of

human vision. (Note, however, that to be useful, the image produced by a bubble

chamber must be recorded and this is done with photography. Physicists look at

photographs of bubble chambers rather than the chambers themselves but this

extra step does not bring this form of imaging any closer to human vision).

Yet another way to compare an imaging technique and human vision

concerns questions of resolution. For instance, we can compare the angular

resolution achieved by the technique with human vision. If it is far superior (for

example in the case of the Hubble Space Telescope) or far inferior (for example

electroencephalogram (EEG) imaging) then the technique is a long way from

sight. Closely related to this axis, we can compare the spatial resolution of a

technique (the physical size of the smallest discernible detail) with human vision.

24 Dennis Gabor, one of the pioneers of the electron microscope, lamented the fact that he and his colleagues received insufficient acclaim because it is a “very obvious invention” once one “combines the fact that an axially symmetric electric or magnetic field is an electron lens with wave mechanics” (Seager 1995: 462). See also Watt 1985.

25 A bubble chamber is a detector that reveals the path of charged particles. As a particle travels through a volatile liquid, it ionises (strips electrons from) some of the atoms it passes. These ionised atoms cause the surrounding atoms to switch from a liquid to a gaseous state and a tiny gas bubble forms. The path taken by the particle is visible as a string of these bubbles. See also Galison 1997: 313-431.

86

Along this axis, both scanning tunnelling microscopy (STM) and synthetic

aperture radar (SAR) are a long way from human vision. STM provides images of

surfaces with much finer detail than we usually experience them whereas SAR

images provide much less detail than our usual experience of landscapes.

We can also compare resolution in terms of other criteria such as brightness, hue, saturation, etc. Human vision can distinguish little more than 100 levels of grey between black and white. Computed tomography scanners (CAT scanners) are far more sensitive, being able to distinguish many hundreds or thousands of levels

between total opacity and total transparency26. On this axis, CAT scanners are a

long way from human vision but on the axis of spatial resolution they are much

closer (slightly inferior to human vision). On the other hand, human vision can

distinguish millions of different hues (that roughly correspond to different

wavelengths of light) but CAT scanners do not record information about the

wavelength of the x-rays that they detect. The technique is, in any case,

monochromatic – the object is ‘illuminated’ with radiation of a single wavelength.

Thus, the spectral resolution of the technique is inferior to human vision. The

absence of spectral information in CAT scans means that the secondary quality

with which this information is usually associated (hue) is available for other

purposes. In some CAT scan images, the ‘invisible’ (super-human) level of detail

in recorded brightness is represented by (arbitrarily) associating transparency with

colour – different values of brightness are mapped to different colours so that

subtle (invisible) differences can be discerned. Thus, the distance from human

vision along two axes (spectral resolution and intensity resolution) results in an

arbitrary association of primary and secondary qualities in CAT scans.
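The trade-off described above can be illustrated with a toy calculation. The sketch below is hypothetical: the 4096-level scanner scale is an assumption made purely for illustration, and the figure of 100 grey levels follows the rough estimate given in the text.

```python
# Two nearby scanner readings on a hypothetical 4096-level intensity scale.
a, b = 2010, 2040

# Collapsed into the roughly 100 grey levels the eye can distinguish,
# the difference between the two readings vanishes.
def grey(v):
    return v * 100 // 4096

print(grey(a), grey(b))                    # 49 49 - indistinguishable greys

# A colour-map with (say) 1000 distinct entries keeps them apart.
def colour_index(v):
    return v * 1000 // 4096

print(colour_index(a), colour_index(b))    # 490 498 - distinct colours
```

The point is not the particular numbers but that mapping otherwise ‘invisible’ differences in brightness onto hue makes them discernible.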

Applying a colour-map brings with it a new set of problems of

interpretation. A graded scale from black to white is quite straightforward to

26 CAT scanners are generally used for medical imaging. They produce a cross-section through a body, which reveals the density of different tissues. Many such slices can be combined to create three-dimensional representations of internal organs, etc. A thin beam of x-rays is directed through the subject and the intensity of x-rays that reach a detector on the opposite side is measured. The beam and detector are then rotated by approximately 1° and another measurement made. The process is repeated until data from 180° have been collected. These separate measurements can be combined to reveal a map of tissue density within a slice of the subject. Radiographers have used tomography since the 1920s but because the technique requires intensive calculations, it was not widely applied until computers were introduced to the process. The modern technique was developed at EMI Central Research Laboratories by Godfrey Hounsfield in 1972, building on theoretical work by Allan Cormack.

87

interpret quantitatively. Adding colour complicates the process. According to a

radio astronomer interviewed as part of an ethnographic study into astronomical

imaging (Lynch and Edgerton Jr 1988),

Colour has been oversold by the computer industry itself. The industry

has told many people that the eye is sensitive to fewer levels of grey than

colour. But the eye gets confused. You still do the best by showing the

most subtle detail by using grey than any but the most disciplined use of

false colour. (Lynch and Edgerton Jr 1988: 193)

The choice of colour-map is not determined by the data themselves so, in a

medical or research setting, we would look for the motivation for the choice in the

purpose to which the image is put. It could be, for instance, that a narrow set of

values corresponds to diseased tissue. If it were the presence of such tissue that a doctor was looking for, then the colour-map might represent this narrow set of values as a particular shade of red and all other values as shades of grey. Diseased tissue would then be very obvious in the image. Such an image

incorporates theories of radiology and image processing, theories of anatomy, the

function of the image and the audience for it as well as commonplaces such as

associating red with hazards.
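A colour-map of the kind just described can be sketched in a few lines of code. This is a minimal illustration only: the intensity band standing in for ‘diseased tissue’ and the choice of red are hypothetical, not taken from any actual radiological convention.

```python
def highlight_map(value, band=(0.62, 0.68)):
    """Map a normalised intensity (0.0-1.0) to an (R, G, B) colour.

    Values inside the hypothetical 'diseased tissue' band are shown
    in red; all other values become plain shades of grey.
    """
    lo, hi = band
    if lo <= value <= hi:
        return (255, 0, 0)        # the narrow set of values of interest
    g = int(value * 255)          # everything else: greyscale
    return (g, g, g)

# A row of scan values: only the band of interest stands out in red.
row = [0.10, 0.50, 0.65, 0.90]
print([highlight_map(v) for v in row])
```

Everything about the result that is red is determined not by the data but by the purpose built into the map, which is exactly the sense in which such images incorporate theories and conventions.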

In a medical context, the choices that determine the image and the

motivation behind the decisions are apparent. The connotations carried by the

visualisation idiom are understood with respect to the social and technical context

that led to them being invoked in the first place. In a popular context, by contrast, all the decisions are invisible. A CAT scanner that produces such images is treated as

‘a machine that can see diseased tissue’. That is, it is apparently free of human

motivation and merely records and reports an actual state of affairs. The scans that

result and any connotations they carry therefore appear natural just as an ordinary

photograph appears to be a natural representation. (The appearance of objectivity

results from nonintervention rather than verisimilitude – see page 74.)

To reiterate, the ‘look’ of a scan is understood in a medical context with

reference to its purpose. In a popular context, the look of a scan is understood

with reference to the object it represents (the internal organs, for example). Any

difference between these two ways of making sense of the image arises from and

depends on the difference between the imaging technique and human vision.

88

In summary: the difference between an imaging technique and human

vision results in a deficit of secondary qualities with which to make sense of

features of the phenomenon represented. The deficit means that image makers

must ‘borrow’ secondary qualities associated with familiar objects – especially

when representing ‘invisible’ phenomena. The choice of which secondary

qualities to borrow is more or less arbitrary depending on the degree of difference

between the technique and human vision. Another way to put this is to say that

‘super-sensory gap’ between the operation of imaging technology and human

senses means that secondary qualities are ‘underdetermined’. The difference

between human and machine senses can be ‘measured’ along various axes. These

include the similarity between the operation of the detector used and the operation

of the eye; the similarity between the radiation used and visible light; the

similarity between the spatial resolution of the technique and the spatial resolution

of human vision; etc. Thus, the ‘further’ an imaging technique is from human

vision, the more an image maker must intervene to make a phenomenon visible.

That is, the greater the distance, the more decisions must be taken (though these

decisions can often be built into the imaging apparatus).

For simplicity, the various axes along which we measure the difference

between an imaging technique and human vision can be aggregated into two

groups. On the one hand, there are the ways in which a technique is different from

the operation of sight (the propagation of light, the optics and chemistry of the eye

and the operation of the brain). On the other hand, there are the ways in which the

resolution of the image is comparable to the resolution of human vision.

89

[Figure 8: a scatter plot of imaging techniques on a plane. Horizontal axis: aggregate ‘resolution’; vertical axis: aggregate ‘similarity to vision’. Techniques plotted include human vision (at the origin), photography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography, astronomy (various), light microscopy (various), transmission electron microscopy, scanning electron microscopy, scanning tunnelling microscopy and altimetry/clinometry. Imaging principles labelled on the plot include infra-red/ultra-violet, other EM waves, sound, electron waves, EM absorption (shadows), stimulated emission, time of flight and tunnelling current.]

Figure 8 Arbitrary visual elements plot. Points on the right of the plane represent super-human resolution and points to the left, sub-human resolution. The further up the plane, the less like human sight is the mechanism of the visualisation technique. The plot suggests that scanning tunnelling microscopy requires more arbitrary decisions to be made in the imaging process than scanning electron microscopy because it is further from the origin. However, there is no purely objective way to determine the co-ordinates for any particular technique. The diagram gives a rough indication of imaging techniques’ relation to each other rather than a numerical measure of their distance from human vision. Note that the axes are labelled but no scale or units are indicated as units of ‘aggregate resolution’ or ‘aggregate similarity to human vision’ would be meaningless.

Aggregating the axes in this way has the advantage that imaging

techniques can be conceived as occupying a plane (an imaginary/conceptual

plane). Doing so provides immediate insight into the way an image maker is

forced to intervene in the process of representation and the significance of the

intervention.

A technique’s position on the plane is determined by a pair of co-ordinates

one corresponding to an aggregate measure of the difference in resolution and the

other to an aggregate measure of difference between the technique and the

mechanism of sight. The origin of the plane (the point with co-ordinates 0,0)

represents human vision. So, for instance, a CAT scan would be fairly near the

origin on the ‘resolution’ axis but further away on the ‘similarity to the

mechanism of sight’ axis (see Figure 8, page 89). An MRI scan would be slightly

further away than the CAT scan on the resolution axis and much further away on

the other axis. Thus the position on the plane of CAT scans is closer to the origin

90

than that of MRI, which indicates that the relation between secondary and primary qualities is slightly more arbitrary in the latter case. From this we can conclude that more intervention is required on the part of the image maker in MRI (that is, MRI images require more significant choices to be made). The distance from the origin also indicates that the meaning of an MRI image is more dependent on its context and more sensitive to shifts in context27. (Though see the caption for

Figure 8 for some important caveats).
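The geometry of the plane can be made concrete with a small sketch. The co-ordinates below are invented purely for illustration (as the caption to Figure 8 stresses, there is no objective way to assign numbers to these axes), but they encode the ordering described in the text: photography near the origin, CAT further out, MRI further still.

```python
import math

# Illustrative co-ordinates on the 'arbitrary visual elements' plane:
# (difference in resolution, difference from the mechanism of sight).
# These numbers are hypothetical; only their ordering matters.
techniques = {
    "Photography": (0.5, 0.5),
    "CAT scan": (1.0, 4.0),
    "MRI scan": (1.5, 6.0),
}

# Distance from the origin (human vision): a rough index of how much
# the image maker must intervene, i.e. how arbitrary the association
# of primary and secondary qualities becomes.
for name, (res, mech) in techniques.items():
    print(f"{name}: {math.hypot(res, mech):.2f}")
```

On these invented figures, MRI lies further from the origin than CAT, matching the claim that MRI images require more significant choices to be made.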

27 Issues surrounding the representation of MRI and other data are discussed in Bernice E. Rogowitz and Lloyd A. Treinish 1996, How Not to Lie with Visualization. An account of the “medical-imaging revolution” can be found in Diana Phillips Mahoney 1996, The Art and Science of Medical Visualization and in Michael L. Rhodes 1997, Computer Graphics and Medicine: A Complex Partnership. A more general account of the use of colour can be found in Lindsay W. MacDonald 1999, Using Color Effectively in Computer Graphics. See also Levkowitz, et al 1992, Color versus Black and White in Visualization.

91

3 ‘Pretty Pictures’ and Other Types of Scientific Image

Towards Taxonomy

‘Pretty Pictures’: Scientists’ Own Perception of Popular and Professional Images

Imaging techniques that lie a long way from the origin on the arbitrary

visual elements plot (Figure 8, page 89) give scientists more control over the

resulting image than techniques that are close to it. This means that images can be

specially crafted to perform particular functions. (The disadvantage, however, is

that they are less able to act as independent records of the sort Erwin Christeller

was trying to achieve – see page 71). In addition to the technical functions that an

image may perform in scientific practice, there is also a wide range of social

functions.

So far, we have discussed the contexts for scientific images as if there

were just two of them: scientific and popular. This approach is flawed for the

reasons that Stephen Hilgartner (1990) outlines in his critique of the ‘dominant

view’ of popularisation (see page 7). Rather than a clear distinction between the

two, it is more instructive to think of a spectrum or, to borrow Hilgartner’s

metaphor, a ‘stream’ (see page 34). At one end of the stream are images produced

for use within a laboratory setting – images that have no audience other than the

image makers themselves. At the other end of the stream might be decorative uses

of scientific images in contexts that make no reference to science whatever. In

between there is a range of contexts for images and a wide variety of audiences

for them: conference papers; conference posters; papers in journals; review

papers; grant proposals; textbooks; cover images for journals; professional

magazines; advertisements for laboratory equipment; popular articles, etc.

Broadly speaking, the ability for an audience to engage critically with a

scientific image at the level of distinguishing arbitrary elements within it from

externally determined elements diminishes as one moves ‘downstream’. In line

with this, the further downstream a picture’s context, the less it operates as a tool

92

for organising the world and the more its aesthetic qualities become significant to

its function. When an image is moved downstream (rather than being made with a

downstream context in mind), both the arbitrary elements of the image and the

functions that the arbitrary elements had upstream become indiscernible. For instance,

if a colour-map is used in a research environment to isolate particular features in

an image then relocating the image to, say, a popular book has two effects: first, the colours may be taken as an intrinsic property of the object itself and, second,

the colours may be judged according to criteria that are very different from those

that originally motivated the choice.

The distinction between using a picture as a tool and using it for other

purposes is an important one for scientists. Often they will group all uses of

pictures that are not directly related to research in a single category. Pictures that

fit in this category are called ‘pretty pictures’. Michael Lynch and Samuel Y.

Edgerton Jr, in an ethnographic study of the role of aesthetics in astronomy, found

this to be a common way to describe the pictures that were reserved for decorative

or promotional use (Lynch and Edgerton Jr 1988: 191-193).

Some of the astronomers had gained reputations for their ‘pretty

pictures’. They were noted for, and took pride in, their innovative use of

false colour and other graphic effects. Several of them had published, or

were currently working on, semi-popular books and articles… Suitable

illustrations were prepared and selected (both by authors and editors)

specifically for such publications and were distinguished from those

produced for ‘scientific’ projects. (Lynch and Edgerton Jr 1988: 191)

Making pretty pictures, as distinct from ‘scientific pictures’, happens not just in astronomy but in most other disciplines, often taking up a significant

amount of a researcher’s time. Microscopists, for instance, will add colour to

monochrome images in a program such as Photoshop or add colour directly to a

print with brush and ink. In general, the use of colour does not assist the scientific

process and there is nothing ‘automatic’ about the process as there is in the case of

applying a colour-map (note 6, page 62). Even colour-maps, though, can be used ‘gratuitously’, as one of Lynch and Edgerton Jr’s astronomers notes of a picture of

an object against a magenta sky,

False colour is really a cheap way of dressing up the presentation. It

doesn’t really convey much information, but it’s something editors like.

93

An editor will often say, ‘I like that picture, because of the reds,’ where

to me it’s outrageous because it has nothing to do with the science, the

astronomy. It’s a distraction. (Lynch and Edgerton Jr 1988: 194)

Sometimes the colours are naturalistic so, for instance, scanning electron

micrographs of microscopic animals or cells will be coloured in a way that seems

plausible. More often, the colours are bright and eye-catching. They signal clearly

that they are ‘false colours’ and are chosen either to make an object of interest

stand out or to transform an image of visually dull objects such as single cells into

a striking image in its own right. Some researchers place their pretty pictures as

well as their routine work in picture libraries and can earn considerable royalties

for them, becoming well known outside their field in the process. Two examples

are the microscopist David Scharf and the astronomer David Malin28. Picture

libraries will sell licenses for a wide range of uses from textbooks to design

applications that are totally unrelated to science. Thus, we see images in picture

libraries moving both upstream and downstream from the original location their

authors had in mind.

The promise of royalties is rarely the primary motivation for producing

pretty pictures. Usually the motivation is personal satisfaction but, especially in

big projects, pretty pictures are important for promoting research.

[O]ne scientist mentioned that spectacular false-coloured images can

come in handy as colour illustrations for grant proposals reviewed by

NASA and other government bodies. Another researcher amassed a large

collection of slides, and developed captions for them, to be packaged as a

promotional programme for a series of astronomical satellites proposed

for launching by NASA. Images were selected and prepared specifically

for these purposes, using false colour schemes believed to appeal to lay

audiences. (Lynch and Edgerton Jr 1988: 192)

Promoting research involves public relations on two levels. On one hand,

researchers seek awareness and approval from funding bodies (to attract funds).

On the other hand, they seek awareness and approval from their colleagues and

the public (for self-esteem). Many research programmes have World Wide Web

28 Many examples of pretty pictures in anatomy can be found in Ewing 1996 and in astronomy in Henbest and Marten 1996.

94

sites that list publications and ongoing work and usually contain striking images

on the first page and in gallery pages. This is especially true of big astronomy

programmes – expensive space-based experiments in particular29. For the bigger

projects, promotional activity is aimed more at the public in general than funding

bodies in particular.

The stylistic considerations that go into pretty pictures are those of

contemporary design practices. They are culturally embedded in precisely the

same way as advertising, fashion and other genres of images. Colours change over

time as fashions change. Fine art practices influence and inspire stylistic choices

in scientists’ promotional images in the same way they do in advertising. For

instance, many false coloured micrographs and astronomical images make

reference, stylistically, to abstract expressionism. This often means that it is

difficult to decide if a picture represents a galaxy or a virus because the look of

each is the same. A popular book about cosmology published in the early 1990s

carried a picture of an influenza virus on its cover. The picture editor involved

was not too concerned when this was pointed out to her. The cover required,

above all else, an eye-catching image. It was to be fairly abstract but it should

signify astronomy. Because many of the signifiers of microscopy also signify

astronomy, false coloured influenza viruses do the job required of the cover image

perfectly well.

We can now mark another distinction between ‘popular’ and ‘scientific’

images. In general, connotation is more important than denotation for scientists’

‘pretty pictures’ and the opposite is the case for their scientific pictures. This

conclusion can be recast avoiding the binary opposition between scientific and

popular inherent in scientists’ own discourse. The further ‘downstream’ an image

is located the more important are the connotations it carries and the less important

is the phenomenon it denotes.

29 See for example, the Web sites associated with the examples in this paper: Scanning Tunneling Microscopy: http://www.almaden.ibm.com/vis/stm/ (July 1999) and SMU Geophysical Imaging Compound: VENUS, http://www.geology.smu.edu/~dpa-www/venus.html (Sept. 1999). Examples of space based astronomy sites are SOHO: The Solar and Heliospheric Observatory http://sohowww.nascom.nasa.gov/ (March 2000) and the Hubble Space Telescope site: STScI/HST Public Information http://oposite.stsci.edu/ (March 2000).

95

However, the distinction between denotation and connotation should not

be confused with an epistemological claim – it is a purely analytic distinction30. It

is true that the function of images in scientific contexts is most likely to relate to

their correspondence to phenomena in the real world (that is, images in scientific

contexts are, generally, ‘iconic signs’31) and that the function of images in non-

scientific contexts is more likely to relate to other ideas inspired by the image. But

the correspondence to the real world (or lack of it) is not the point at issue here.

In general usage, ‘denotation’ is equated with the literal meaning of a sign

and ‘connotation’ equated with associated but unfixed meanings that require the

intervention of codes32. In Elements of Semiology (Roland Barthes’ textbook,

1972) denotation is not given this special status of ‘natural meaning’. Rather, it

refers only to the ‘first system of signification’ from which a second system is

generated. The important difference in this account is that denotation refers not to

‘literal meaning’ but to those aspects of a sign that are taken to be its ‘literal

meaning’ – whether this putative literal meaning corresponds to some unique

feature of the real world or not. Rather than ‘literal meaning’ and ‘denotation’, we

could, perhaps, speak of the “near-universally consensualised meaning”

(Hall 1980: 133). This would break the implied connection with the real world in

the common use of the term ‘denotation’ but it is unnecessarily clumsy so long as

we are clear about the definition. In images in general and in scientific images in

particular, the correspondence with the world is problematic. Thus, conflating

denotation and ‘truth’ is counterproductive33. Barthes comes to the following

conclusion about denotation,

Denotation is not the first sense but pretends to be. Under this illusion, in

the end, it is nothing but the last of connotation (where the reading is at

the same time grounded and enclosed), the superior myth, thanks to

which the text pretends to return to the nature of language… We must

keep denotation, old vigilant deity, crafty, theatrical, appointed to

30 This discussion draws on Stuart Hall’s definition of ‘denotation’ and ‘connotation’ in Encoding/Decoding (1980: 132-133). 31 For discussions of semiotic approaches to iconic signs see Eco 1982 and Kress and van Leeuwen 1996. 32 See, for instance, Guiraud 1975: 28-29. 33 The strategy of ‘bracketing off’ epistemological problems in science studies is discussed in greater detail below (page 154).

96

represent the collective innocence of language. (Barthes, S/Z, quoted by

Heck 1980: 126).

Despite the problems with the distinction between denotation and

connotation, it is still a valid and useful one to make because it is principally at the

level of connotation that any struggle over meaning takes place34. It is at this level

of a sign that ideologies transform signification. But, as Stuart Hall points out,

This does not mean that the denotative or ‘literal’ meaning is outside

ideology. Indeed, we could say that its ideological value is strongly fixed

– because it has become so fully universal and ‘natural’. The terms

‘denotation’ and ‘connotation’, then, are merely useful analytic tools for

distinguishing, in particular contexts, between not the presence/absence

of ideology in language [and images] but the different levels at which

ideologies and discourses interact. (Hall 1980: 133).

However, despite the preceding hair-splitting, the most interesting finding

in Lynch and Edgerton Jr’s study was not the distinction between pretty pictures

and scientific pictures. Rather, it was the way science and aesthetics interact in

each context. When one of the astronomers in the study was pressed about why

one pretty picture was more beautiful than another, it became clear that the

reasons were not purely aesthetic (or that aesthetic judgements were not limited to

patterns and colours). Although the astronomer claimed that the reasons were

artistic rather than scientific, the important criterion was what the colour scheme

said about the actual object and the scientists’ interest in it.

At one time I thought I wanted to make the x-ray purple and the radio

red, because it gives you an idea that this is a higher energy photon.

(Lynch and Edgerton Jr 1988: 199)

In light of this type of response from their subjects (and corresponding

responses when it was scientific rather than popular uses that were being

discussed), Lynch and Edgerton Jr make the following cautious point about the

distinction between pretty and scientific images,

34 We see this in, for instance, Barthes’ example of the pasta advertisement in The Rhetoric of the Image (in Barthes 1977) and his example of the cover of Paris Match in Myth Today (in Barthes 1973).

97

Without denying the validity of the distinction’s reference to separate

markets for scientific products, we can note that it is misleading to

suppose that, 1) the sense of what is nice or beautiful about popularised,

colourful renderings has nothing to do with substantive astronomical

properties, and 2) doing ‘real’ science excludes any notion of art or

aesthetics. (Lynch and Edgerton Jr 1988: 196-197)

The term ‘pretty pictures’ is deliberately demeaning in common usage (in

science). It suggests trivial or vulgar images in contrast to cognitively demanding

scientific images. This opposition establishes upstream uses of images as more

legitimate than downstream ones – the ‘proper’ use of scientific imaging

techniques is in the production of scientific knowledge. Amongst scientists in

general, this distinction does not reflect contempt for downstream uses, merely an

ordering of priorities. For one group of researchers though, the distinction was

particularly important.

Scientific Visualisation: A New Type of Image? A New Role for the Viewer?

‘Scientific visualisation’ can be defined as the process of making data

directly accessible to human senses to foster insight into intrinsic patterns or

trends. The term entered common usage as late as 1986 (Rosenblum and

Nielson 1991: 15). In the late 1980s and early 1990s, it emerged as a new field

attracting funding in its own right35. Before this time, the field lacked coherence

and general assent. Lawrence Rosenblum and Gregory Nielson speak of

visualisation researchers in the early 1980s as, “‘true believers’ hidden away in

scientific laboratories [battling] to obtain funding” (Rosenblum and

Nielson 1991: 15). Just as researchers in other disciplines do, the visualisation

community promotes itself with striking images. Being an inherently visual and

visually innovative field it has many resources to draw on for its own promotion.

However, in establishing itself as a technical field in its own right, scientific visualisation seeks to distance itself from the ‘merely aesthetic’ role of images. This results in an odd ambivalence towards its own products. The visualisation

experts are happy to celebrate striking images, many of which have become icons

35 The development of scientific visualisation is itself a rich and fascinating topic, discussion of which would augment the observations on imaging made here. Unfortunately, space does not allow the topic to be dealt with in any serious depth.

98

of progress or of science itself36 but they are at pains to stress that visualisation is

not about ‘mere’ images. Expressing contempt for ‘pretty pictures’ is a shorthand

way of indicating that visualisation is (or should be) an integral part of the

research process.

The most exciting potential of wide-spread availability of visualisation

tools is not the entrancing movies produced, but the insight gained and

the mistakes understood by spotting visual anomalies while computing.

Visualisation will put the scientist into the computing loop and change

the way science is done. (McCormick, et al 1987: 6)

In the 1990s, scientific visualisation had enormous influence on a range of

practices. For instance, the solution of problems in visualisation affected the

development of computer graphics in entertainment. (The influence has travelled

in the opposite direction also.) In addition, visualisation solutions in science have

been rebuilt as ‘data-mining’ tools for marketing and as display systems for

financial information37. The influence of scientific visualisation can be accounted

for in the way the field brings previously distinct disciplines together. Disciplines

such as design, psychology, computer graphics, etc. share an interest in vision but

communication between them was limited before the emergence of scientific

visualisation. Many of the problems addressed by scientific visualisation required

an integrated approach from a broad range of researchers. As a result, innovation

in, say, graphic design is quickly translated into innovation in visualisation and

vice versa. As visualisation has been responsible for many developments in visual

technologies and practices, the field represents the most obvious location for a

dialogue between science and visual culture in the rest of society. Because of this,

it is worth examining the social context of its development.

36 Many of the examples cited in McCormick, et al 1987 took on (almost) cult status and were employed as icons of the new field or of science itself. See Kallick-Wakker 1994 for a discussion of visualisations as icons. 37 This is especially true of Advanced Visual Systems, a company that grew out of Stardent Computer in 1992. In the early to mid 1990s, Stardent’s main product became one of the most important development environments for scientific visualisation solutions. The product (The Application Visualization System) was developed in the immediate aftermath of the NSF report (McCormick, et al 1987) and announced in 1989 (Upson, et al 1989). The marketing of AVS now reflects a significant shift away from science towards business markets. See, for example, AVS – What it Can Do For Your Business, www.avs.com/solution/cando.htm (Sept. 1999).

99

In 1987 a report for the US National Science Foundation38 claimed that the

millions of dollars being invested in supercomputing were being wasted because

scientists could not make use of the results.

Secretaries who prepare manuscripts for scientists have better interactive

control and visual feedback with their word processors than scientists

have over large computing resources which cost several thousand times

as much. (McCormick, et al 1987: 7)

The problem identified in the report was that computers could run

sophisticated simulations and could handle very large (automatically collected)

data-sets but there was a limit to what machine processing could achieve without

human intervention. The report lamented:

Today’s data sources are such fire hoses of information that all we can do

is gather and warehouse the numbers they generate. (McCormick,

et al 1987: 4)

The solution proposed by the report was to increase investment in

‘visualisation in scientific computing’. (The term ‘scientific visualisation’ had yet

to be adopted universally.) They advocated a research programme that covered the

psychology and aesthetics of computer graphics as well as software techniques,

graphics hardware and networks that could carry the huge amounts of data that

visualisation techniques generate.

The big sources of data mentioned by the report include the following: supercomputer simulations; Earth-observing satellites; scientific space missions; radio astronomy; geophysical instrument arrays; and medical scanners. The ‘push’

for visualisation from big data-sets was augmented by a ‘pull’ from scientists

who, the report claimed, wanted to interact with their data.

Researchers want to steer their calculations in close-to-real-time; they

want to be able to change parameters, ... and see the effects. They want to

drive the scientific discovery process; they want to interact with their

data. (McCormick, et al 1987: 5)39

38 McCormick, et al 1987. A synopsis of the report appears in the July 1987 issue of IEEE Computer Graphics and Applications, pp 61-70. 39 For more detailed (technical) discussion of the concept of ‘steering’ see Mikael Jern and Rae A. Earnshaw 1995, Interactive Real-Time Visualization Systems Using a Virtual Reality Paradigm.
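The ‘steering’ the report describes can be sketched in deliberately toy form: a simulation loop whose parameters can be changed while it runs, with the effect visible immediately. The class and numbers below are invented purely for illustration; real steering systems of the period coupled supercomputer codes to interactive graphical front ends.

```python
class SteerableSimulation:
    """A deliberately trivial 'simulation': x grows by `rate` each step."""

    def __init__(self, rate):
        self.rate = rate       # the steerable parameter
        self.x = 0.0
        self.history = []

    def step(self):
        self.x += self.rate
        self.history.append(self.x)

    def steer(self, new_rate):
        # In a real steering system this call would arrive from an
        # interactive front end while the computation is still running.
        self.rate = new_rate


sim = SteerableSimulation(rate=1.0)
for _ in range(3):
    sim.step()                 # x: 1.0, 2.0, 3.0
sim.steer(10.0)                # the researcher changes a parameter mid-run...
for _ in range(2):
    sim.step()                 # ...and sees the effect immediately

print(sim.history)             # [1.0, 2.0, 3.0, 13.0, 23.0]
```

The point of the sketch is only the shape of the interaction: computation and parameter changes interleave, rather than parameters being fixed before a batch run.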


Although creative use of visual techniques has always been a signal

feature of science, there was no coherent topic called ‘visualisation’ before the

NSF report. Instead there was a collection of disparate practices. The cutting edge

of the field was driven by relatively few enthusiasts. Other visualisation practices

were confined to individual disciplines. For instance, the problem of selecting an

appropriate colour-map to bring out features within data was addressed separately

within each discipline and different conventions were established in astronomy,

radiography, fluid-dynamics, etc.
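The colour-map problem mentioned above can be made concrete with a sketch: a colour-map assigns a colour to each normalised data value, and different choices bring out different features in the same data. The two maps below are invented for illustration only; the actual conventions established in astronomy, radiography and fluid dynamics varied.

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB colours (t in [0, 1])."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def apply_colour_map(values, low_colour, high_colour):
    """Map each normalised data value to an RGB colour."""
    return [lerp(low_colour, high_colour, v) for v in values]

data = [0.0, 0.5, 1.0]                               # normalised intensities
grey = apply_colour_map(data, (0, 0, 0), (255, 255, 255))
heat = apply_colour_map(data, (0, 0, 128), (255, 32, 0))

print(grey)   # [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
print(heat)   # [(0, 0, 128), (128, 16, 64), (255, 32, 0)]
```

Choosing the endpoint colours differently renders the same data with different emphasis, which is exactly why separate disciplinary conventions arose before the field was unified.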

One effect of the report was to legitimise those at the cutting edge of

computer graphics in science – their status switched from ‘maverick enthusiasts’

to ‘mainstream’. In addition, researchers began to regard disparate computational

and visual practices in science as part of a unified research programme. Computer

graphics experts, psychologists, scientists from every discipline and even artists

recognised their common interests and the subject became a research interest in its

own right. For instance, problems that in the past had been ‘problems of

astronomy’ or ‘problems of medical physics’ (such as selecting colour-maps or

interacting with three-dimensional arrays of data) became ‘problems of

visualisation’ – amenable to a common set of transferable solutions and

techniques40. In describing the work of his group, radio astronomer Ray Norris of

the Australia Telescope National Facility notes,

In addition to astronomical visualisation, the group has strong links into

medical and geophysical imaging projects at the CSIRO [Australian

scientific research body] Division of Radiophysics, so that a significant

degree of cross-fertilisation takes place. (Norris 1994)

Another member of his group, Richard Gooch (1995), describes how

research in computed tomography visualisation for medical applications was co-

opted for use in radio astronomy. By the time Gooch was working on visualisation

in radio astronomy, the field was well established. The inauguration of a new

journal in the field prompted the following confident comment,

40 David Stern, the president of Research Systems International (a visualisation software company whose main product is ‘IDL’) believes that the significance of the NSF report is overstated here. According to him, the report, “started a visualisation craze but otherwise was not particularly important. It was a way for the [National Centre for Supercomputing Applications] to get more money”. (Personal communication).


We are currently witnessing an explosive growth in the research,

development, and use of visualisation and computer graphics techniques

and systems. This field is ranked among the most promising and enabling

technologies of computer science and engineering covering the

computer/human interface level. (Arie E. Kaufman 1995: 1)

We can track the development of scientific visualisation in a number of

ways. In the early 1990s a number of textbooks were published that helped to

define the field and more recent textbooks continue this work41. Several

postgraduate programmes in scientific visualisation have been established in the

1990s and their curricula also give a sense of the constitution of the field. One

particularly telling index is the way scientific visualisation is discussed in

professional magazines and journals. IEEE Computer Graphics and Applications

for instance has had several special issues on visualisation since 1991. In the

fourth of these Gregory M. Nielson and Arie E. Kaufman (1994) themselves point

out,

The guest editors’ introductions to these special issues serve as a

chronicle of visualisation’s development. (Nielson and

Kaufman 1994: 17)

They note that articles from the series are amongst the most widely

referenced sources in visualisation. The articles tend to stem from papers

delivered at visualisation conferences. The editors’ assessment of the status of the

discipline in 1994 draws on “the metaphor of life”,

[W]e could say that visualisation has just graduated from high school and

is going off to college. Despite the success it has already achieved, the

real challenges and contributions lie in the future. Visualisation must

leave the security of its supporting disciplines and branch off on its own.

(Nielson and Kaufman 1994: 17)

In an earlier special issue on visualisation Lawrence J. Rosenblum and

Bruce E. Brown suggested renaming the field ‘realisation’. ‘Visualisation’ they

believed was a misnomer because the dictionary definition of the term involves

41 See for instance Brodie, et al (Eds.) 1992; Keller and Keller 1993; Wolff and Yaeger 1993; Grave, Hewitt and Le Lous (Eds.) 1994; Pickover and Tewksbury (Eds.) 1994; Rosenblum, et al (Eds.) 1994; Groß 1994; Gallagher (Ed.) 1995; Göbel, Müller and Urban (Eds.) 1995; Bowie 1995.


forming a mental image whereas they were concerned with making images that

were more than mental. Their suggestion was not widely adopted, however, and the term ‘scientific visualisation’ is now firmly established.

The growing coherence of the term ‘visualisation’ is reflected in the

numbers of articles in the Science Citation Index that mention the word in their

title, abstract or keywords and this offers another way to track the development of

the field. The graphs (Figure 9, page 103) show that the number of articles with

visualisation or related words in the title increased from 273 in 1981 to a projected figure of 768 in 1999 – a 2.8-fold increase over the 19-year period. When we normalise these figures against the total number of articles indexed in each year, the increase is less impressive but still significant – a factor of roughly 1.8. (In 1981, 4.9 papers in every 10 000 contained ‘visualisation’ or related words in their title. In 1999 this figure had increased to 8.2 papers in every 10 000.) The number increased fairly slowly throughout the 1980s and much faster during the 1990s, which is consistent with the view that modern visualisation began in 1987. The increase seems to be

levelling off as we enter the new century.

The Science Citation Index also indicates the growing importance of

imaging to science. As Figure 9 (Bottom) shows, papers with ‘imaging’ or related

words in their title made up about a third of 1% of all papers indexed in 1981

(2088 papers out of 599729) and about 1% of all papers indexed in 1999 (9539

papers out of 941136 – projected figures). This change reflects the number of new

techniques introduced over the period and the ease of integrating them into

established scientific practices since the widespread adoption of personal

computers.
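The arithmetic behind these figures can be checked directly from the counts quoted above (a sketch only; the thesis’s own method is described in Appendix 1, and the 1999 values are projections).

```python
# Counts quoted in the text (Science Citation Index, titles only).
vis_1981, vis_1999 = 273, 768            # 'visuali*' in title
img_1981, img_1999 = 2088, 9539          # 'imag*' in title
all_1981, all_1999 = 599_729, 941_136    # all papers indexed

# Raw growth in 'visualisation' papers: roughly 2.8-fold.
raw_growth = vis_1999 / vis_1981

# Growth after normalising against the total indexed each year:
# roughly a factor of 1.8 - smaller, but still marked.
norm_growth = (vis_1999 / all_1999) / (vis_1981 / all_1981)

# Share of all papers with 'imag*' in the title:
img_share_1981 = img_1981 / all_1981     # ~0.0035, "about a third of 1%"
img_share_1999 = img_1999 / all_1999     # ~0.0101, "about 1%"

print(round(raw_growth, 1), round(norm_growth, 1))   # 2.8 1.8
```

The gap between the raw and the normalised figures is the point: some of the apparent growth of ‘visualisation’ simply reflects the growth of the index itself.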

[Figure 9 appears here: two charts. Top: ‘visuali* in title’; bottom: ‘imag* in title’. In each, columns give the number of papers per year from 1981 to 1999 and a line gives that number as a percentage of all papers indexed.]

Figure 9 Top: articles in the Science Citation Index with the word ‘visualisation’, ‘Visualization’, ‘visualize’, ‘Visualizing’, or similar in their title. The columns represent the number of papers and the line represents the number as a percentage of all papers indexed. Bottom: articles with ‘imaging’, ‘image’ or similar in their title (but not ‘imagination’). The figures for 1999 are a projection from the values up to and including August 1999. (See Appendix 1: Imaging and Visualisation in the Science Citation Index for a more detailed explanation.)

In tracking the development of visualisation, however, it is more

instructive to look at implicit and explicit attempts to define the subject than it is

to count papers. Visualisation was defined above as “the process of making data

directly accessible to human senses so as to foster insight into intrinsic patterns or

trends” (page 97). The definition was deliberately general. Although the term

‘visualisation’ is usually extended to include techniques that provide aural and


tactile information42, the visual sense is dominant in almost all accounts of the

subject. As McCormick et al. put it,

An estimated 50 percent of the brain’s neurones are associated with

vision. Visualisation in scientific computing aims to put that neurological

machinery to work. (McCormick, et al 1987: 3)

Most textbooks on visualisation contain a verbal definition along the lines

of the one outlined above. That is, most accounts attempt to capture the essence of

the subject in a simple linguistic manner. For instance, McCormick et al. define it

thus,

Visualisation is a method of computing. It transforms the symbolic into

the geometric, enabling researchers to observe their simulations and

computations. Visualisation offers a method for seeing the unseen…[it]

embraces both image understanding and image synthesis. That is,

visualisation is a tool both for interpreting image data fed into a

computer, and for generating images from complex multi-dimensional

data sets. It studies those mechanisms in humans and computers which

allow them in concert to perceive, use and communicate visual

information. Visualisation unifies the largely independent but convergent

fields of: computer graphics, image processing, computer vision,

computer-aided design, signal processing, user interface studies.

(McCormick, et al 1987: 3)

This definition is as much a proposal as a summary. It is a way of selling

the idea to the authors’ principal audience – the National Science Foundation. At

the same time it is a banner that potential visualisation researchers can rally

around43. Another way that visualisation is defined is ostensively – by pointing to

42 A particularly interesting example of a visualisation technique that gives tactile feedback is a project at the University of North Carolina called the nanoManipulator. This is a virtual reality interface that allows researchers to interact with an atomic-scale surface through the medium of an atomic force microscope (AFM) which is similar to a scanning tunnelling microscope (see Weisendanger 1994, Chap. 2). The user wears special glasses that give him or her a 3 dimensional view of an atomic ‘landscape’ constructed from AFM data. The tip of the AFM is moved around with a joystick. The resistance the researcher feels through the joystick is directly proportional to the actual forces on the tip. Individual atoms (or molecules) can be pushed around the surface and arranged into patterns. See The nanoManipulator (UNC-CH): www.cs.unc.edu/Research/nano/ (Feb. 1997) and the references listed therein. 43 A collection of such definitions from various sources can be found at Definitions and Rationale for Visualization: www.education.siggraph.org/materials/HyperVis/visgoals/visgoal2.htm (Sept. 1999).


examples. This is where pretty pictures play a role and why it is so important to

distinguish aesthetics from utility. In a definition of visualisation, the most

important thing that a pretty picture signifies is that it can be used to gain insight

into the data it denotes. That is, the promotional material of the visualisation

community does not encapsulate a claim about the world in the way other

scientific images do (see page 57). The downstream uses of images emanating

from the visualisation community are intended to be seen as ‘constructed’ images.

So far we have contrasted the sophisticated, self-conscious approach to

‘the interpreted image’ (see page 65) inherent in scientific practice with the less

sophisticated role of the observer when scientific images are popularised (moved

downstream). With the advent of scientific visualisation, the contrast is no longer

so clear. Even if viewers are not empowered to use the image, they are

nevertheless encouraged to view the image as a tool rather than a ‘plain

representation’. With visualisation, the status of the scientist is promoted from a

‘reader’ to a ‘user’ of images. The status of downstream audiences is (in many

instances) also promoted from ‘consumer’ to ‘reader’. Thus, there is still a

significant distinction between the two audiences, but ‘popular’ audiences are significantly more empowered by visualisations than by other forms of scientific image in

popular contexts. (The terms ‘consumer’, ‘reader’ and ‘user’ are more complex

and have more contested histories than is suggested by this brief discussion of

visualisation. Here, the intention is to distinguish three levels at which an

audience can orient itself to a visual text. Broadly, an audience can address a text

passively, actively or proactively.)

In visualisation, the arbitrary aspects of the images are not intended to be

transparent. Rather, the choices they represent and the significance of the

decisions are made explicit. As pictures, promotional images from visualisation

denote ‘science’ rather than phenomena. Thus, even in downstream applications,

scientific visualisations operate at a self-conscious ‘meta-level’. The viewer is

encouraged to see the function behind the form. Unlike more traditional scientific

images, modern visualisations in popular contexts are generally not a form of

‘mythical speech’.

Empowering the popular viewer in this way is an important departure for

the scientific image – one that may, in years to come, disrupt some of the

dimensions along which scientists and the public are distinguished. This is


because as techniques have become more sophisticated, the visual idioms

employed in visualisation have increasingly originated outside science – borrowed

from more familiar visual genres. The visual and (more importantly) the critical

skills that scientists must mobilise to make sense of data are the same as the visual

skills needed in ‘public’ media. Thus developments in visualisation may

inadvertently bring about a change in the role of popular audiences. ‘The public’

thus empowered can look beyond the ‘claim about the world’ inherent in an image

and understand the rhetoric of that claim (as happens elsewhere in visual culture).

The popular audience for scientific images may yet be transformed. Viewers who

previously were consumers of scientific images will, as visual culture develops, be

critics44.

However, the disruption that attends actively encouraging an audience to

acknowledge the constructedness of an image can travel in the opposite direction

also. Visualisations are unambiguous in the sense that what they denote is clearly

understood. Moreover, the questions they address about whatever it is they denote

are also made clear. Thus, the meaning of the image is clear and the way it

interacts with the viewer to create meaning is clear in a way it never is in, say,

reportage. This candidness can reveal how examples of genres, such as reportage

and others appropriated by visualisation, themselves interact with the viewer to

create meaning45. That is, in being uncommonly explicit, scientific visualisation

can make both scientific and non-scientific audiences more critically aware of

issues around representation.

In The Rhetoric of the Image Roland Barthes asks a series of questions

about the meaning of images. “How does meaning get into the image? Where

does it end? And if it ends, what is there beyond?” (Barthes 1984: 32-33). These

questions are addressed with reference to the range of messages that images may

contain. Recognising and distinguishing messages within images is not

straightforward. Mindful of the difficulties, Barthes proposes a strategy to

simplify the problem for analysis.

44 The notion of ‘visual literacy’ developed here is related to Richard Hoggart’s distinction between ‘basic’ and ‘critical’ literacy (Hoggart 1992). 45 For accounts of images in reportage see Harold Evans 1997, Pictures on a Page; and Vicki Goldberg 1991, The Power of Photography.


We will start by making it considerably easier for ourselves: we will only

study the advertising image. Why? Because in advertising the

signification of the image is undoubtedly intentional; the signifieds of the

advertising message are formed a priori by certain attributes of the

product and these signifieds have to be transmitted as clearly as possible.

If the image contains signs, we can be sure that in advertising these signs

are full, formed with a view to the optimum reading: the advertising

image is frank or at least emphatic. (Barthes 1984: 32-33)

A similar argument can be made for scientific images and for modern

visualisation techniques in particular. Like the advertising image, the signification

of the scientific image is undoubtedly intentional and the signifieds of the

scientific message are formed a priori by certain attributes of scientific theories.

These signifieds have to be transmitted as clearly as possible: they are both frank

and emphatic – more so even than in advertising.

At the beginning of the twenty-first century, visually sophisticated publics approach the inherent claims of photographs more critically than audiences did at the beginning of the twentieth. For instance, the famous hoax performed on Arthur

Conan Doyle by two young girls would be unlikely to succeed today46. The

growing sophistication of audiences is due in part to the advertising image and the

fact that the signification of the advertising image is intentional. As viewers we

learn to distinguish what advertising images denote from the connotations they

carry and we come to recognise the intended effect of the latter. This forces

advertisers to be ever more subtle in their approach and advertising images to

become ever more sophisticated. We are usually not supposed to see how

advertising images achieve the goals of advertisers – to understand how they work

would undermine their effectiveness. Photographs in general do not make their

role in discourse clear. Viewers are invited to admire them as objects but

generally not invited to question their social function. (That viewers generally do

question the social function of images in making sense of them is a different

issue.)

46 Elsie Wright and Frances Griffiths made photographs that apparently showed fairies dancing around the head of a girl. Conan Doyle, a committed spiritualist, was completely taken in and popularised the photographs widely. See Mitchell 1992: 196.


This is not the case with scientific visualisation. To see a visualisation is to

see how it achieves its goals – to understand it as a tool. However, visualisations

can also function simply as pretty pictures in the same way as other scientific

images. They can carry connotations of ‘scientificity’ without inviting a critical

gaze. Ingrid Kallick-Wakker addresses this aspect of visualisation in her account

of the iconography of scientific visualisation,

The exact purpose or use of a visualisation is rarely evident on its

surface. The compelling and engaging nature of visualisations often

obscure their limitations as representations. (Kallick-Wakker 1994: 311)

However, this is a criticism of scientific images in general – the same

point made here. If a visualisation of, say, gravitational waves associated with a

rotating black hole (e.g. Anninos, et al 1993) were used in a popular book about

general relativity then the criticism would hold. If, on the other hand, it were

presented in downstream contexts as an example of scientific visualisation then

the purpose and limitations of the visualisation would be apparent.

Visualisation, then, is an important departure for the scientific image

because, even in downstream contexts, visualisations denote the process of

science rather than just a natural object. They explicitly encode the motivation of

image makers and thus can not be divorced from their function – even in

downstream contexts. They advertise their status as constructed images as

opposed to ‘records’. Their candidness and sophistication may have a profound

influence on visual culture.

Scientific Images and Artistic Images: A New Look at an Old Distinction

We are now in a position to make a distinction between scientific images

(scientific visualisations in particular) and artistic images. (This is not the same as

the distinction between scientific images in popular as opposed to scientific

contexts.) The relations of science and art have come under intense and prolonged

scrutiny in a variety of forums47. Visualisation impacts on the debates around this

subject in several ways (most of which can not be discussed here). One point that

we can extract from the discussion so far is related to the way visualisation idioms

47 See for instance Kemp 1990; Colonna 1994; Weimer 1994; Hays 1993; Lock 1994; Jones and Galison (Eds.) 1998; Miller 1996; Baigrie (Ed.) 1996; Olson 1994; Atkins 1989; Cox 1988; Burgess 1996; Snow 1969. See also the discussion of Inside Information below (page 212).


draw on fine art practice and in turn inform those practices. Another is related to

the way representation in scientific images is anchored in external reality. The

latter point is dealt with first.

The distinction between scientific images and artistic ones stems from the

way scientific images are necessarily anchored to an external reality – they are

always ‘about the world’ rather than autonomous objects in their own right. Even

in downstream contexts, where the connotations carried by a scientific image are more important than the object it denotes, a picture is still primarily a picture of

something – ostensibly, the ‘content’ is the beginning and the end of the message

carried by the image. This is evident in the attitude of the astronomer quoted on

page 92, who believes it outrageous to choose images for publication on the basis

of colour. One reason that scientific images can operate on the level of ‘myth’ so

easily (see page 57) is this ever-present reference that can be pointed to even

when an image is being used to communicate an idea that is quite unrelated to

what the image denotes.

The same is not so true of art photography or of painting. The object

represented in painting does not have to be ‘what the painting is about’. Rather,

the viewer actively seeks ‘second order signification’ (Barthes 1973: 123-124).

The contrast is particularly clear in Georg Baselitz’ attitude to painting and his

strategies for severing the relation between a painting’s content and what the painting

represents,

The object expresses nothing at all. Painting is not a means to an end. On

the contrary, painting is autonomous. And I said to myself: If this is the

case, then I must take everything which has been an object of painting –

landscape, the portrait, and the nude, for example – and paint it upside

down. That is the best way to liberate representation from content.

(Georg Baselitz quoted in Boyne 1995: 71)

It is remarkable how effective this simple strategy can be in fine art. In

visualisation, such a trivial transformation as inversion could not possibly

“liberate representation from content”. It is already part of a visualiser’s repertoire

of transformations that can be applied to data to make the relation of the image to

the object it denotes yet more concrete. With visualisation, viewers/users get used

to reading radical alterations to objects – false colours, projections,


magnifications, etc. – as operations that increase the degree to which a picture is

‘about’ the object it represents.

It does not follow from the existence of a fundamental difference between

artistic and scientific images that there is no connection between the two. Fine art

and scientific visualisation are not wholly separate activities. Though the

‘visualisation goals’ are very different for scientists and artists, the ‘visualisation

idioms’ they employ may be identical. Moreover, innovation in one field will be

quickly adopted in the other. The ‘dialogue’ between art and science manifests

itself in a very concrete way in image based software. Adobe’s Photoshop has

been the standard ‘bitmap editing’48 package in media industries throughout the

1990s and into the twenty-first century. Its real strength lies in the way it acts as

an interface or ‘bridge’ between chemical photography, printing and electronic

displays. As well as an interface though, it also acts as a way of communicating

and mediating a wide variety of imaging techniques and approaches to images.

Being so centrally positioned, Photoshop incorporates a wide variety of

practices in a single package. For instance, darkroom practices are mimicked to

the extent that lightening and darkening small patches of an image is called

‘burning’ and ‘dodging’. The icons for these represent the equivalent technique

used by photographers working with enlargers in darkrooms. Even more notable

however are the ‘filters’ and transformations. Many of these employ signal-

processing routines developed for scientific applications. The ‘look’ achieved

with such tools (which has become familiar in a range of popular contexts) often

bears a striking resemblance to much older imaging techniques in science.

Photoshop is such a versatile tool that even scientists use it for routine signal-

processing as well as for producing images with downstream applications in mind.

Third party manufacturers produce ‘plug-ins’ that adapt the program for particular

uses such as techniques in microscopy49.
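Many of these ‘filters’ are, under the hood, small signal-processing routines applied to the pixel array. The sketch below shows one such routine – a 3×3 sharpening kernel applied by discrete convolution (strictly, cross-correlation, which coincides with convolution for this symmetric kernel). It is a generic textbook example, not a description of Photoshop’s actual implementation.

```python
def convolve(image, kernel):
    """Apply a 3x3 kernel to a 2-D list of grey values (edge pixels cropped)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            # Weighted sum of the 3x3 neighbourhood around (x, y).
            acc = sum(image[y + dy][x + dx] * kernel[dy + 1][dx + 1]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            row.append(max(0, min(255, acc)))   # clamp to the valid grey range
        out.append(row)
    return out

# A standard sharpening kernel: boost the centre, subtract the neighbours.
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

# A flat grey patch with one brighter pixel in the middle.
img = [[100, 100, 100],
       [100, 150, 100],
       [100, 100, 100]]

print(convolve(img, sharpen))   # [[255]]: the contrast is exaggerated, then clamped
```

The same machinery, with different kernels, gives blurring, edge detection and embossing – which is why image-processing routines developed for scientific applications transferred so directly into a media-industry tool.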

48 When an image such as a photograph is digitised it is broken up into an array of picture elements or ‘pixels’. The colour of each of these pixels is encoded numerically. The picture thus becomes an array of numbers corresponding to its colour at each point. Such a format is called a ‘bitmap’ and is contrasted with ‘vector’ formats used for illustration and typography that record details of the shapes in a picture rather than recording an array of pixels. Bitmap editors are used to adjust colours and tonal ranges, to cut and paste areas of images, to transform and distort images and many other things besides. For more detail see Mitchell 1992, Chaps. 4 & 5. 49 For instance The Image Processing Toolkit contains plug-ins for Photoshop. See, The Image Processing Toolkit CD: http://members.aol.com/ImagProcTK/ (Sept. 1999). For other uses of Photoshop in science see, Adobe Photoshop for Science and Research Uses: http://www.cbn.med.umn.edu/bipl/Pshp_science.htm/phtshp.htm (Sept. 1999).

The dialogue between visual idioms in science and art is neither a product of the computer nor of the twentieth century. The utility in science of visual idioms developed by artists is clear in the way Galileo was able to employ his knowledge of art to draw radical conclusions about his observation of the moon and the sun. At the beginning of the seventeenth century, there was a certain amount of debate about the nature of the moon, but the consensus opinion was the Aristotelian belief that its surface was perfectly smooth. Suggesting otherwise occasionally attracted accusations of heresy. In 1609 in Holland, Hans Lippershey invented the telescope (Robin 1992: 22). A description of the instrument reached Galileo (1564-1642) in Florence and by November 1609 he had built one for himself and directed it at the moon. Unknown to him, an Englishman, Thomas Harriot (1560-1621), had obtained a Dutch telescope and produced drawings of the surface as early as July 1609.


Figure 10 Galileo’s watercolour sketches of the moon made from observations with a telescope in November or December 1609.

For Galileo, the new instrument settled the residual debate about the

surface of the moon: it was clearly mountainous. However, where Galileo saw

mountains and craters on the surface Harriot saw and recorded none until news of

Galileo’s observations had reached him. The discrepancy can be explained by

Galileo’s artistic education (Edgerton Jr 1984). His familiarity with the behaviour


of shadows on curved surfaces came from learning to represent them. He could

apply this experience both to creating scientific knowledge and publishing it in a

form that allowed others to recognise its validity. Harriot’s observations lacked

the insight of Galileo’s because they were not informed by the same visual

culture. Where Galileo saw roughness, Harriot saw only a “strange spottednesse”

(Edgerton Jr 1984: 227).

Figure 11 Left: Thomas Harriot’s sketch of the moon made on 26 July 1609. Right: a photograph (by António Cidadão) of a 5-day-old moon (the approximate view on 26 July 1609 – Julian calendar).

Even in the early seventeenth century, visual idioms developed by artists

were an essential tool in natural philosophy. Galileo’s natural philosophy was

embedded in the visual culture of his time. The visual techniques of fine artists

were incorporated within sophisticated scientific approaches to nature. For

instance, Galileo went on to combine his application of visual conventions with

geometrical arguments and was thus able to measure the height of the mountains

on the moon. In 1612, he was able to resolve a dispute about sunspots with

recourse to the laws of linear perspective (Kemp 1990: 94-95).
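Galileo’s measurement of the lunar mountains can be reconstructed in a few lines. A mountain top still catching sunlight at a distance d beyond the terminator must rise a height h above the surface, with (R + h)² = R² + d² by Pythagoras. The round figures below are those conventionally quoted in reconstructions of the argument, not taken from the thesis.

```python
from math import sqrt

# Conventional round figures: a lunar radius of 1000 (Italian) miles, and a
# peak seen shining roughly 1/20 of the moon's diameter beyond the terminator.
R = 1000.0
d = 100.0

# (R + h)^2 = R^2 + d^2, so h = sqrt(R^2 + d^2) - R.
h = sqrt(R**2 + d**2) - R          # exact:       ~4.99 miles
h_approx = d**2 / (2 * R)          # approximate: 5.0 miles (valid since h << R)

print(round(h, 2), h_approx)       # 4.99 5.0
```

On figures of this order Galileo could argue that lunar mountains rival terrestrial ones in height – a quantitative conclusion drawn from a visual convention (the behaviour of light and shadow on a curved surface) combined with elementary geometry.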

Whilst Galileo was observing the moon, Lodovico Cigoli was painting The

Virgin of the Immaculate Conception, a fresco in Santa Maria Maggiore in Rome

(Figure 12). For Renaissance Christians, the moon was a sign of the Immaculate

Conception and ‘pure as the moon’ was a common simile

(Edgerton Jr 1984: 226). Conventionally, the virgin is depicted standing on a

smooth moon but, following Galileo, Cigoli chose to make the moon rough50.

50 Edgerton Jr notes that to this day the image is officially called the Assumption of the Virgin rather than the Immaculate Conception but the prominence of the image represents tacit acknowledgement by the Church of Galileo’s moon.


This last bit of the story is intended to signal the dialogue that has always existed

between the visual culture of science and the visual culture of the society in which

it operates. It is also an early example of the popularisation of scientific images.

Even in the early seventeenth-century, changing the context of a scientific image

could change its meaning. Over time, science has become more, not less

embedded in visual culture and the dialogue has become increasingly productive.


Figure 12 Detail from Lodovico Cigoli’s Assumption of the Virgin (1610-12), Santa Maria Maggiore, Rome (fresco). The Virgin is seen standing on a rough-surfaced moon. The first example of the popularisation of a scientific image?

Examples

For concrete examples of scientific images we return to the STM images (Figure 13, page 118) and the images of the surface of Venus (Figure 15, page 134). The pictures of a ring of iron atoms were produced by a team at IBM51

using a scanning tunnelling microscope and are impressive in two ways

simultaneously. Firstly, they offer a dramatic view of an extremely tiny scene. So

high is the magnification and so fine is the detail in the images that before the

invention of scanning tunnelling microscopy in 198152, nobody would have

imagined that such a view would ever be possible. Secondly, the pictures offer a

clear view of interference amongst electrons: a phenomenon that physicists

understand well but never expected to ‘see’ nor even imagined to be ‘visible’ in

this way. The Magellan pictures are also technically ingenious. A thick layer of

cloud cloaks the surface of Venus (see Figure 16, page 135) making it invisible (to

human vision and optical telescopes). However, microwaves can penetrate the

cloud so the Magellan probe ‘illuminated’ the surface of the planet with pulses of

microwaves and recorded the reflections (i.e. they used radar). The pictures of the

surface produced by NASA were constructed from these data.

Both sets of pictures were produced with synthetic shading techniques

from three-dimensional computer graphics. In particular, the images in Figure 15

and Figure 13 (left and centre) employ a technique called ‘ray tracing’ in which

the colour of each pixel in an image is determined by ‘projecting’ a series of

virtual rays of light from the virtual camera, through each pixel to the virtual

scene. Each ray is followed (by the computer program doing the ‘rendering’) as it

is reflected by surfaces, its colour shifted by pigmentation, its path deviated by

refraction, its intensity attenuated by dispersion in ‘virtual haze’, etc.53 The result

of all these transformations along the ray’s path is the colour of the pixel in

question.
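The per-pixel logic described above can be sketched in a few lines of code. The following is a minimal, illustrative ray caster, not the software used by NASA or IBM: a single virtual ray is projected through each pixel into a scene containing one diffuse sphere, and the shading computed along its path determines the grey level of that pixel. The camera position, sphere, light direction and grey levels are all invented for illustration.

```python
import math

def ray_trace_pixel(px, py, width=64, height=64):
    """Follow one virtual ray from the camera through pixel (px, py)
    into a scene containing a single diffuse sphere, and return the
    resulting grey level (0.0 = black, 1.0 = white)."""
    # Camera at the origin looking down -z; map the pixel to the image plane.
    x = 2 * (px + 0.5) / width - 1
    y = 1 - 2 * (py + 0.5) / height
    dx, dy, dz = x, y, -1.0
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm

    # Scene: sphere centred at (0, 0, -3), radius 1; one directional light.
    cx, cy, cz, r = 0.0, 0.0, -3.0, 1.0
    lx, ly, lz = 0.577, 0.577, 0.577  # normalised light direction

    # Ray-sphere intersection: solve |t*d - c|^2 = r^2 for t.
    ocx, ocy, ocz = -cx, -cy, -cz
    b = 2 * (dx * ocx + dy * ocy + dz * ocz)
    c = ocx * ocx + ocy * ocy + ocz * ocz - r * r
    disc = b * b - 4 * c
    if disc < 0:
        return 0.1  # the ray misses the sphere: dim background 'haze'
    t = (-b - math.sqrt(disc)) / 2
    # The surface normal at the hit point gives the Lambertian shading.
    hx, hy, hz = dx * t - cx, dy * t - cy, dz * t - cz
    nn = math.sqrt(hx * hx + hy * hy + hz * hz)
    diffuse = max(0.0, (hx * lx + hy * ly + hz * lz) / nn)
    return 0.1 + 0.9 * diffuse
```

A full renderer adds exactly the transformations listed above (reflection, pigmentation, refraction, attenuation) by following the ray beyond this first hit; the principle, accumulating the ray’s history into one pixel value, is unchanged.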

Both sets of pictures can be understood in two distinct ways: they are both

research tools and a form of landscape art. What is remarkable about them is the

way these two roles interact. Interpretation in a scientific context draws on the

cultural sophistication viewers bring to the interpretation of landscapes. By

51 Figure 13 (left) appeared first on the cover of Science Vol. 262, 8 October 1993. Figure 13 (right) appears in M. F. Crommie, C. P. Lutz and D. M. Eigler, Confinement of Electrons to Quantum Corrals on a Metal Surface, Science, Vol. 262, 8 October 1993, pp 218-220: 219. Figure 13 (centre) appeared first on the cover of Physics Today, November 1993.

52 Scanning tunnelling microscopy was invented in 1981 by Gerd Binnig and Heinrich Rohrer of IBM’s Zurich Research Lab. They received a Nobel Prize for this contribution in 1986.

53 For a more detailed description of synthetic shading techniques and ray tracing see William J. Mitchell 1994, The Reconfigured Eye, pp 136-161.


invoking the commonplaces of landscape representation to make data subject to

our sense of vision, researchers create images that can be read directly as if there

were no interpretation of data involved. These readings are immediately available to both physicists and non-physicists: the IBM picture, for instance, can be read as a picture of atoms in the same way that a still life can be read as a picture of fruit

or a landscape can be read as a picture of hills. Physicists, however, understand

that there is another level of interpretation involved, that their ‘picture of atoms’ is

not like a picture of hills; it is actually a picture of an array of measurements that

themselves are determined by other, more basic measurements.

‘Reading’, ‘Viewing’ and the Context of Visual Texts: Pictures of Electron Waves

Both physicists and non-physicists agree that Crommie et al’s scanning

tunnelling micrographs are ‘pictures of electron waves’. However, what it means

to say ‘this is a picture of electron waves’ is different for each group. (This

distinction is more fundamental than any confusion that might be caused by a

layperson’s ignorance about atoms and quantum mechanics.) To non-physicists

the picture looks like the result of an experiment – the evidence on which

physicists draw conclusions. That is, the picture itself is independent of, and distinct from, the claim that it is a picture of electron waves. To physicists, on the

other hand, the picture is read as an integrated part of a claim about electron

waves. It is not independent evidence on which conclusions are drawn – the

independent evidence was the actual data recorded by the instrument. The picture

is more like a summary of results that makes conscious reference to the theoretical

context in which the data are explained – more a picture about electron waves

than a picture of them.


Figure 13 Scanning tunnelling micrographs of 48 iron atoms placed on a copper surface. The ring of iron atoms acts as a ‘corral’ that causes electron waves on the surface of the copper to be partially reflected (see page 253). All three images were created with the same data. The height of the surface features has been exaggerated by a factor of 50 to 100. The image on the left was created for the cover of Science (Vol. 262, 8 Oct. 1993); the centre image was created for the cover of Physics Today (Nov. 1993) and the image on the right was used in Crommie, et al, 1993.

To reiterate, physicists cannot see the picture in isolation from the network

of claims made in the paper. For physicists, the picture is part of a greater whole –

to say ‘this is a picture of electron waves’ is to implicitly refer to the rest of the

paper. Non-physicists do not recognise this reference. It is sufficient to treat the

picture as a ‘snap’ – an independent record. Physicists treat it as a summary of

results rather than the result of an experiment. Non-physicists can imagine that the

picture speaks for itself.

The IBM pictures show 48 iron atoms arranged on the surface of a crystal

of copper. The researchers placed each iron atom individually using the stylus of

the scanning tunnelling microscope54. The ripples in the picture are caused by

electrons at the surface of the copper crystal. The surface electrons scatter when

they collide with the iron atoms (Crommie, Lutz and Eigler 1993; Heller,

Crommie, Lutz and Eigler 1993). The researchers describe the arrangement of

iron atoms that they built as a ‘corral’ because electrons within the circle are

reflected inwards and (partially) prevented from escaping.

In quantum mechanics (the physical theory of small-scale phenomena),

electrons are understood to have the properties of both waves and particles. This

‘wave-particle duality’ is often taken to be the signal characteristic of quantum

mechanics. In the case of the surface electrons in the copper crystal, the scattering

of electrons by the iron atoms is understood as the partial reflection of electron

waves. One of the characteristics of wave phenomena is ‘interference’: Where the

54 For details of the manipulation technique see Eigler and Schweizer 1990. For further details of the IBM pictures and scanning tunnelling microscopy in general see Appendix 2: Scanning Tunnelling Microscopy and Atomic ‘Corrals’ (page 253).


peak of a wave meets the trough of another wave the two waves cancel each other

out in that place. Where two waves meet with their peaks and troughs aligned they

combine to form one big wave in that place. On the surface of copper, where

reflected electron waves meet on-coming electron waves they interfere and a

‘standing wave’ forms55. The picture, then, shows electron standing waves.
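The arithmetic of this cancellation and reinforcement can be verified directly. The following is a minimal sketch of a classical one-dimensional wave (not a solution of the Schrödinger equation): superposing an incident wave with its reflection yields a pattern whose nodes stay fixed in space.

```python
import math

def standing_wave(x, t, k=1.0, omega=1.0):
    """Superpose an incident wave cos(kx - wt) with its reflection
    cos(kx + wt); the sum equals 2*cos(kx)*cos(wt), a standing wave."""
    incident = math.cos(k * x - omega * t)
    reflected = math.cos(k * x + omega * t)
    return incident + reflected

# At a node (where cos(kx) = 0) the peak of one wave always meets the
# trough of the other, so the sum is zero at every instant...
node = math.pi / 2
assert all(abs(standing_wave(node, t)) < 1e-9 for t in (0.0, 0.3, 1.7))
# ...while at an antinode (kx = 0) the waves reinforce, giving twice
# the amplitude; the antinode oscillates in time but does not travel.
assert abs(standing_wave(0.0, 0.0) - 2.0) < 1e-9
```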

In general, if physicists make electrons visible they look like particles. For

instance, they appear as discrete flashes of light on a phosphor screen or as a

clearly defined path in a bubble-chamber photograph (see note 25, page 85). In a

celebrated experiment (A. Tonomura, et al 1989), electrons were made to interfere

by passing a single wave though two slits. Beyond the slits, individual electrons

arrived one at a time and were recorded on the screen. The overall pattern of

electrons revealed interference (from the two waves emerging from the slits) yet

each individual flash on the screen was caused by the arrival of a particle. This is

known as a ‘double-slit’ experiment. If you check to see which slit the electron

goes through then the pattern disappears56. In contrast to Tonomura’s results,

there is nothing particle-like about the IBM pictures of electrons. Because of this,

they stand out in the iconography of quantum mechanics and many physicists find

them particularly engaging.

Wave-particle duality is a troublesome aspect of modern physics. Though

quantum mechanics is an indisputably effective way of describing phenomena, it

does not provide a coherent picture of what is going on, merely a way to predict

outcomes. By itself, quantum mechanics does not tell us what an electron is like.

Instead, it tells us the probability of finding one in a particular place. The

interpretation of quantum mechanics was a controversial issue when the theory

was established in the 1920s and 1930s. In recent years, physicists have again

begun to think seriously about the problem57. However, the strategy most

55 This is a wave that, because it is restrained at some point, does not seem to travel in a longitudinal direction. The motion of a violin string, for example, is a standing wave – the wave does not travel along the string in contrast to the sound wave it causes, which travels away from the instrument. 56 For a popular explanation of the effect see Davies and Brown 1986: 7-11; Feynman 1992: 130-148; and almost every other popular discussion on quantum mechanics. 57 See for instance the essays in Cushing and McMullin 1989. For accounts of the ‘Copenhagen hegemony’ see Fine 1996 and Cushing 1994. Three popular accounts that reflect the current ambivalence towards the interpretation of quantum mechanics are Davies and Brown 1986; Gribbin 1995; and Squires 1994. See also Mermin 1990, Chap. 16, What’s Wrong with this Pillow?


physicists adopt is to avoid thinking about the interpretation of quantum

mechanics if at all possible. This is the advice given by some of the twentieth

century’s most celebrated physicists. For instance, Richard Feynman (who

received a Nobel Prize for his contribution to the quantum theory of the

electromagnetic force) explains to his (popular) audience,

I am going to tell you what nature behaves like. If you will simply admit

that maybe she does behave like this, you will find her a delightful,

entrancing thing. Do not keep saying to yourself, if you can possibly

avoid it, ‘But how can it be like that?’ because you will get ‘down the

drain’, into a blind alley from which nobody has yet escaped. Nobody

knows how it can be like that58.

Steven Weinberg (who received his Nobel Prize for extending Feynman’s

theory to include the ‘weak nuclear force’) makes the point with an anecdote

about a promising student who was supervised by a colleague of his. A few years

after Weinberg had met the student he still hadn’t heard of anything he had

done. When he bumped into his colleague, he asked what had interfered with the

student’s research. His colleague shook his head and replied sadly, “He tried to

understand quantum mechanics” (Weinberg 1993: 66). David Mermin captures

the attitude of most physicists to the ‘Copenhagen Interpretation’ (the dominant

interpretation of quantum mechanics) when he says, “If I were forced to sum up in

one sentence what the Copenhagen interpretation says to me, it would be, ‘Shut up

and calculate!’” (1990: 199).

Quantum mechanics seems to imply that we can only ever know about one

aspect of nature at a time and that different views of the same phenomenon will be

irreconcilable. This forces many physicists to adopt a pragmatic view of their

subject. They argue that the goal of physics is not to provide a ‘picture’ of nature

but to be able to make predictions59. However, physicists still exhibit the desire to

fully describe and understand phenomena; they want to be able to say what nature

is like. The ambivalence towards quantum mechanics comes from the tension

58 Richard Feynman, Probability and Uncertainty – the Quantum Mechanical view of Nature. BBC TV lecture, Cornell University 1965. In Feynman 1992: 128-129.

59 A view very much like this is expounded by John Taylor in The Ghost in the Atom: “You can’t take a particular electron, at a given place, and say I’m now measuring its momentum, because it doesn’t mean anything. That’s not allowed.” (Davies and Brown 1986: 110)


between physicists’ pragmatism and their realism. The IBM pictures are

remarkable because they seem to reconcile these to some extent. It is satisfying to

see a picture of electrons that looks like a wave. In his commentary on Crommie

et al.’s paper in Science, Mark Reed makes the following observations,

When the electron density inside the corral is measured, distinct circular

ripples are observed, reminiscent of water waves in a circular pool… For

the student of quantum mechanics, this is a pleasingly aesthetic and

convincing visual demonstration of quantum mechanical wave

interference. (Reed 1993)

A commentary by A. Zettl, on an earlier paper by the IBM group

(Crommie, Lutz and Eigler 1993), also isolates the direct observation of wave

phenomena as the main contribution of the STM studies.

A central concept of modern physics is the wave-particle duality of

matter. The wave-like nature of freely propagating particles can be

inferred from various diffraction and scattering experiments. These

methods typically extract the quantum-mechanical properties of matter

from changes in the momentum or energy of interacting particles. [In this

issue of Nature] Crommie et al. describe an experiment in which the

wave nature of electrons is directly observed from standing-wave

patterns spatially resolved on the surface of a clean copper crystal… Such

visualisation of purely quantum-mechanical interference phenomena

brings gratifying reality to these typically ethereal concepts. (Zettl 1993.

Emphasis added).

The main reason the IBM pictures are affecting relates to the education

and socialisation of physicists. Mark Welland recognises this when he argues that,

“It won’t be surprising if this image soon appears in undergraduate quantum

mechanics courses” (1994: 36). This has indeed occurred. Crommie et al’s images

feature in several new editions of undergraduate textbooks60. Graham P. Collins

also recognises the connection with education but rather than considering the next

generation of physicists, he looks back nostalgically to his own generation’s

initiation in physics.

60 For example, Richard Wolfson and Jay M. Paschoff 1998, Physics with Modern Physics for Scientists and Engineers (Figure 13 (left) appears on page 1088) and B. Cameron Reed 1994, Quantum Mechanics: A First Course, 2nd Edition (Figure 13 (left) appears on the cover).


Everyone who has completed an elementary quantum mechanics course

has seen plots of solutions of the Schrödinger equation for the archetypal

“particle in a box”. Now IBM researchers have used scanning tunnelling

microscopy to image the rippled density of states of electrons inside a

“corral” built with a few dozen carefully positioned iron atoms on a

copper surface. (Collins 1993: 17)

The power of the pictures comes from the way they conform to physicists’

expectations on a variety of levels. They look like the solution to a familiar

undergraduate problem in theoretical quantum mechanics allowing physicists to

recognise the scene. They also have a range of attributes that signify waves,

allowing Mark Reed to make the connection between these waves and water

waves in a circular pool. Finally, they are ‘real’ pictures – images created from the

interaction of an automatic imaging apparatus with the external world – and so

have the same sense of artlessness as that carried by photographs. They are

particularly evocative because they hint at a straightforward resolution to a

generally unsatisfactory (or at least unsatisfying) aspect of modern physics. They

allow physicists to believe for a minute that they can know electrons in the same

way as they know other objects that make up the world – by looking at pictures of

them.

Our knowledge of the enormous effort that went into producing the

images does not disrupt our faith that the world is like that if you look at it right.

And, indeed, the world can legitimately be said to be like the images. What is

easier to overlook in our enthusiasm is the number of choices that have been made

by the researchers. There is a process of ‘translation’ involved in scanning

tunnelling microscopy. The mysterious, microscopic properties of atoms (to which

we have no direct access) are translated into their analogues in the macroscopic

world. The technique requires arbitrary choices to be made about how the data are

rendered into an image of a surface: what angle the virtual camera looks from,

how the surface is illuminated by virtual lights, what colours it should be, what

the colours represent physically, how to interpolate between discrete data to form

a smooth surface, etc. It would be perverse of the researchers if the choices they

made did not make their pictures of electron waves conform to physicists’ beliefs

and expectations. In particular, the choices about rendering should make sense in


the context of the specific claims made in the paper. At one level, such decisions

are entirely arbitrary but on the rhetorical level, they are motivated.

The three images (Figure 13) are identical in what they denote. They are

each a visualisation of the same data. However, they were produced with different

audiences in mind:

The black and white versions were created for the text of our papers. The

colour images were specifically created for communication in less formal

formats, i.e. talks, cover illustrations, communicating to people who are

not trained as scientists. (Donald Eigler, personal communication)

In particular, Figure 13 (left) was produced specifically for the cover of

Science (8 October 1993) by Donald Eigler and Figure 13 (centre) was produced

specifically for the cover of Physics Today by Dominique Brodbeck. The

colourful images have also appeared in other ‘downstream’ contexts including

television, magazines61 and textbooks (note 60, page 121). Eigler produced most

of the renderings to have emerged from the IBM group, using custom

visualisation software designed by him and William Rudge. The software gives

the image-maker full control of all of the ‘secondary qualities’ represented

including the perspective.

The only information in the images is the shape of the surface, i.e. the

trajectory of the tip. The use of colour is ‘false’ in the sense that colour is

not inherent in the data. My view is that the use of colour is appropriate if

it serves to better convey the information (in this case the shape)

contained in the data. We have also chosen our colouring to help

distinguish the iron adatoms from the surface state electrons. If there was

any doubt about what was an iron adatom and what was a standing wave

due to a surface state electron then I might have been reluctant to use

colour in this way. But there was no doubt at all. (Donald Eigler)

To understand the nature of the arbitrary elements in STMs (and thus the

significance of their context) we need to understand the technique itself. The

technique is described in greater detail in Appendix 2: Scanning Tunnelling

Microscopy and Atomic ‘Corrals’ (page 253). For the moment, it is sufficient to

note what scanning tunnelling microscopy can tell us about surfaces.

61 For example, Business Week, 8 January 1996.


The trajectory of the tip of the instrument represents a contour of constant

electron density62. Thus, the images represent an iso-surface of electron density.

There are two groups of electrons that contribute to this density. Electrons in the

first group, the ‘bulk’ electrons, stick out further into the vacuum at the sites of the

iron adatoms so the electron density iso-surface shows a bump at these sites.

Electrons in the second group, the ‘surface state’ electrons, undergo their own

density oscillations due to interference as they are reflected from iron atoms, other

impurities, defects in the crystal structure, step edges in the crystal structure, etc.

(Crommie, et al 1993a). A rough description of the situation is as follows. The

object in the picture is a leaky wall (the 48 iron atoms). Because of interference

between incident and reflected electron waves, the surface state electrons within

the corral are bunched up in places and spaced out in others. This creates the

corrugations in the electron density iso-surface63.
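This composition of the surface, bumps at the adatom sites plus an interference ripple inside the corral, can be sketched as a toy model. The sketch below is not the physics used by the IBM group; the amplitudes and length scales are invented for illustration. A zeroth-order Bessel function is used for the ripple because it is the circularly symmetric standing-wave form appropriate to a circular boundary, computed here from its integral representation so no external library is needed.

```python
import math

def bessel_j0(z, steps=200):
    """J0(z) via its integral representation (1/pi) * integral over
    [0, pi] of cos(z * sin(theta)) d(theta), by a simple Riemann sum."""
    h = math.pi / steps
    return sum(math.cos(z * math.sin(i * h)) for i in range(steps)) * h / math.pi

def corral_height(x, y, ring_radius=7.0, n_atoms=48, k=0.5):
    """Toy electron-density iso-surface: a standing-wave ripple inside
    the ring of adatoms plus a Gaussian bump at each adatom site.
    All heights and widths are arbitrary illustrative choices."""
    r = math.hypot(x, y)
    # Interference ripple from reflection at the 'leaky wall'.
    ripple = 0.2 * bessel_j0(k * r) if r < ring_radius else 0.0
    # Bumps where bulk electrons stick out at the iron adatom sites.
    bumps = 0.0
    for i in range(n_atoms):
        a = 2 * math.pi * i / n_atoms
        ax, ay = ring_radius * math.cos(a), ring_radius * math.sin(a)
        d2 = (x - ax) ** 2 + (y - ay) ** 2
        bumps += 1.5 * math.exp(-d2 / 0.5)
    return ripple + bumps
```

Everything downstream of such a height map (camera angle, lighting, colour) is precisely the rendering freedom discussed below: the data fix only the shape.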

Let us compare STM with standard electron microscopy. The image in a

transmission electron micrograph (TEM) represents the areas where a coherent

beam of electrons has been obscured by a microscopic object. It is a shadow

pattern akin to an x-ray. There is something about both x-rays and TEMs that is

already ‘optical’. The patterns of light and dark on the film or the screen

correspond to an actual, physical, interaction. Although the primary quality we are

(usually) interested in learning about is shape, the secondary qualities by which

we are able to sense shape (the patterns of light and dark) have their origin in

other primary qualities of the object (the primary qualities that give rise to the

interaction with x-rays or electrons)64. This is not the case with STMs. The

patterns of light and dark in a scanning tunnelling micrograph are completely

divorced from the actual primary qualities of the surface (the interaction with the

62 More precisely, in constant current mode with a low bias voltage between the tip and the surface, an STM image represents a first-order approximation of a contour of constant surface local density of states at the Fermi level (Wiesendanger 1994: 113).

63 I am grateful to Don Eigler for an interesting discussion on what is meant by ‘surface’ in STM and how to interpret the corral images. Remaining mistakes are, of course, my own.

64 One technique in TEM is not so dissimilar to STM. To reveal topographic features, the sample is prepared with a process called ‘shadow casting’ in which a carbon cast of the object is bombarded from a low angle with atoms of a heavy metal such as gold, resulting in areas of high density of metal and areas of shadow. It is not the sample itself that is imaged but this metal-coated cast (Watt 1985: 84-87). Ultimately, the only primary quality of the sample involved in the process of representation is the shape of the surface. The image that results can be interpreted as if it were an object illuminated from one side. As with other techniques, we can understand this as a process of lending an invisible object secondary qualities.


surface gives rise only to a topographical array). Only one primary quality

is represented (shape). This means that any coherent representation of shape can

be employed whereas in the case of x-rays or TEMs we are generally limited to

the representation that arises as a result of the particular interactions involved.

The difference between STMs and TEMs also has consequences for the

‘look’ of each. The look of an imaging technique is the property or style

associated with it that allows us to recognise its products as belonging to a family.

It is the look of a technique that allows us to recognise and categorise images even

before we have identified the object or phenomenon they represent and that guides

us in how to take this next step. The look of TEMs (generally grainy,

monochrome, usually either very high or very low contrast) is determined in large

measure by the physical limitations of the technique itself. The look of STMs is

almost entirely conventional. In both cases, with a little experience, a viewer can

identify an image as a TEM or an STM from its look. However, in the case of

STMs what the viewer recognises is the conventions of representation that have

emerged spontaneously in the STM community (and become ossified in

commercial instruments and visualisation software). The look of STMs does not

depend on anything intrinsic to the technique itself. The conventional methods of

representing surfaces have evolved with the technique and will continue to

become more sophisticated as viewers/users become more visually ‘literate’ and

push the conventions in subtle new directions.

We see in the images of the corral of iron atoms three very different

representations. Each uses precisely the same data and each is equally accurate

and ‘true to nature’. That is to say, each can claim to say something about what

this particular arrangement of matter is ‘like’. However, each representation

extends the analogy beyond the shape it denotes. The representational codes

adopted operate at the level of connotation as well as denotation. This is

particularly obvious in the use of the colour-map in Figure 13 (centre). The way

that colour denotes height is easily ‘read’ because of the connotations of light

spectra the colour-map carries.

Analogy and selection are crucial aspects of the process of representation.

Our problem is to account for the particular set of representations the authors

adopt. As we have seen, only one primary quality is denoted in the images and so

there is considerable freedom when it comes to representing it. The freedom


comes not only from the lack of restraints but also from the sophisticated visual

culture in which the image makers are embedded. The image makers have a wide

variety of established representational codes to draw on and they can rely on a

high level of visual literacy amongst their audience. (This comes from experience

with a variety of visual genres and a large ‘vocabulary’ of representational codes.)

In practice only a subset of the possible codes is considered appropriate for

representing information about surfaces acquired with scanning tunnelling

microscopy. (It is this subset that determines the look of STMs.) But despite the

self-imposed restrictions there is considerable flexibility. Working as they are in a

dynamic, evolving medium, the image makers are at liberty to combine codes

chimerically or even to create wholly new codes. Given this flexibility, the actual

codes employed can be explained only with reference to the image makers’

interest in the object represented. As Gunther Kress and Theo van Leeuwen point

out (1996: 6-7), it is the interest in the object being represented that guides the

selection of what are taken to be the ‘criterial aspects’ of it.

Kress and van Leeuwen argue that it is never a ‘whole object’ that is

represented but only a few carefully selected ‘criterial aspects’ that are regarded

as adequately or sufficiently representative of the object in a given context. This

notion has to be modified in the case of scientific imaging techniques. The

selection of criterial aspects of an object takes place at the moment of choosing

the appropriate imaging technique. In using scanning tunnelling microscopy in

constant current mode an image maker is ipso facto selecting the topography of a

surface (or the topography of an iso-surface of electron density) as a criterial

aspect of the object. However, the data acquired constitute, in effect, a new object

that needs to be represented. As we have seen, this new object is ‘under-qualified’

– we have to lend it secondary qualities before we can represent it. Thus, on the

level of the data acquired with an imaging technique, not only is the ‘whole

object’ represented but also a set of fictional criterial aspects has to be selected

to do so. As Kress and van Leeuwen go on to argue (1996: 7), sign-making is the

process of the constitution of a metaphor in two steps. In the case of Crommie et

al’s STMs, iron atoms and surface electrons on copper are (most like) a

topographical surface and a topographical surface is (most like) a pattern of light

and shade (in the case of Figure 13 (right)). The first analogy is intrinsic to the


technique itself. Analogy is a process of classification, which in this case is

justified by and refers to theories of solid-state physics.

The second analogy also involves classification – this time the way

viewers orient themselves to the surface is established. The three different ways to

render the shape represent three different types of surface – that is, three different

ways to think about a nano-scale atomic surface. In Figure 13 (left), the position,

orientation and angle of view of the camera as well as the way the scene is lit and

the way the image is cropped resembles (and makes reference to) representations

of landscapes. Thus, Figure 13 (left) implicitly includes the claim that the object is

like a landscape. This allows viewers to employ their previous experience with

landscapes and (especially) their previous experience with representations of

landscapes in making sense of the object they are confronted with.

Figure 13 (left) was produced specifically for the cover of Science by

Donald Eigler. Of all the corral pictures it is his favourite for the reasons below,

I like this most because 1) peering through the gaps between iron atoms

gives a strong sense of perspective and communicates just how large

atoms have become and 2) you really get a sense of the ups and downs of

the surface state wave function in the corral. You really get the idea of

something really waving in there. (Donald Eigler, personal

communication)

It is clear that Eigler is quietly proud of the image (quite justifiably so) not

just as an image maker but for the scientific achievement it represents. On the one

hand, there is general pride in the technique of scanning tunnelling microscopy:

the image “communicates just how large atoms have become”. On the other hand,

there is particular pride in his own group’s achievement, which is displayed

clearly in the image. One of the tasks of the image is to communicate these two

levels of achievement. The language Eigler uses to express his enthusiasm for

STM is interesting too. Rather than making viewers small and precise, scanning

tunnelling microscopy makes atoms large. This is more significant than just a

figure of speech because STM can only make atoms large. We will never truly

feel that we are observing and interacting with a tiny scene. As with most

experiences with microscopy, we feel we are observing a scene that has been

magnified – which, of course, is precisely what we are doing. This is particularly

true for STM because there is such a deficit of secondary qualities. As with


Hooke’s flea (page 79), the detail has been represented using conventions

associated with large objects – we learn about the atoms by analogy.

One of the analogies is landscape. The “strong sense of perspective”

comes from our familiarity with the landscape genre – we can readily place

ourselves in the scene. The particular achievement of the IBM group that Eigler

wants to represent is the interference pattern within the corral. It should be a

picture of electron waves, thus he adopts a visualisation idiom that carries as

many connotations of waves as possible to “get the idea of something really

waving in there”.

Figure 13 (centre) is not so readily classified as a landscape. The reasons

for this are various, the most important being that unlike Figure 13 (left) the scene

does not extend to the edge of the image. In addition, the colour-map adds a level

of abstraction to the image that acts against the sense of naturalism associated

with representations of landscape (the image would seem more at home in an atlas

than in, say, a book of landscape photography). Also, the position and view of the

camera place the viewer above the scene, whilst landscapes are generally viewed

from within. Figure 13 (centre) is more easily identified with still-life than

landscape. This genre of representation is almost as rich and sophisticated as

landscape. Thus, Figure 13 (centre) makes reference to geographical atlases and to

still life at the same time.

In many images that employ colour-maps, a specific visual clue is adopted

to make them easier to read as three-dimensional objects. This involves choosing

a minimum value below which values are not plotted. The data that are plotted

look like islands poking up from the sea, and this analogy is one reason why the

technique helps viewers to make sense of the form. For this reason, the colour that

represents the minimum value is often chosen to be blue. Selecting a minimum

value to plot also provides a base-line (or base-plane), running from front to back, to

help the viewer decide whether a point’s position on the page signifies height or

distance in the three-dimensional scene.
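The mechanics of this technique can be sketched in a few lines of illustrative Python (the colour names and values here are hypothetical, not drawn from any actual plotting package):

```python
# Illustrative sketch of the 'minimum value' technique described above.
# Heights below a chosen floor are clipped to a base-plane, and the
# base-plane is coloured blue so the plotted data read as islands
# rising out of a sea.

def clip_to_base_plane(heights, floor):
    """Replace every value below `floor` with `floor` itself."""
    return [max(h, floor) for h in heights]

def colour_for(height, floor, ceiling):
    """Map a clipped height to a (hypothetical) colour name."""
    if height <= floor:
        return "blue"                       # the base-plane reads as water
    t = (height - floor) / (ceiling - floor)
    return "green" if t < 0.5 else "brown"  # low ground vs high ground

profile = [0.1, 0.4, 0.9, 0.3, 0.05]
clipped = clip_to_base_plane(profile, floor=0.2)
colours = [colour_for(h, 0.2, 1.0) for h in clipped]
```

The clipped values form the flat 'sea'; only data above the floor register as form.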

In Figure 13 (centre) the whole data set is represented – this minimum

value technique is not adopted. However, the nature of the surface itself means

there is already a base-plane that assists the interpretation of the form. The choice

of purple and blue for values on this plane serves several purposes. For the colour-

map to benefit from its familiarity it must begin at red and end at purple or vice


versa. In this case, the lowest values are purple. The blues suggest water, which

immediately allows the viewer to recognise the plane as a horizontal base-plane.

There is also a choice about the exact distribution of colours. In this case, the

difference between warm and cold colours marks a clear distinction between the

surface electrons and the adsorbed iron atoms. Finally, what the image makers are

really interested in are waves. For both them and their audience, observing

ripples in water is the most important source of visual experience of waves.

Making a connection between the surface and water allows viewers to bring their

experience of waves to bear on making sense of the form they are presented with.

The monochrome image (Figure 13 (right)) is the most economical with

secondary qualities and representational codes. We learn of the shape of the

atomic scene from chiaroscuro alone rather than from a combination of

chiaroscuro and linear perspective. This also makes it the most ambiguous

representation. There are insufficient clues for a viewer to know for certain what

shape is being represented. Shadow is only an indication of form so long as a

viewer knows where the light is coming from. In the absence of other clues, the

viewer must guess. Western observers most frequently assume the light falls from

high up on the left-hand side. This is partly because sunlight rarely lights objects

from below and partly a convention arising from this being the most convenient

way for right-handed draughtsmen to shade objects (Gombrich 1977: 229;

Gregory 1977: 185-187). In Figure 13 (right), the viewer has to choose whether

the light falls from the top right or the bottom left. Most will opt for the former.

Choosing the latter inverts the shape, making depressions look like prominences.

Thus, the viewer is faced with the same problems as Hooke was in trying to make

sense of his view with a microscope (see page 69). Turning an ambiguous image

upside down will usually cause the depth cues to reverse

(Gregory 1977: 185-187)65. Such ambiguities do not occur in the other two

representations because linear perspective allows the viewer to recognise which

lighting option is correct.
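The role of the assumed light direction can be demonstrated with a toy shading model (a sketch of the general principle, not of how STM images are actually rendered):

```python
def shade(profile, light_from_left=True):
    """Toy Lambert-style shading of a 1-D height profile: slopes that
    face the assumed light come out bright, slopes that face away come
    out dark. Returns brightness values for the interior points only."""
    sign = 1.0 if light_from_left else -1.0
    return [0.5 - sign * 0.5 * (profile[i + 1] - profile[i - 1])
            for i in range(1, len(profile) - 1)]

bump = [0.0, 0.5, 1.0, 0.5, 0.0]   # a prominence
dent = [-h for h in bump]          # the same shape inverted
```

Because a bump lit from one side shades exactly like a dent lit from the other, nothing in the shading itself can settle which reading is correct; the viewer must supply the light direction.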

65 A particularly striking example of this effect appears in one of the IBM group’s own papers (Crommie, Lutz, Eigler, and Heller 1994, Fig. 7, p 133). The image (which is fairly similar to Figure 13 – right) appears to have been printed upside down because most viewers find it almost impossible to see the iron atoms as prominences rather than depressions. On turning the page upside down the iron atoms immediately spring up again.


Despite its ambiguities, this is the visualisation idiom that was chosen for

the Science paper itself. Donald Eigler believes that the other two would have

been inappropriate. “The black and white image in the paper actually provides the

reader with more and clearer information than the rendering on the cover of

Science” he explained. This is presumably because we do not have to rely on our

ability to ‘decode’ the linear perspective to see the shapes of the waves and the

corral. In this sense, the images are less ambiguous. However, there is probably

also a desire to avoid over-interpreting the data for the authors’ colleagues (or

being seen to over-interpret it). ‘Gratuitous’ detail in scientific images serves to

disguise their constructedness and makes them appear more like straightforward

photographs (Moser 1996). That is, overloading an image with arbitrary elements

can disguise their arbitrariness.

In choosing a visualisation idiom for images that appear in the papers

themselves, the guiding principle is parsimony. The representational codes

adopted must be seen to do the specific job required of them. The choices

associated with the ‘arbitrary aspects’ of the image (see page 59) should appear to

be motivated only by the role they play in the paper. Other motivations such as

aesthetic considerations are viewed with suspicion. In the case of scanning

tunnelling micrographs, there is still considerable flexibility when making

‘legitimate’ choices. This is because scanning tunnelling microscopy is a long

way from human vision on both the ‘resolution’ axes and the ‘similarity to sight’

axes (see page 88). Given this wide range of legitimate representational codes to

choose from, the authors choose those that carry connotations that support the

claims they make in the paper.

Thus, the images provide enough secondary qualities to enable a viewer to

discern the shape but no more. Where decisions have to be made they choose

colours, surface qualities, lighting regimes, etc. that have become the ‘default-

option’ in the field and thus indicate that the choice is not a personal one. This

also allows their colleagues to recognise the type of object they are looking at

much more quickly. Yet, even with these restrictions, the image makers still have

a choice and choose representations that fit with their argument. Overall, the


papers of the IBM group66 show a preference for simple, monochrome views like

Figure 13 (right). Monochrome views with a perspective like that of Figure

13 (centre) are also popular. Another visualisation idiom used by the group maps

the colour or shade to the slope or curvature of the surface (for example the

surface is bright where the curvature is great but the flat areas are dark)67.
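This curvature-mapped idiom can be paraphrased schematically (an illustrative sketch, not the group's own rendering code):

```python
def curvature_shading(profile):
    """Toy version of curvature-mapped shading: brightness follows the
    magnitude of the local second difference, so sharply curved regions
    come out bright and flat regions dark."""
    curv = [abs(profile[i - 1] - 2 * profile[i] + profile[i + 1])
            for i in range(1, len(profile) - 1)]
    peak = max(curv) or 1.0   # avoid dividing by zero on a flat profile
    return [c / peak for c in curv]

step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a flat terrace with one edge
brightness = curvature_shading(step)     # bright only at the edge
```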

To make the shape easily ‘readable’, image makers draw on well

established codes. These rely on selecting criterial aspects of objects in what

Gunther Kress and Theo van Leeuwen describe as a “double metaphoric process

in which analogy is the constitutive principle” (1996: 7). The choices made by

Crommie et al are driven by two imperatives. On the one hand, they strive for the

same ideal as Erwin Christeller in his Atlas of histology (see page 71). That is,

they seek an image that is (apparently) free from human intervention – an image

that a viewer can take as independent evidence against which to judge the claims

made in the paper. Such an image conforms to the norms of ‘virtual witnessing’

(see page 74). On the other hand, they want the claims made in the paper to

emerge spontaneously from the image. That is, they want the image to direct the

reader to the author’s conclusions.

It should not come as a surprise that, as Mark Reed notes, the corral

images are “reminiscent of water waves in a circular pool” (see page 121). The

codes chosen to visualise the data and the ‘fictional secondary qualities’ attached

to the data were chosen deliberately to carry connotations of wave interference

and these are carried most clearly by water. The rhetoric of the images encourages

readers of the paper to come to the same conclusion about the result of the

experiment as the rhetoric of the text. There is nothing deceitful about this in the

‘age of the interpreted image’. In the age of the mechanical image the rhetoric of

the image was quite distinct from the rhetoric of the text and such a move would

be frowned upon. But in the papers of the IBM group, the audience for the images

is expected to read them as surely (and as sceptically) as it is expected to read the

text.

66 Eigler and Schweizer 1990; Crommie, Lutz and Eigler 1993a & 1993b; Heller, Crommie, Lutz and Eigler 1994; Crommie, Lutz, Eigler and Heller 1995 and 1996. 67 See Eigler and Schweizer 1990 (including the cover of Nature) and the group’s Web site, Scanning Tunneling Microscopy: www.almaden.ibm.com/vis/stm/ (July 1999).


For the downstream viewer, though, reading the images is not at all

straightforward. There is little chance of distinguishing the intrinsic features of the

surface from the convenient fictions that make those features visible. Even the fact

that the vertical scale is greatly exaggerated in the images is not at all obvious. An

interesting feature of most visualisation idioms employed in STM is that they give

the impression of a very sharp distinction between substance and vacuum, which

does not reflect the intrinsic problems of even defining ‘surface’ on such scales

(see page 253). This is because the surface defined by the trajectory of the tip is

very distinct. Also, it is much easier to represent shapes if you can use visual cues

such as specular highlights and distinct shadows, which are associated with

smooth, polished materials. In principle, STMs could show the surface of a

material looking fuzzy either by including theoretical assumptions in the

rendering or by combining a number of passes – collecting several different iso-

surfaces of electron density. The difference this would make can be seen in Figure

14 (page 133), which shows two representations of a calculated ‘atomic orbital’ or

probability of finding an electron with particular energy in an atom. Figure

14 (right) represents the ‘density’ of the electron (or probability of finding it) at

each point in space by relating it to opacity. Thus we get a sense of the fuzziness

of the boundary of an atom. Figure 14 (left) represents an iso-surface of the

‘density’, which gives us a much clearer sense of the shape of the orbital. STMs

are more like Figure 14 (left) than (right). This raises the question, why do surface

scientists not explore alternative ways of representing the surfaces of materials?

Why are the surfaces always so hard and shiny?


Figure 14 Two representations of an atomic orbital (4f0). Electrons in atoms occupy distinct volumes of space according to their energy. The shape they inhabit (and the chance of finding one within that shape) is called an orbital. This is an analogy with planets in orbit around the sun – each electron has a different orbital just as each planet has a different orbit. Unlike planets where the orbits are all ellipses, atomic orbitals can be very complex shapes. The situation is very similar to the way electrons occupy space at the surface of metals. The electron spends half its time in the blue region and the other half in the red region. The probability of finding an electron in a particular place is indicated in the right hand image by the opacity at that point. The left hand image shows an iso-surface of probability (i.e. the probability of finding an electron is equal at each point on the surface). The left hand image employs specular highlights, reflections and shadows in addition to linear perspective to make the shape easy to discern. The right hand image uses just shadows and even these are indistinct.
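The contrast between the two idioms in Figure 14 can be expressed schematically (a sketch of the two mappings, not of any actual volume-rendering code):

```python
def opacity_render(densities):
    """Figure 14 (right) idiom: opacity tracks the probability density
    directly, so the boundary of the atom fades out gradually."""
    return [min(1.0, d) for d in densities]

def iso_surface(densities, level):
    """Figure 14 (left) idiom: a single threshold decides what counts
    as 'inside' the surface, giving a hard, well-defined boundary."""
    return [d >= level for d in densities]

densities = [0.9, 0.6, 0.3, 0.1, 0.02]   # falling off away from the nucleus
fuzzy = opacity_render(densities)         # gradual fade
hard = iso_surface(densities, level=0.4)  # abrupt in/out
```

On this analogy, STM images behave like `iso_surface`: the tip trajectory supplies one sharp boundary and the fuzziness is discarded.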

The explanation is that scanning tunnelling microscopy is a tool rather

than an artistic medium. Surface scientists are not interested in pictures of atoms

as an end in themselves. If they were, then perhaps we would see a greater variety

of atomic scale images just as there are many styles of landscape art. The range of

styles we do see (for instance at the IBM Web site – see note 29, page 94) can be

accounted for with reference to the potential uses to which they can be put. Unless

and until there is a use for ‘fuzzy’ images of surfaces they will not be produced.

Both physicists and artists try to capture an ‘essence’ of a landscape. John

Constable, Paul Cézanne and Ansel Adams’ individual views of landscapes are

their own justification. In contrast, an individual view of an atomic landscape

must be explicitly justified – preferably, with reference to how it facilitates a

physical intervention.


Making Sense of Alien Landscapes: Pictures of Venus

Figure 15 Four views of the surface of Venus produced with radar data from the Magellan, Pioneer and Venera missions by David P. Anderson. The vertical scale in the pictures has been exaggerated by a factor of between 20 and 25. Top left, Maat Mons (the highest volcano on Venus at 8 km); top right, Maxwell Montes (the highest mountain range in the solar system at 11 km above the plains – image produced with Venera and Pioneer data); bottom left, Sappas Mons (a large double-headed shield volcano on the plains west of Maat Mons, which is visible on the horizon); bottom right, Ozza Mons and Maat Mons from the southwest.

Venus would not be a pleasant place for humans to visit: the atmospheric

pressure at the surface is 92 times that of Earth at sea-level; the mean surface

temperature is 480 °C (hot enough to melt lead); there is no liquid water; the

atmosphere consists mainly of carbon dioxide; and the planet is completely

coated in thick clouds of sulphuric acid.

The pictures of Venus above (Figure 15, page 134) do not give the casual

viewer any sense of what it might be like to stand at the surface of the planet or

even what it would look like. They have the same ‘look’ as landscape photographs

of the Western United States68. They carry connotations of the myth of the


68 I have shown these images to several groups of people and asked them to guess where or what they are. Usually, they guess that the pictures are photographs of Arizona, Utah, or a nearby state. Some are less specific and just guess, “a desert”. Some people think that Figure 15 (top left) looks like Scotland. Some people guess correctly that the images are not ordinary photographs but


American West and the pushing back of frontiers. They conform to Western

conceptions of ‘dramatic landscape’69.

The only actual photographs of the surface of Venus were transmitted to

Earth by the Venera landers (see Figure 17, page 136). The images in Figure 15

were constructed from radar data collected by the Magellan satellite in three

mapping cycles between 10 August 1990 and 14 September 199270.

As in the scanning tunnelling micrographs, the optical qualities

represented in the pictures of Venus produced with radar data (such as those in

Figure 15) are ‘underdetermined’. That is to say, many of the qualities exhibited

in the images are ‘arbitrary’ in the sense defined previously (page 59). The camera

position, the clouds, the lighting, the colours of the rock, the atmospheric

perspective effects71, the vertical scale, and several other elements represent

choices that the image maker is forced to make.

Figure 16 Two ‘photographs’ of Venus showing the thick clouds that obscure the planet’s surface from vision. Left: recorded by the Mariner 10 probe on 5 February 1974. Right: recorded by the Galileo probe on 10 February 1990.

contd… produced with three-dimensional computer graphics software. (The people who notice this are usually very familiar with commercial packages such as Bryce 4 and 3D Studio Max). A few people guess that the scene is extra-terrestrial (they usually plump for Mars). 69 This aspect of the images was pointed out by Martin Kemp in a talk on art and science at the British Association’s Annual Festival in Birmingham, September 1996. 70 For a summary of the Magellan mission see Magellan Mission at a Glance http://www.jpl.nasa.gov/magellan/fact.html (March 1999) and R. Stephen Saunders and Gordon H. Pettengill 1991. 71 Atmospheric perspective (as opposed to linear perspective) refers to the depth cues that result from the passage of light through an atmosphere. For instance, distant mountains look lighter and less distinct than near-by objects because there is more scattered light in the line of sight.


Figure 17 ‘Photographs’ of the surface of Venus made by Russian landers. Top: Venera 13 image at 7° 30′ S, 303° E (east of Phoebe Regio). Venera 13 survived on the surface for 2 hours, 7 minutes – long enough to obtain 14 images on 1 March 1982. This 170° colour panorama was produced using dark blue, green and red filters and has a resolution of 4 to 5 minutes of arc. Part of the spacecraft is at the bottom of the image. Flat rock slabs and soil are visible. The true colour is difficult to judge because the Venusian atmosphere filters out blue light (see text). The surface composition is similar to terrestrial basalt. On the ground in the foreground is a camera lens cover. Bottom: Venera 14 images at 13° 15′ S, 310° E (about 950 km southwest of Venera 13, near the eastern flank of Phoebe Regio on a basaltic plain). The landers also obtained a rock sample that was analysed by an x-ray fluorescence spectrometer and found to be similar to oceanic tholeiitic basalts.

At this point, we should be clear about our use of the word ‘arbitrary’. The

choices are arbitrary in the sense that there are many alternatives that would be

equally acceptable but they are, nevertheless, determined by the specific purpose

to which the images will be put. That is, to call aspects of an image ‘arbitrary’ is

not to suggest that there are no good reasons for making particular choices. Not

all choices are equally appropriate.

In the case of scientific images, the choices may be entirely conventional

or they may be motivated by physical considerations over and above what the

imaging technique itself can tell us about a physical situation. Effectively, this

means that different physical theories may be combined in a single image – each

relying on the other. For instance, the topography of Venus is determined by

interpreting altimetry data in conjunction with synthetic aperture radar data; the


topography is then visualised with reference to theories of geology (see below).

The resulting image is thus a synthesis of (at least) two physical theories but so

entwined are they in the representation that they have the effect of reinforcing one

another.

Another difference, then, between ‘upstream’ and ‘downstream’ contexts

(see page 91) is the extent to which the synthesis of distinct physical theories is

apparent in a single image. In a downstream context, the extent to which distinct

theories lend weight to each other is not clear. The synthesis is ‘naturalised’ and

not open to scrutiny. In an upstream context, however, the distinct physical

theories are ‘ordered’ – the viewer reads the image as primarily representing a

single physical interpretation of particular data and secondarily invoking other

theories or assumptions to make this interpretation visible.

Figure 18 Two alternative renderings of the Magellan data. Left – Maat Mons by JPL. The vertical scale has been exaggerated by a factor of 10. The view is from 634 km north of Maat Mons at an elevation of 3 km above the terrain72. Right – The Western Ishtar Terra (upland) region from the Venus Hypermap by UCLA Dept. of Earth and Planetary Sciences73.

We see this in the choice of colours in the images of Venus. In Figure 15,

the colours of the surface are arbitrary in the sense that the synthetic aperture

radar data contain no information about colour. Because the colour is

underdetermined, image makers have considerable flexibility when it comes to

making the choice. Some alternatives are presented in Figure 18 (page 137). The

image on the right uses a colour-map to give additional clues to the height of the

surface features – augmenting the clues that come from the use of linear

72 For more detail see Catalog Page for PIA00106 http://photojournal.jpl.nasa.gov/cgi-bin/PIAGenCatalog.pl?PIA00106 (March 2000) 73 For more detail see Western Ishtar Terra http://www.ess.ucla.edu/hypermap/highlands/ishtar.html.


perspective. It is apparent in almost all contexts that the colour is conventional

(i.e. that it is an arbitrary element of the representation). The image on the left was

produced by NASA’s Jet Propulsion Laboratory. It was one of a series of images

released to the press during the Magellan mission that were reproduced widely74.

The orange colour was motivated by the only ‘photographs’ of the surface of

Venus (Figure 17, page 136). However, the reasoning that led to the JPL team

choosing the orange colour for the surface had been called into question several

years before. In an influential paper, Andrew T. Young (1984) questioned

common inferences about colours in planetary geology75. David Anderson’s

images were, in part, a response to the decision by the JPL team to make the

surface look orange.

When the first JPL images of the surface of Venus began to appear a few

years later, we were astonished to see the same colour schemes and

imaging techniques, now in such disrepute, had been applied to the

Venusian data. This encouraged us to begin an independent programme

of visualisation. (David Phillip Anderson, 1999, personal

communication)

The colour scheme that David Anderson adopted was based on available

knowledge of the geology of Venus (which came, in part, from other experiments

on the Venera landers).

The geologists at the time were fairly sure that most of the surface of

Venus is in fact basalt, the most common rock in the solar system and, on

Earth at least, normally grey or brown… Our decision was to use a

generic brown basalt base colour, and modulate that tone with the

intensity of radar reflections from the planet surface. Thus where the

reflections are low intensity the tone is modulated toward black, and

where the reflections are high it is modulated toward tan. In this sense

our choice was as arbitrary as [JPL’s], with the caveat that no serious

planetary scientist believes the surface of Venus is orange. As long as we

are guessing, let us make it an educated guess based on what we know

74 See for instance Ellen R. Stofan, 1993. William Newcott, 1993; William J. Mitchell 1994: 10. Corey S. Powell, 1993 compares the JPL and David Phillip Anderson representations. There is a discussion of the vertical scale in the JPL images in Edward R. Tufte 1997: 23-25. 75 See in particular Andrew T. Young 1984 Appendix A: Colors on Io, pp 209-215.


about the visual appearance of basalt. (David Phillip Anderson, personal

communication, 1999)
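Anderson's qualitative description can be paraphrased as follows (the RGB values are hypothetical; only the black/basalt/tan mapping comes from his account):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

BLACK = (0, 0, 0)
BASALT_BROWN = (110, 85, 60)   # hypothetical 'generic brown basalt' tone
TAN = (210, 180, 140)          # hypothetical tan

def surface_colour(radar_intensity):
    """Modulate the basalt base tone with normalised radar reflection
    intensity: low reflections shade toward black, high toward tan."""
    if radar_intensity < 0.5:
        return lerp(BLACK, BASALT_BROWN, radar_intensity * 2)
    return lerp(BASALT_BROWN, TAN, (radar_intensity - 0.5) * 2)
```

The choice of base tone remains a guess, but an educated one: the data drive only the modulation, not the colour itself.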

We see then in the David Anderson images and the JPL images two

solutions to the problem of underdetermination in SAR/Altimetry imaging. The

choice is arbitrary in the sense that the data themselves do not indicate that one

solution is better than another, but there are, nevertheless, ‘scientific’ reasons for

choosing each solution76.

An even more basic problem than choosing colours for the surface is how

to interpret the data as a surface in the first place. This too involves ‘arbitrary

choices’ to a certain extent. Again, the choices and relative value of the

alternatives depend on the uses to which the images are put.

The Magellan mission produced several different data sets77. In particular,

two instruments produced data that were combined to form the images in both

Figure 15 and Figure 18. These were the synthetic aperture radar (SAR) and the

altimeter78. The resolution of the altimeter data varies from about 13 km to 31 km

(Pettengill et al 1991: 263)79. The resolution of the SAR data on the other hand

was up to 400 times finer – of the order of 75 m (Pettengill et al 1991: 262).

One way to visualise the data would be to create a three-dimensional

surface from the altimetry data and apply the SAR data as a ‘texture map’ to this

surface. However, as David Anderson points out,

The images produced by this technique show a smoothly rounded surface

covered with sharp high frequency detail. They don’t look particularly

real and impart a strong ‘computer generated’ sense to the landscape.

(David Phillip Anderson, 1999, personal communication).

Instead, both the JPL team and David Anderson’s team used the SAR data

to deform the topographic surface produced from the altimetry data. However,

while JPL used a technique known as ‘radar clinometry’ (which involves using

the length of radar shadows to determine the height of surface features), David

76 For more discussion of the choice of colour see Appendix 3: David Phillip Anderson (page 255). 77 For a detailed description of the Magellan data products see Gordon H. Pettengill, et al 1991. 78 For an explanation of radar imaging see What is Imaging Radar http://southport.jpl.nasa.gov/desc/imagingradarv3.html (March 2000) by JPL’s Tom Freeman. 79 The interpretation of the altimetry data is rather more involved than simply measuring the time of flight of each pulse as explained in Gordon H. Pettengill, et al 1991: 262-263.


Anderson’s team adopted a quite different approach. They used terrestrial data to

relate the ‘radar brightness’ of surface features to the ‘fractal dimension’ (in loose

terms, ‘roughness’) of the surface80. This had the advantage of creating fewer

artefacts and producing very satisfying images.

I knew we were on the right track when the first image emerged after

about 20 hours of CPU time. One of the planetary people came into the

lab and, looking over my shoulder declared, “Damn! That looks so

GEOLOGIC!” (David Phillip Anderson, 1999, personal communication).
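The roughness idea can be sketched schematically (the mapping from radar brightness to displacement amplitude is invented here for illustration; Anderson's team derived theirs from terrestrial data):

```python
import random

def roughen(altimetry, radar_brightness, scale=0.05, seed=1):
    """Perturb a smooth altimetry surface with small random displacements
    whose amplitude grows with radar brightness: a brighter radar return
    is read as rougher terrain."""
    rng = random.Random(seed)   # fixed seed keeps the sketch reproducible
    return [h + scale * b * rng.uniform(-1.0, 1.0)
            for h, b in zip(altimetry, radar_brightness)]

smooth = [1.0, 1.0, 1.0, 1.0]       # coarse altimetry: a level plain
brightness = [0.0, 0.0, 1.0, 1.0]   # radar says the right half is rough
terrain = roughen(smooth, brightness)
```

The displacement is small-scale texture, not topography: it deforms the surface itself rather than painting detail onto it, which is why the result reads as 'geologic' rather than 'computer generated'.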

We can explain the problem faced by David Anderson, JPL and other

visualisers in the following way: The data obtained by the Magellan probe were

‘super-sensory’ and this resulted in an underdetermination of secondary properties

– a gap between the data and human vision that needs to be bridged by lending the

data the optical properties of familiar objects. An additional problem was the fact

that the object represented is an alien landscape. Thus we are faced with a similar

problem to that faced by the radio astronomers discussed previously (page 79,

page 100). Oosterloo (1996) explains how important familiarity is in making

sense of an object such as a ‘radio data cube’. Eventually, Oosterloo’s team

overcame the lack of familiar shapes that could be recognised by rocking the

image from side to side to allow the user to discern the shape of the object through

motion parallax.

Motion parallax is also an important tool in the planetary visualiser’s toolkit,

and both JPL and David Anderson made ‘flythroughs’ – animations in which the

virtual camera’s position moves around the landscape81. However, what is

perhaps more important than simply discerning the shape of the surface is that

previous geological experience can be brought to bear on the data. For this reason, it is

valuable to imbue the landscape with familiar optical properties and adopt an

idiom that we are already ‘good at looking at’. When it comes to representing

landscapes and making sense of representations of landscapes we have

considerable cultural resources to draw on. As long as we can recognise the

features represented (the lighting, atmospheric effects, reflective properties, etc.)

80 For a more detailed discussion of these techniques see Appendix 3: David Phillip Anderson. 81 For examples of David Anderson’s flythroughs see SMU Geophysical Imaging Compound: VENUS, www.geology.smu.edu/~dpa-www/venus.html (Sept. 1999).


we can use them to make sense of the data. Making the images look like

photographs of the American West was a deliberate (and controlled) strategy

adopted by Anderson. When asked about the similarity he explained,

This is certainly no accident. I travel with my family every spring and

summer to New Mexico and Colorado, Arizona and Utah. I have flown

over this area repeatedly, and the huge vistas and breathtaking rocky

desert landscapes are burned deeply into my experience. It is not

surprising that it would emerge when asked to generate a 3D landscape.

(David Phillip Anderson, 1999, personal communication)

Anderson goes on to describe how he studied the work of landscape

photographers, including Ansel Adams, and sought advice from landscape

artists82. The motivation for making the images as powerful as possible came

from more than a single imperative. There was competition with JPL, the

particular imperatives of planetary geologists and an acute awareness that the

images would be viewed by a variety of audiences. The image makers were aware

from the outset that the images would be viewed on a variety of levels – different

contexts would lead to different ways of viewing the images.

We had in mind from the beginning that these images might be widely

published and that we were neither trained nor skilled in the craft of

visual representation. This coupled with the critically harsh environment

that is the warp and woof of science encouraged us to get as much

independent input as possible. (David Phillip Anderson, 1999, personal

communication)

The secondary qualities added by Anderson and his team were designed to

work on all levels but they could not control the apparent epistemological claims

of the images in downstream contexts. That is, they could not control how

downstream audiences would be encouraged to orient themselves to the various

types of information in the images – what the images actually say about Venus.

One arbitrary element that David Anderson employs, and the JPL team does

not, is atmospheric perspective. The early images lacked a sense of size or scale. It

was pointed out as part of the consultation exercise that an important clue to

distance scale is the amount of detail discernible in the foreground as opposed to

82 See Appendix 3: David Phillip Anderson (page 259).


the background. “The detail in the background of the images was just as crisp and

sharp as the foreground, hence the entire area appeared small and close-up.”

(David Phillip Anderson, 1999, personal communication). The decision to add

atmospheric effects was at first combined with a desire to present a ‘true’ picture

of Venus. Thus the team sought ‘realistic’ values for the refractive index,

diffusion, etc. (see Appendix 3: David Phillip Anderson, page 259). The results,

however, did not aid the reading of the landscape so instead they adopted a simple

atmospheric model of evenly distributed haze. “This is probably not an

unreasonable guess, and produces the desired sense of depth and scale without

unduly distorting the topography.” (David Phillip Anderson, 1999, personal

communication). The priority is that the principal data (the SAR/Altimetry data)

are visualised. References to atmospheric models of Venus are ‘secondary’ –

introduced with the express purpose of making the principal data accessible to

human senses. Whilst it would be nice to include additional information

about Venus in the representation, it is only justified to the extent that it fulfils this

primary purpose. As long as the introduced elements of the representation do not

‘lie’ (and even haze is “probably not an unreasonable guess”) anything can be

added and still satisfy the ‘will to truth’ of the visualiser.
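Anderson’s ‘evenly distributed haze’ corresponds to what computer graphics calls exponential fog: each point’s colour is blended towards a uniform haze colour by a factor that grows with its distance from the viewer, so distant terrain fades and the eye recovers a sense of scale. The following is a minimal sketch of that idea only — the function name, colour values and density constant are illustrative assumptions, not details of Anderson’s software:

```python
import math

def apply_haze(surface_rgb, haze_rgb, distance, density=0.001):
    """Blend a surface colour towards a uniform haze colour.

    The blend factor follows the standard exponential fog model:
    f = 1 - exp(-density * distance), so f is 0 at the viewer and
    approaches 1 far away, where haze dominates.
    """
    f = 1.0 - math.exp(-density * distance)
    return tuple(s * (1.0 - f) + h * f for s, h in zip(surface_rgb, haze_rgb))

# A nearby rock keeps most of its own colour; a distant ridge fades
# towards the haze colour, which is what restores depth and scale.
near = apply_haze((180.0, 90.0, 40.0), (220.0, 200.0, 170.0), distance=50)
far = apply_haze((180.0, 90.0, 40.0), (220.0, 200.0, 170.0), distance=3000)
```

The point of the sketch is how little is claimed about Venus itself: only one free parameter (the haze density) is introduced, which is why such a model can aid reading of the topography without, in Anderson’s terms, ‘unduly distorting’ it.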

We see a more extreme example in the case of the clouds. These are also

introduced to provide distance clues both from linear perspective (by representing

a horizontal plane) and by providing a contrast between foreground and

background. The presence of the clouds is justified by the way they assist us in

discerning the topography rather than representing a claim about what Venus

looks like. The distance clues work through our familiarity with terrestrial clouds.

However, in a downstream context the viewer is not empowered to distinguish

between what the representation is ‘about’ (topography/geology) and the elements

that have been added purely to assist the reading. Thus the pictures appear to be

telling us about clouds as much as they tell us about geology.

Our familiarity with clouds, etc. aids our reading of the landscape in more

than just a mechanical way. It is not simply the perspective clues we receive from

the clouds but also cultural notions with which clouds are imbued that operate to

assist our reading of the scene. We are greatly assisted in recognising significant

features of the topography of Venus by the strong connotations carried by the

borrowed visual elements. However, in an upstream context the connotations of


the American West, the ‘Frontier’, etc. carried by the representations are

recognised as being secondary to the shape of the surface. Topography is

recognised as what the image is ‘about’ and the association with cultural notions

recognised as just another arbitrary element of the representation – though the

utility of strong associations is appreciated as it provides us with a wider repertoire

of viewing strategies.

Yet again though, in a downstream context we are unable to decide what

significance to grant the connotations carried by various visual elements. Do the

pictures look like the American West because Venus is actually like the American

West or do they look that way because the analogy helps us to discern the shape?

If they arise spontaneously from the denoted object itself then the connotations

would provide information about Venus. On the other hand, if (as is the case) they

are purely arbitrary and serve only to assist the viewer in making sense of a shape

then the associations between the connotations and Venus itself are misleading.

In an upstream context, connotations with which scientific images are

deliberately imbued serve to direct us to a simple, primary quality of the denoted

object. The further downstream we go, the less interest we have in the particular

primary quality and the less we know about how the image is constructed

rhetorically. Thus, in downstream contexts the connotations carried by the visual

elements seem to be the information that we as viewers are supposed to be

looking for. When we look at pictures of Venus in National Geographic

(Newcott 1993) we are not doing geology and we are not particularly interested in

recognising geological features. We are much keener to seek in or from the

pictures a sense of awe. We read the images as ‘reports from the frontier’ rather

than tools for doing esoteric science. Thus the principal information that the

images seem to carry is the connotations carried by the representation, and the

topography denoted seems secondary to these (an inversion of the case of

upstream contexts).

Scientific images in downstream contexts can operate at the level of

‘myth’ (see page 57) but this is not to suggest that David Anderson’s pictures of

Venus or indeed any other scientific images in downstream contexts are inherently

disingenuous. Image makers do not have much choice in the matter anyway. They

are constrained both by the visual culture in which they operate and by the data

the images denote. They are, however, ingenious when it comes to bridging the


‘super-sensory gap’ between scientific instruments and human vision. The last

word on the interaction of culture and science – how conceptions of ‘dramatic’

landscapes influence the sense we make of data – is left to David Anderson:

We’re sort of stuck with it, aren’t we? We have to guess at what we don’t

know, and base those guesses on what we do know. As is said, data

constrains creativity. I’m not so willing to separate out our understanding

of ourselves and the universe into science and art, or philosophy and

religion, or dry calculation and high drama. They seem to all flow

together, and the boundaries are very fluid and ill-defined. Most of the

interesting stuff appears to happen on the boundaries. Computerised

scientific visualisation strikes me as one of those boundary phenomena,

requiring a crossbreeding of raw science with computer imaging and

software technology (itself a poorly understood art form), and with an

artistic and aesthetic sensibility. (David Phillip Anderson, 1999, personal

communication)

Conclusion

With reference to the ‘interpreted image’ and by employing John Locke’s

distinction between primary and secondary qualities we have managed to explain

how the meaning of scientific images can be radically different in upstream and

downstream contexts. By bringing the ‘arbitrary visual elements’ of images to the

fore, the distinction gives us an analytical handle on visual communication in

science. It also allows us to recognise those aspects of images that are likely to be

understood differently upstream than downstream. Importantly, the discussion of

the difference between ‘popular’ and ‘scientific’ images has made no reference to

‘translation’ or ‘degradation’ or ‘accuracy’. The model of communication that

seems to be appropriate in the case of the popularisation of scientific images is not

one that assumes that the correct interpretation somehow gets lost in translation.

Instead, images are approached as polysemic texts whose meaning depends on the

function they are asked to perform. The significance of elements within scientific

images depends on the context of the image and the degree to which various

audiences are empowered to orient themselves to the various elements (especially

whether they are empowered to distinguish arbitrary from independent elements).


The examples looked at have not been particularly controversial ones but

the analysis indicates how ideology can operate in popularisation – how scientific

images can have the effect of naturalising social relations. In problematic areas of

visual depiction in science83, such an analysis can be particularly informative.

83 See, for example, Sarah Kember’s (1995) account of issues surrounding surveillance and classification in relation to medical imaging.


4 The Boundaries of Science: Polemical and Philosophical Approaches to the Demarcation Problem and the Rhetoric of Popular Science

This chapter develops the concept of negotiation outlined in the

introduction (page 38). There is a tendency to examine popular science texts in

isolation from the debates in which they intervene and the interests involved and

thus we do not recognise their ‘dialogical’ nature. Nevertheless, in addition to all

their other functions, popular science texts serve as a forum for negotiation.

In the introduction, two contrasting ways of looking at popular science

texts were distinguished: We can look at texts individually and draw conclusions

about how well they achieve their goals or we can try to understand the contexts

within which they acquire meaning. The former strategy has a longer pedigree and

a reasonably well-developed critical language associated with it. The latter

strategy is much newer and less well developed (certainly as far as popular

science is concerned)1. Each raises very different questions.

As we have seen in the case of scientific images, shifting scientific texts

‘downstream’ allows them to take on new meanings. This is often conceived as a

‘problem of translation’. Steven Shapin for instance makes the following points

about science communication since the mid-late nineteenth century,

It may be that our public language contains the ineradicable residues of

the teleological, anthropocentric and anthropomorphic cosmology in

which it was shaped. To the extent that scientific statements are couched

in, or even appear to be couched in, ordinary public language, problems

may be endemic. On the one hand, scientists may decide that certain

scientific conceptions simply cannot be expressed in the public language.

1 For an account of media studies of popular science since the war, see Dornan 1999/1990.


On the other hand, scientists’ endeavours to use that public language may

involve metaphors and analogies whose resonances they cannot expect to

hold in place and control. In either case, the differentiation of scientific

and public culture has precipitated serious problems of translation whose

nature is largely undefined and whose remedies are unclear... Are all

attempts to ‘popularise’ science doomed to failure or fraud? Are modern

science and its public divided by the illusion that they possess a common

language? (Shapin 1989: 997)

For an example of this type of distinction between public and scientific

accounts we can look to the concept of Darwinian evolution. Gillian Beer points

out that a teleological account of evolution remains pervasively popular even

though this conflicts with the very essence of Darwin’s theory (Beer 1985).

Shapin’s account, as we can see from the extract above, explains this as an

atavistic element in public culture: public language was left behind during the

great fracture of the ‘common cultural context’ (see below, page 179). While this

explanation is compelling, it is not entirely convincing. Shapin’s account assumes

that the modern categories of science and the public are now fairly coherent,

noting that we have little difficulty in assigning knowledge to one or other of

them. But while the latter point is largely true, the categories are not so coherent

that we can speak of scientists and the public possessing different languages.

Rather, the stream metaphor (Hilgartner 1990) discussed in the introduction

(page 34 ff) and developed in chapter 3 (page 91) is again appropriate. Different

ways of distinguishing science and the public (different boundaries) do not create

different languages but different contexts in which the same language takes on

different meanings. Again, we have seen this already in the case of scientific

images: The visual idioms employed in scientific images are the same (familiar)

idioms employed in popular culture but their significance is different.

When we consider the way that (popular and professional) scientific texts

are employed in boundary disputes (and later, when we consider how the

boundaries themselves are employed in literary texts) Lewenstein’s web model of

communication in science is appropriate (see page 36). This is because of the way

it refuses to privilege one genre over another – there is no intrinsic order in

scientific contexts in the web model. Instead, the importance of a context is

fortuitous – determined by the nature and circumstances of the dispute rather than


any metaphysical concept such as the quality of knowledge available in a given

context or the access it provides to the ‘real world’.

This view fits well with the boundary disputes discussed in this chapter.

But the fact that Hilgartner’s stream metaphor with its definition in terms of

narrow and broad audience appeal (see page 34) is also necessary in

understanding boundary disputes underlines the importance of flexibility when

employing models of communication in science. ‘Popular’ (downstream) contexts

tend to be the most important forums for intervening in the negotiation process

because they are accessible to a broader audience. This explains why the principal

sites for boundary work (those in the centre of the web) are often popular

contexts. This chapter, then, combines a view of distinct categories (science and

non-science) with the idea of a ‘spectrum’ and a ‘web’ of contexts. It concentrates

on the social origin of the categories but also addresses their epistemological

status.

Though understanding boundaries requires considerable theoretical

flexibility, looking at texts from the perspective of the boundaries they address

and the boundary work they perform brings a sense of coherence to an otherwise

amorphous collection of genres2. As we have already seen, there are many ways

in which science is manifest in popular culture and these are too frequently

conflated. Science on television, for instance, is treated as a single topic whether it

is cartoons or lectures that are being discussed3. Conclusions drawn from such

analyses are generally unsatisfactory. The concept of boundary work in popular

texts offers a coherent way (the only coherent way discovered during extensive

research of the field) to speak of popular science as a single topic rather than a

collection of disparate interventions. ‘Boundary work’ allows us to compare, say,

cartoons with lectures by Richard Feynman. The conception of boundaries

outlined below also provides a new handle on issues that have already attracted

considerable attention such as ‘two cultures’ debates (see page 212) and questions

2 Baudouin Jurdant (1993) investigates the status of popular science texts as a literary genre by comparing popular science with science fiction and autobiography.

3 This was particularly true at a discussion at the British Association Annual Festival in Birmingham in September 1996 following a lecture on science and television by Steven Rose. All the criticism that was directed towards television science centred on ‘accuracy’ and pedagogy – whatever programme was being discussed.


of science policy. For instance, rather than looking at science and art as two

monolithic entities the relation between them is understood with reference to the

‘intermediate entities’ that depend on them both. (These can be as various as paint

technology and the aesthetics of electron micrographs.) Rather than a single

boundary between the two we find several. Boundary work is understood as a

‘negotiation’ (see page 38) over the ownership and rules that govern these

intermediate entities.

This conception of boundaries challenges two assumptions that underlie

many accounts of science and, before discussing what is meant by ‘boundary

work’, we must address these. The first assumption is that there are a priori

criteria for distinguishing science from non-science. That is, we assume that there

is some essential quality (or a family of qualities) that all things scientific share.

The problem of demarcating science is usually approached as the analytical

problem of finding these essential qualities. It is not usually considered as a

practical problem for scientists. But whether or not there are essential

characteristics that ultimately distinguish proper scientists from charlatans, this

does not mean that there will be no struggle over definitions. Even if there is a

‘true’ or ‘natural’ boundary, there is no way of discursively demarcating science

that will be convincing in all circumstances. However necessary essential

characteristics may be to science, they are not sufficient to explain scientists’

cultural authority.

The second assumption is that the boundaries are not disputed. We believe

this because at any moment in history the boundaries seem unproblematic. We

feel we know what science is in the sense that we could point to anything, any

practice or piece of knowledge and say whether it was scientific or not. We might

not feel empowered to judge its validity but, for most things, we believe we can

identify which category it belongs in without difficulty. For instance, phrenology

is uncontroversially placed in the ‘non-science’ category today whereas anatomy

and physiology are undoubtedly sciences4. Astrology is clearly non-science while

astronomy is equally clearly a science. Cloning is science but deciding whether

we should clone humans is ethics or politics or maybe theology but not science.

4 The modern clarity about the status of phrenology is the result of significant ‘boundary work’ often in the context of popular science. See Cooter 1984 and Gieryn 1983: 787-789.


For most boundaries, making these sorts of decisions is so easy that it is hard to

believe that there is any ‘work’ or negotiation going into maintaining them. They

seem obvious and timeless and natural. This is itself a moot point as, in general,

the boundaries between science and non-science are not natural but have instead

been ‘naturalised’. Science texts can operate at the level of ‘myth’ in the sense

described by Roland Barthes (1973) – both disguising and disguised by the work

that goes into maintaining boundaries (see also note 3, page 59).

There are two main claims made about boundaries here. The first is that

the boundaries of science are not fixed. In arguing this point, this account draws

on the historical and sociological accounts of Steven Shapin (1989) and Thomas

F. Gieryn (1983, 1994, 1999). The second is that popular science is the most

important site for contesting the boundaries. This chapter describes how

boundaries operate in popular science. The following chapter develops the

framework outlined here with reference to specific examples.

In the first section, I argue that the boundaries of science are not fixed by

a priori demarcation criteria. I distinguish questions about ‘science’ from

questions about ‘what science is taken to be’ and discuss the areas where each

category is relevant. I also distinguish questions about the social authority of

scientists from questions about the epistemological status of scientific knowledge.

I conclude that the question ‘what is science?’ can be invoked in two ways. On

one hand, it can be employed ‘polemically’. That is, its purpose is to settle a

dispute or deny alternative accounts any legitimacy by providing an unequivocal

way of assigning knowledge and expertise to one category or another. The notion

of a ‘pithy definition of science’ is introduced to stand for the rhetorical strategy

of invoking the philosophy of science polemically. On the other hand the

question, ‘what is science?’ can be invoked ‘philosophically’. In this case, its

purpose is to open up a new set of questions about science and to develop a

framework for exploring these.

Questions about the ‘true’ nature of the boundaries belong to the realm of

metaphysics. The account of popularisation in this chapter does not support or

refute any particular epistemological position. Here we concentrate on changing

patterns of authority (see page 156) and deal with the topic on a pragmatic level.

The question is not ‘what is science?’ but ‘given that people disagree about what

science is, how is the answer decided and what is the source of the conflict in the


first place?’ Such an analysis has application within several metaphysical

frameworks.

Following the discussion of the philosophy of science and how it is used in

boundary work, I go on to develop the concept of boundary work as a critical tool

for the analysis of popular science texts. Thus, I follow Thomas Gieryn’s

prescription for science and technology studies – to get constructivism “out of the

lab”,

If science studies has now convinced everybody that scientific facts are

only contingently credible and claims about nature are only as good as

their local performance, the task remains to demonstrate the similarly

constructed nature of the cultural categories that people in society use to

interpret and evaluate those facts and claims. (Gieryn 1994: 440)

However, Gieryn’s own conception of boundaries (which is based firmly

on a cartographic metaphor) fails to provide a full account of the types of

boundary work observed in popular (or ‘downstream’) contexts. Thus, an

alternative conception is introduced that places emphasis on the ‘dependent

intermediate entities’ involved. That is, rather than conceiving of a simple

boundary between, say, science and politics, we address the various issues or

points of contact between the two. For instance, ‘radiological protection’ depends

on both politics and physics. The task then is to examine how authority and

responsibility for these dependent entities and their relation to the ‘principal

entities’ (physics and politics in the case of radiological protection) are negotiated

in a popular context by the various interested groups.

Scientists and non-scientists are often discussed as if they were two

identifiable groups with clearly articulated interests but boundary work is not a

simple power struggle between ‘scientists’ on the one hand and ‘non-scientists’ on

the other. Instead, various different groups with various different interests have

engaged at various different boundaries between science and non-science.

Within a discussion of a definition of the term ‘boundary’, four types of

‘boundary work’ are identified. These are: 1) establishing a new boundary;

2) moving a boundary; 3) changing the characteristics of a boundary; and

4) changing the rules that govern agreed intersections (page 173). Thomas Gieryn

distinguishes types of boundary work according to their utility and thus identifies

three ‘genres’ of boundary work: expulsion, expansion and protection of


autonomy (Gieryn 1999: 15-17). However, when we come to examine the way

boundaries between science and non-science are employed in popular culture,

many other genres of boundary work emerge. Or rather, the ways that boundaries

manifest themselves are so numerous that identifying distinct genres no longer

aids analysis.

One point that emerges from an analysis of downstream texts in particular

is that boundary work does not always involve power. In Gut Symmetries by

Jeanette Winterson and Hapgood by Tom Stoppard, the boundaries between

science and non-science take on a dramatic role (see pages 218ff and 230ff). Both

authors challenge their audience’s preconceptions about the categories at the same

time as employing ‘commonplaces’ about the boundaries between science and

non-science to dramatic effect. The use of boundaries is clearer in the television

series Star Trek. This established a boundary between science and emotion across

which moral arguments could be ‘reflected’. In particular Doctor McCoy could

‘ironically reinterpret’ Mr Spock’s ‘scientific’ response to a moral crisis. This

would make Captain Kirk’s moral decision even bolder and more emotionally

charged (see page 219).

Another downstream example of science in popular culture is Inside

Information, an exhibition of micrographs at the Wellcome Trust’s Two 10

gallery (see page 212). The question asked of this exhibition is whether we can

decide if it is an ‘appropriation’ of science or a ‘popularisation’. The distinction

between ‘legitimate popularisation’ and other public science is implicit in the

dominant view of popularisation (see page 7). However, we find in the case of

Inside Information that the distinction is meaningless. The ambiguity brings into

question the utility of the dominant view. (It serves no analytical purpose but the

distinction itself helps to maintain boundaries between those who can legitimately

call on scientific authority and those who cannot.) A more powerful way to begin

to understand the distinction between the micrographs in the exhibition and in

upstream popularisations is to examine, in each case, the various boundaries that

they impact on. This gives us insight into the role of science in public culture that

is unavailable to us if we insist on distinguishing popularisations from

appropriations. We can look for which boundaries are invoked, which direction

they are moved and how the existence of a boundary is used creatively.


Analysis of boundary work in more upstream popularisations reveals

functions of popular science that would otherwise be obscure. The dominant view

of popularisation understands popular texts as concessions by scientists to non-

scientists. Communication is assumed to flow in one direction –

from specialists to lay public – and the goal of communication is education. But

popularisations can also serve an important role amongst specialists. The

necessary strictness of formal communication in science leaves little room to

express impressions of a field or to speculate without strong justification and

demonstrable relevance. The history of quantum mechanics, for instance, is littered

with letters Einstein wrote to Schrödinger, comments Bohr made to Heisenberg

and long conversations. For ‘on the record’ rumination of this sort, the main

forum is popular science books5.

The account of The Ghost in the Atom below (page 197ff) concentrates on

three issues. The first issue concerns how controversy over the interpretation of

quantum mechanics is dealt with. We find that boundaries provide an axis about

which physicists can ‘ironically reinterpret’ rival positions (see page 203). In

addition, we see how boundaries allow controversy over quantum mechanics to be

incorporated into the ‘narrative of nature’ (see page 57) constructed by the book.

This strategy is contrasted with other accounts of quantum mechanics that attempt

to defuse the controversy by casting it in social terms.

The second issue concerns how physicists ‘police’ the boundaries of

quantum mechanics and establish who has authority in relation to dependent

entities such as ‘consciousness’. (Whether it is psychologists, philosophers,

theologians or physicists). In The Ghost in the Atom questions about mind,

consciousness, epistemology, etc. are all cast as physical questions. A possible

alternative narrative presents physicists pushing beyond their competence into

areas in which other researchers have authority. In this story the big questions do

not become physical questions and physicists have to defer to the experts in other

fields to make progress. To construct the former narrative careful use has to be

made of those elements of explanations that come from ‘outside’ – the metaphors

and appeals to authority.

5 For an account of the use of popular science in debates about cosmology see Jane Gregory 1998.


The third issue concerns the way boundaries are used to construct a role

for physicists as mediators of quantum reality. It is suggested that the role

established in The Ghost in the Atom is like that of a ‘shaman’ (see page 209).

The perception of boundaries can also help to explain how popular science

texts are received by different audiences. Robert Oppenheimer’s 1953 Reith

Lectures were not well received (according to Freeman Dyson, 1989: vii). A

possible explanation for this is that Oppenheimer failed to address his audience’s

conception of the boundaries between physics and politics (see page 190ff).

The account concludes with a discussion of the utility of this conception of

the boundaries of science with respect to: 1) defining popular science;

2) analysing popular science texts; and 3) questions about the public

understanding of science.

The Boundaries are not Fixed

The question ‘what is science?’ is not merely philosophical; it has

concrete, material implications. Overwhelmingly it is these that provide the

motivation for addressing the question in popular texts. To understand

popularisation we need to analyse the ways the question itself is used. As far as

the answer to the question is concerned, the category ‘science’ is revealed to be

historically situated, evolving and without essence. The word science is

re-invented every time it is invoked.

The negotiation over the limits of science and the privileges and

responsibilities of scientists is driven by individual and group interests that are

themselves understood and made meaningful within an ideological context. We

can understand the role of popularisations in the negotiation with the following

analogy: Popularisations are like individual speeches (or occasionally like ironic

asides) in a never-ending debate. Popularisations are not the only voices in this

debate. Any representation of science or scientists can play a role. Any

representation of an issue where science impacts on society implicitly draws a line

between science and non-science.

The Philosophy of Science

Questions about science as a category usually fall within the disciplinary

boundaries of the philosophy of science. The claim that the boundaries of science


are not stable is thus a philosophical intervention. However, the additional claim

that the boundaries result from negotiation within an ideological context seems to

imply a competing disciplinary approach. Clearly there are boundary issues to

consider ahead of those concerning the demarcation of science. It is as well,

therefore, to orient the discussion with respect to recent philosophy of science

before the claim is addressed in any detail.

A claim that the boundaries of science are determined through negotiation

in popular science texts could be described as ‘deflationist’. A deflationist account

is one that aims to dissolve traditional metaphysical and epistemological problems

in science into questions about actual practice in a social context. Because,

deflationists argue, we cannot find anything – any one thing – that makes

scientific truths true we are confined to studying the reasons that scientists have

for making some claims over others (Sismondo 1997: 219-220). Opposing

accounts that do posit essential qualities that distinguish scientific knowledge

from other knowledge are dubbed ‘essentialist’. The deflationist tendency has

become widespread in recent years though it remains a source of controversy. The

principal reason it remains controversial is that emphasising the fortuitous, local,

particular aspects of science seems to challenge its claim to universally objective

knowledge.

In seeking answers to the question ‘what is science?’ in cultural

representations of science, I am ipso facto rejecting essentialist approaches to the

question. Much recent work in history, philosophy and social studies of science

shows that such a move is justified by the sheer heterogeneity of science

(Sismondo 1997: 221). However, the approach adopted here is not aimed at

arbitrating on epistemological questions. The following discussion is consistent

with many metaphysical positions. This is because it does not address the validity

of scientific knowledge, just the way it is employed in discourse.

Whatever the metaphysical foundations of science, the way it is invoked in

popular culture is not determined by a priori criteria; the profuse meanings of the

word ‘science’ are not prescribed by metaphysics. If we are interested in the

cultural functions of science then it is as well to adopt a functionalist approach to

its definition and understand ‘science’ from the perspective of power within

society. Thus I concentrate on the cultural processes that lead to conceptions of

science becoming dominant. However, this does not obviate philosophical


approaches to the question ‘what is science?’ Indeed, part of the motivation for

this move is to permit philosophical inquiry into otherwise complex and

intractable knowledge.

The conception of dynamic boundaries between science and non-science is

not in itself an adequate explanation of scientific practice. I follow Sismondo

(1997: 220) in claiming that metaphysics, or something very much like it, still has

a place in the study of science when we adopt the deflationist attitude. However,

by adopting this conception of boundaries we can show that much of the effort

that goes into analytic descriptions of the category ‘science’ is misplaced. Our

principal motivation for doing so here is to elucidate the relations of popular

science texts to their context.

The following discussion is philosophical only in the way it seeks to

distinguish questions about science from questions about ‘what science is taken to

be’. For many questions, ‘what science is taken to be’ is all that is important.

Speaking exclusively about ‘what science should be’ or ‘what science must be’

imposes an enormous burden on a researcher attempting to discover how science

operates in a particular instance. (The main problem is that a comprehensive

conception of science must be articulated and justified in each place it is invoked.

Much effort goes into demonstrating that a particular conception of science is

relevant before it can then be used to illuminate a particular issue or text.)

Ultimately, this account is silent on the philosophy of scientific knowledge except

in outlining the limits of its applicability and to discuss the way it is used in

popularisations.

Especially in the wake of the ‘Science Wars’, the exact nature of the

claims being made needs to be spelled out very clearly – so I reiterate: To argue

that the authority of scientists is based on the social function of science is not to

argue that scientists do not have privileged access to reliable knowledge.

Although I argue that what counts as scientific knowledge is often decided in

popular forums, the universality of any particular element of knowledge is not a

product of the attitude adopted towards it. The case made below is simply that

knowledge is not the same as authority or, to put it another way, authority is not a

simple function of knowledge. To fully understand science we need to understand

authority as a social relation. This means we have to ‘bracket off’ questions about


the validity of scientific knowledge. (This point is further clarified in the

conclusion, page 235).

Essentialist Accounts of Science

The category ‘science’ has been subject to several attempts at essentialist

definition. Here I summarise three influential examples from the philosophy,

sociology and history of science6. Although each has much to commend it, they

fail to address the question of the demarcation of science as a practical problem

for scientists. As we shall see, there is no unequivocal way to decide what

constitutes science. Below I distinguish a philosophical use of the question ‘what

is science?’ from a polemical use of it. Both motivations for invoking the question

inform the following examples.

Perhaps the most celebrated attempt at distinguishing science from the rest

came from Karl Popper (1983: 118-133)7: to be properly scientific, a claim must

be falsifiable. The practice that distinguishes science from non-science is making

bold conjectures together with the readiness to think up tests and refutations of

those conjectures. This is a good starting point for exploring scientific knowledge.

It is also a useful heuristic for those cases where we do find it difficult to assign

knowledge to one category or another.

Another approach is that of Merton (1973: 267-278)8 who was concerned

to understand how science could be performed so successfully despite the fact that

it was conducted by imperfect human actors. His conception of science can be

summarised thus:

The institutional goal of science is the extension of certified knowledge.

The technical methods employed toward this end provide the relevant

definition of knowledge: empirically confirmed and logically consistent

statements of regularities (which are, in effect, predictions). The

institutional imperatives (mores) derive from the goal and the methods.

[...] The mores of science possess a methodologic rationale but they are

binding not only because they are procedurally efficient, but because they

6 These examples are discussed in a similar context but in greater depth in Gieryn 1994: 394-404.

7 Originally published as Karl R. Popper, ‘Replies to My Critics’ § 5-8, in P. A. Schilpp (Ed.) The Philosophy of Karl Popper, Open Court, 1974.

8 Originally published as ‘Science and Technology in a Democratic Order’, Journal of Legal and Political Science vol. 1 (Oct. 1942): 111-115.


are believed to be right and good. They are moral as well as technical

prescriptions.

Four sets of institutional imperatives – universalism, communism,

disinterestedness, organised scepticism – are taken to comprise the ethos

of modern science. (Merton 1973: 270)

These four ‘norms’ are the characteristic features of Merton’s conception

of science. In short, proper science is characterised by a propensity to judge

claims on their merits and not on where they emanate from (universalism); to

distribute results freely (communism9); to conduct research without expecting to

gain materially from the result (disinterestedness); and to suspend judgement until

facts overwhelmingly point to a conclusion (organised scepticism).

Today, the usual starting point for thinking about science is Thomas

Kuhn’s notion of revolutionary episodes interspersed by periods of ‘normal

science’ (Kuhn 1970). This stresses the social process by which common values

and research programmes are established within the scientific community. Kuhn’s

conception, however, accounts for the authority and social responsibilities of

scientists in terms of the success of research programmes or ‘paradigms’. Kuhn

does not offer us a priori demarcation criteria for science but gives an account of

how the coherence of scientific fields is established. The boundaries of science are

dynamic and determined socially but the negotiation remains within the science

community. This conception of science is not essentialist in the way Popper’s and

Merton’s are because we cannot find a set of criteria that are shared by all

scientific practices at all moments in history. Nevertheless, Gieryn

(1994: 402-403) identifies the following extract from Kuhn’s Structure of

Scientific Revolutions as an essentialist demarcation principle:

Work under the paradigm can be conducted in no other way, and to

desert the paradigm is to cease practising the science it defines.

(Kuhn 1970: 34)

Though this conception of science is not essentialist in the way that

Popper’s is, both Kuhn and Popper have the same goal: to provide a theoretical

9 Because the word communism is culturally loaded – ever more so since the end of the Cold War – the more neutral ‘communality’ is often substituted.


framework for unequivocally distinguishing science from non-science. This could

be described as an essentialist project.

In Popper, Merton and Kuhn’s accounts we have three coherent

descriptions of the category ‘science’. However, they only partially help us to

understand how we actually come to attribute knowledge to one category or

another or how we decide where the authority that comes with scientific expertise

begins and ends. This brings us to the question itself. Why is the question

invoked? In all three examples there are two simultaneous motivations for raising

the question and seeking an essentialist way of distinguishing science from

non-science.

The Question ‘What is Science?’: Polemical and Philosophical Uses

In one sense, Popper, Merton and Kuhn’s conceptions of science are

prompted by philosophical questions rather than practical ones. They are not

designed to be the final word in the philosophy, sociology or history of science.

The criteria for proper science they propose are designed to encourage progress in

a limited area of (what has come to be known as) science studies. Popper is quite

explicit about the impotence of ‘final words’ on science:

If I define ‘science’ by my criterion of demarcation (I admit that this is

more or less what I am doing) then anybody could propose another

definition, such as ‘science is the sum total of true statements’. A

discussion of the merits of such definitions can be pretty pointless.

(Popper 1983: 123)

There are many examples of ‘pithy’ definitions such as ‘science is the sum

total of true statements’. This is an example from Edward O. Wilson:

Science, to put its warrant as concisely as possible, is the organised

systematic enterprise that gathers knowledge about the world and

condenses the knowledge into testable laws and principles.

(Wilson 1998a: 53; Quoted in Wilson 1998b: 2048)

Although, as Popper argues, discussing the merits of competing definitions

of science is pointless in a philosophical context, pithy definitions can be

mobilised to defend or capture territory in a boundary dispute. A ‘pithy’ definition

of science is one that appears to provide a simple method of assigning knowledge

unproblematically to the categories ‘science’ and ‘non-science’. It allows a


competing conception of science to be rejected on logical grounds, which is more

decisive than comparing the strengths and weaknesses of two different views. It is

not the definitions themselves that are significant so much as the belief that

science is disposed to pithy definition. Popper, Merton and Kuhn’s philosophies

of science reject this idea. Their accounts raise more questions about science than

they answer. They are well developed and comprehensive. Pithy definitions by

contrast are not designed to raise questions. Instead they legitimise one class of

questions at the expense of others.

Philosophies of science are invoked in different ways depending on what

is at stake. In particular, a philosophy of science can be invoked in two main

ways: The first way encourages debate – its purpose is to raise questions and open

new avenues of research. The second way closes a debate down – its purpose is to

provide a formulaic mechanism for settling a dispute. Thus, we can distinguish a

‘philosophical’ from a ‘polemical’ use of the question, ‘what is science?’

The differences between these two are very subtle indeed. We can see this

in the example of Popper’s philosophy of science. One of the problems that

provoked the inception of his falsification criterion was the status of both

Marxism and psychoanalysis. In the early twentieth century, each of these

disciplines was struggling to establish its credentials as a science. In each case,

Popper disputed this designation. At least part of his motivation in seeking

a priori criteria for demarcation was to deny these two groups of researchers the

special status enjoyed by scientists (Laudan 1983). That is, Popper himself was

‘policing’ the boundaries of science – expelling groups he felt were not deserving

of the cultural authority that came with science and thus protecting the status of

‘legitimate’ scientists from erosion. His philosophy of science was inspired by

twin motives: to provide a mechanism for policing boundaries and to raise general

questions about science.

Today, the falsification criterion is still invoked as a mechanism for

distinguishing science or non-science, despite broad agreement in the

philosophical and scientific communities that it is not up to the job (i.e. that

science is more complicated than that). A recent example of its use is found in a

draft statement from the American Physical Society that attempts to define science


for the public in 200 words10. The statement is motivated in large part by opinion

polls showing that the public belief in rivals to mainstream science such as faith

healing and astrology is growing. The governing council of the American Physical

Society rejected the proposed statement. Part of the reason for this was the fear

that the public would misunderstand references to falsifiability. (Personal

communication with principal author, Tom Moss. See also Macilwain 1998).

Edward O. Wilson’s article in Science from which the extract above

(page 159) was taken is an explicit challenge to the current boundaries of science:

[S]cholars have traditionally drawn sharp distinctions between the great

branches of learning, and particularly between the natural sciences as

opposed to the social sciences and humanities. The latter dividing line,

roughly demarcating the scientific and literary cultures, has been

considered an epistemological discontinuity, a permanent difference in

ways of knowing. But now growing evidence exists that the boundary is

not a line at all, but a broad, mostly unexplored domain of causally linked

phenomena awaiting co-operative exploration from both sides.

(Wilson 1998b: 2048)

The rhetoric of the article follows a familiar (and perfectly legitimate)

pattern: An essentialist conception of science is invoked polemically rather than

philosophically (this naturalises the terms of reference). Science is actually

defined by the way Wilson then implicitly draws the boundaries between science

and non-science. The article goes on to outline areas where, “natural sciences

have entered the borderland”. These include cognitive neuroscience, behavioural

genetics, environmental science and the field that Wilson himself helped to

establish: sociobiology. In this outline it becomes clear that “co-operative

exploration from both sides” means social scientists accepting that the foundation

of their subject is the same as the foundation of natural sciences: genetics is the

foundation of mind and culture so this is where explanations of cultural activity

should begin.

10 Rejected APS statement supplied by the principal author, Tom Moss, National Academy of Sciences, Washington DC, USA: What is Science? Scientific results and theories have created new stores of knowledge, stirred public imagination, and brought great benefit to individual life and human civilisation. The American Physical Society wishes to affirm the crucial core values and principles of science.
• Science is a disciplined quest to understand nature in all its aspects. It always demands testing of its scientific knowledge against empirical results. It encourages invention and improvement of reliable and valid methods for such testing.
• Science demands open and complete exchange of ideas and data. Science cannot advance without it. Part of this exchange is to insist on reproducing, modifying, or falsifying results by independent observers.
• Science demands an attitude of scepticism about its own tenets. It acknowledges that long-held beliefs may be proved wrong and demands experimentation that may prove its theories false. This principle has built into science a mechanism for self-correction that is the foundation of its credibility.
Scientists value other, complementary approaches to and methods of understanding nature. They do ask that if these alternatives are to be called “scientific”, they adhere to the principles outlined above.

Because mind and culture are material processes, there is every reason to

suppose, and none compelling enough to deny, that the social sciences

and humanities will be strengthened by assimilation of the borderland

disciplines. For, however tortuous the unfolding of the causal links

among genes, mind and culture, and however sensitive they are to the

caprice of historical circumstance, the links form an unbreakable

webwork, and human understanding will be better off to the extent that

these links are explored. (Wilson 1998b: 2049)

From this it seems that, at the very least, explanations in social sciences

and humanities should fit in with and make reference to scientific explanations of

more basic phenomena. Out of context like this, Wilson’s argument sounds

distinctly imperialistic. As a whole, the article is much less controversial than it

might seem given the claims he makes about social sciences and the humanities.

The main reason Wilson can make such strong claims is that he mobilises his

conception of science ‘polemically’ rather than ‘philosophically’ (in the sense I

have described, page 160).

The concept of ‘consilience’ that Edward O. Wilson outlines, like

Popper’s falsification criterion, can be used polemically or philosophically. (The

word consilience refers to the interlocking of causal explanations across

disciplines.) The concept could be explored ‘philosophically’ as the basis of an

open-ended research programme in science studies. His book (1998a) does this to

some extent. In the Science article (1998b) however, consilience and Wilson’s

pithy definition of science have the effect of ‘closing the subject down’. The result

is a simple, uncontroversial definition of science that gives him a foundation on

which to introduce more controversial ideas about the relations of science to other

disciplines. When consilience is invoked ‘polemically’, a quite different research

programme is proposed which involves an expansion of science. The definition of


science leaves little scope to challenge Wilson’s terms of reference, which has the

effect of ‘naturalising’ them. This means that claims that might otherwise seem

imperialistic and problematic are taken to be simple and straightforward. For

instance:

The continuing quest for [...] inborn biasing effects promises to be the

most effective means to understand gene-culture coevolution and hence

to link biology and the social sciences causally. It also offers a way, I

believe, to build a secure theoretical foundation for the humanities, by

addressing, for example, the biological origins of ethical precepts and

aesthetic properties in the arts. (Wilson 1998b: 2049)

Out of context, this extract is an assault on the autonomy of the humanities

that might inspire some sort of defensive reaction from philosophers or historians.

In the context of the article as a whole, however, the suggestion emerges as a

natural corollary of the definition of ‘science’. The article as a whole demonstrates

many different strategies for shifting the boundaries of science and is thus a good

example of a popular context acting as a forum for negotiation. (Despite appearing

in Science, we should still consider the article to be a popularisation or at least

‘downstream’.)

The idea of consilience excludes some claims to knowledge because they

don’t ‘fit in’ to the rest of science. At the same time, other areas of knowledge that

have previously enjoyed autonomous status such as aesthetics are shown to be

peripheral branches of science. All philosophies are available to be used like this.

Steven Weinberg speaks of physicists using the insights of philosophers to protect

them from the preconceptions of other philosophers.

The value today of philosophy to physics seems to me to be something like

the value of early nation-states to their peoples. It is only a small

exaggeration to say that, until the introduction of the post office, the chief

service of nation-states was to protect their peoples from other nation-

states. The insights of philosophers have occasionally benefited

physicists, but generally in a negative fashion – by protecting them from

the preconceptions of other philosophers... (Weinberg 1993: 132)

This is the way, in this instance, the question ‘what is science?’ is invoked

by Wilson. The question is not explored for its own sake; indeed, it is not explored

at all. It is invoked to pre-empt any resistance to his expansionist adventures. By


ostensibly approaching the question philosophically, Wilson disguises his more

practical imperatives. The question ‘what is science?’ is answered implicitly by

the way he orients himself to other researchers and the hierarchy he constructs

rather than explicitly by his pithy definition and list of traits.

Thus, we can understand Wilson’s claims in two ways: they are both a

considered philosophical position and part of a struggle over authority. There is no

fundamental difference between these but the distinction allows us to see how a

philosophy is used in a particular text. Wilson can choose to invoke ‘consilience’

in different ways – his motivation dependent on the circumstances. He can choose

to engage with Popper, Merton, Kuhn and other philosophers, sociologists and

historians of science in a spirit of open and disinterested debate or he can mobilise

his philosophy to defend the status of science and gain new territory. In the

Science article, the latter option is chosen.

The discussion so far makes Edward O. Wilson’s article seem

disingenuous at best and possibly even devious but this is not the case. An

important point to stress is that the article is not devious or even exceptional.

Defining the boundaries of science is a normal aspect of popular science.

Merton’s essentialist conception of science was, like Popper’s, motivated

by the need to police the boundaries of science (Gieryn 1995: 399). His norms

were conceived in 1942, the era of ‘Aryan Science’. He was responding to the

threat posed by the Nazis, in particular the way they had invoked the authority of

science. Merton’s essay was an intervention in a gargantuan ideological struggle.

The introduction to The Normative Structure of Science (1973: 267-278) describes

science in crisis. It is an alarm signal warning of an assault against the edifice of

science. The threat to science that Merton describes is analogous to the one

described by modern commentators:

The revolt from science which ... appeared so improbable as to concern

only the timid academician who would ponder all contingencies,

however remote, has now been forced upon the attention of scientist and


layman alike. Local contagions of anti-intellectualism threaten to become

epidemic. (Merton 1973: 267)11

He goes on to describe the boundary work that scientists were forced into

in the forties:

Incipient and actual attacks upon the integrity of science have led

scientists to recognise their dependence on particular types of social

structure. Manifestos and pronouncements by associations of scientists

are devoted to the relations of science and society. An institution under

attack must re-examine its foundations, restate its objectives, seek out its

rationale. Crisis invites self-appraisal. Now that they have been

confronted with challenges to their way of life, scientists have been

jarred into a state of acute self-consciousness: consciousness of self as an

integral element of society with corresponding obligations and interests.

(Merton 1973: 267-268, emphasis in original)

Merton’s essay is an intervention in the crisis facing science. It is a

polemical attack on the status of Aryan Science. The desire to discredit Nazi

scientists is what motivated his conception of science. The result is much more

than a mechanism for discrediting corrupt alternatives. Merton’s conception of

science is not intended to be a ‘last word’ in a sociological context, though it was

designed to be a last word in a polemical context. Though Merton’s conception of

science is rich and open-ended, his norms can also be invoked as a ‘pithy’

definition of science as a way of settling a dispute.

Another example of pithy definitions of science being employed in a

boundary dispute can be found in an exchange between a maverick solar physicist

and a mainstream meteorologist in a BBC television programme12. Piers Corbyn

claims to be able to make long term weather forecasts by observing solar activity.

He is secretive about his methods but maintains that they are ‘scientific’. He

established his credibility by placing bets with William Hill bookmakers rather

than through the usual peer review process. He now runs a successful business

11 Originally published as ‘Motive Forces of the New Science’, chap. 5 of Robert K. Merton, Science, Technology and Society in Seventeenth-Century England, Bruges, Belgium: Saint Catherine Press, 1938; with a new preface, New York: Howard Fertig and Harper and Row, 1970.

12 QED, BBC television, July 1996.


selling long-term weather forecasts. His company, Weather Action, was floated on

the stock exchange in the autumn of 1997, making Corbyn a millionaire.

In the QED programme the scientific status of his research was brought

into question by a mainstream meteorologist. The exchange was good-natured

though the meteorologist was clearly exasperated with Corbyn’s reticence. The

mainstream meteorologist maintained that a claim is not scientific until the

scientific community has judged it so. He cited John Ziman to lend this idea some

authority. In his defence, Piers Corbyn argued that the important thing about

science is that it is a method, a phrase he repeated several times.

Here we see a pithy definition of science being mobilised by Corbyn as a

way of dealing with objections to his research. If, as he claims, science is a

method then he can go on to argue that questions about publishing etiquette are

irrelevant. (If science is just a method then one can do it secretly and assent from

the scientific community is not a prerequisite of scientific knowledge.) Following

the argument through, an adequate measure of the validity of his research is the

money he has made from William Hill.

The actual definition of science (the idea that science is a method) is not

important in this example. More significant is the idea that the phrase ‘science is a

method’ comprehensively encodes the essence of science. Other characteristics of

science such as the peer review process are thus rendered secondary or peripheral.

Corbyn could have used another pithy definition of science (for instance, ‘science

is the sum total of true statements’) to achieve the same purpose.

We can understand this example as the boundaries of science being

negotiated in a popular forum. At stake are the basic epistemological foundations

of science but there is no will on either side to examine these in any detail.

Instead, the exchange is motivated by more mundane imperatives. Corbyn wants

the esteem of his scientific colleagues without having to give up his secrets. He

also needs scientific credentials to be able to market his forecasts. Corbyn’s

detractor wants access to Corbyn’s theory. He also wants to keep control of

meteorological knowledge generally and avoid competition over who counts as an

expert.

These are the reasons why (a) what counts as science and (b) who counts

as a scientist need to be decided in this case. Despite the philosophical nature of

the dispute, the way these questions are decided is as banal as the motivation for


raising them. Raising the question, ‘what is science?’ serves to close down the

argument, not broaden it. Invoking a pithy definition of science reinforces the idea

that science is straightforward. In a dispute situation this adds credence to the

(necessarily) simple solution being advocated: if ‘science’ is simple then a simple

solution to a boundary dispute is not only adequate but compelling.

What was the outcome of this dispute in terms of the boundaries of

science? The programme presented Piers Corbyn as a maverick and an outsider

and contrasted him with mainstream scientists who did not recognise him as one

of their own. To mainstream scientists his research was revealed to be unscientific

according to their own criteria. However, according to an interview in Saga

Magazine, Corbyn is “consummately proud of the programme” (Garner 1998). To

non-scientists, his failure to publish seems less significant and presenting him as a

maverick may even have enhanced his status. Corbyn conforms to the current

popular understanding of the word ‘scientist’ and his work, dealing as it does with

natural phenomena, looks like science to non-scientists. He may not have achieved

the esteem of his colleagues but Corbyn feels that the programme helped to

establish his reputation (which we must take to mean his reputation as a scientist).

His status as a scientist is what has allowed him to establish his business.

Corbyn’s success in business depends on his status as a scientist.

Mainstream meteorologists would like people to feel that Corbyn does not deserve

the privileges afforded to them. However, the definition of science that emerged

from the programme does not conform to mainstream meteorologists’ criteria; it

conforms to Corbyn’s. In this case, Corbyn won and mainstream meteorologists

lost. The result is that public scrutiny of knowledge is not an essential aspect of

‘science’ and Corbyn can enjoy many of the privileges of a scientist.

‘Pithy Definitions’ of Science: Summary

Philosophy, sociology and history of science as exemplified by, for

instance, Popper, Merton or Kuhn, give us good reasons for believing that the true

boundary between science and non-science is determined by factors internal to

science. If we are interested in the logic of scientific discovery or the structure of

scientific revolutions, then philosophy and sociology of science provide a

framework for investigating these. In popular contexts on the other hand,

discourse on the meaning of ‘science’ is more concerned with defending or


capturing territory than exploring the metaphysical subtleties of knowledge about

nature.

The nature of the putative true boundary is not important in understanding

how the categories science and non-science are applied in popular culture.

However, the belief in an objective and independent boundary prevents us from

seeing the work that goes into constructing the functional categories of science

and non-science. Because we believe that there are true boundaries out there

somewhere, we do not look at how we actually come to categorise them.

The question of how the categories are distinguished in practice is a

different type of problem. To understand how the boundaries between science and

non-science operate we cannot think like philosophers. We must put our questions

about the logic of scientific inference to one side and instead see the boundaries

emerging from negotiation. The negotiation is historically and ideologically

situated and the negotiators are characterised by their specific material interests.

When looking at boundary disputes and popularisations, philosophy of science is

not so much an intellectual pursuit as a rhetorical device.

The boundaries drawn between science and philosophy and between

scientists and philosophers are discussed in greater detail below. Using examples

drawn from The Ghost in the Atom (Davies and Brown 1986) and Steven

Weinberg’s Dreams of a Final Theory (1993) I will show how popularisations are

used as a forum to negotiate authority over philosophical questions.

Boundary Work

Problems with the Cartographic Metaphor

We have seen that there is a case for basing an account of popular science

on ‘what science is taken to be’ rather than on a priori criteria for proper scientific

knowledge. We have also seen some examples of people policing or modifying

the definition of science and the authority of scientists. Following Gieryn (1983),

we can call this activity ‘boundary work’. So far, however, the notion of a

boundary between science and non-science has been only vaguely defined. For

sociologists the mere existence of a boundary is often enough. There is no need to

develop a general ‘taxonomy’ of them. Indeed, the imperative for a sociologist to

avoid being prescriptive about science is often more pressing than any desire to


generalise. Sociological accounts place emphasis instead on the particular.

Following a discussion of boundaries with respect to parapsychology, Barry

Barnes (who Gieryn credits with theoretically setting up the ‘boundary

problem’)13 sums up his approach,

From a sociological point of view there is little more to be said about

[parapsychology] or about the boundary of science in general. The

boundary is a convention: it surrounds a finite cluster of concrete

instances of science without implying that there is any essence which

they share; the instances are the accumulated outcome of a historical

process of negotiation. Any attempt to eject instances from the cluster, or

to add instances presently rejected, is to employ the term ‘science’ in an

evaluative sense, and to participate in the process of boundary-drawing

which, as sociological observers, we should be describing.

(Barnes 1982: 93)

Gieryn (1994) discusses a variety of sociological approaches to the

boundary problem. His account includes a discussion of the sociology of

professions and of ‘social worlds’ (an idea that places emphasis on the workplace

as a site where diverse people meet). Gieryn also distinguishes four broad types of

boundary work: monopolisation, expansion, expulsion and protection (1994: 424).

In Cultural Boundaries of Science, this is refined to three ‘genres’ of boundary

work: expulsion, expansion and protection of autonomy (Gieryn 1999: 15-17).

However, to develop boundary work as a critical concept that can inform a

reading of popular science texts we need to be clearer still about the types of

boundaries that may be addressed and the types of boundary work that may be

undertaken in popularisations. The concept of boundaries is primarily a

cartographic metaphor in which we relate ‘social space’ or ‘intellectual space’ to

physical space. As Shapin and Schaffer note,

The cartographic metaphor is a good one: it reminds us that there are,

indeed, abstract cultural boundaries that exist in social space. Sanctions

can be enforced by community members if the boundaries are

transgressed. (Shapin and Schaffer 1985: 333)

13 Gieryn 1994: 424. This is presumably a reference to Barnes 1974: Chap. 5 – ‘Internal’ and ‘External’ Factors in the History of Science.


In Cultural Boundaries of Science (1999) Gieryn develops the

metaphor to a much greater extent14 and ‘cultural cartography’ is used

synonymously with ‘boundary work’.

[C]ultural maps locate (that is, give a meaning to) white lab coats,

laboratories, technical journals, norms of scientific practice, linear

accelerators, statistical data, and expertise. They provide interpretative

grounds for accepting scientific accounts of reality as the most truthful or

reliable among the promiscuously unscientific varieties always available.

(Gieryn 1999: x)

But, however useful it is, an analogy with borders between countries is not

sufficient to account for the range of boundary work that we see in popular

science. There are two main problems with the analogy: 1) It suggests that the

relations between two entities (science and politics, say) are defined by just one

boundary. In fact, there may be several boundaries between the two. 2) It assumes

that the entities themselves are relatively stable and just the boundary between

them moves one way or the other. However, boundary work in science can have a

radical effect on the whole entity.

Gieryn also recognises the second problem but seeks a solution in the

cartographic metaphor anyway. When discussing the Hobbes/Boyle debate as an

example of ‘monopolisation’ he gets around the problem by declaring the

existence of two maps rather than one – what was ‘inside’ and ‘outside’ for Boyle

did not necessarily correspond to any features of Hobbes’ map of the intellectual

landscape (1994: 424-425). However, as the idea of maps includes the

independent reality to which they refer, this is an unsatisfactory fudge. The

cartographic metaphor is over-burdened when Gieryn tries to account for how

boundary work affects the entities themselves.

Beyond the Cartographic Metaphor: ‘Dependent Entities’

Because ideas about maps restrict the notion of boundary work and can, in

some cases, lead to confusion between the referent and the representation, there is

a case for augmenting the cartographic metaphor with alternative conceptions of

boundaries. The most important addition is the idea of a ‘dependent entity’. In

14 Gieryn tells us that he has always been fond of maps and has a collection of over 300 of them (1999: vii).


considering the boundary between two entities such as physics and politics, we

need to consider the intermediate entity that links the two. One intermediate entity

in this case might be radiological protection.

The rapid growth of mobile phone use in 1999 made radiological

protection a fractious and controversial issue. There were concerns about whether

the microwave radiation from handsets was harmful to users and whether it was safe to

site base stations close to schools. At the same time, profits and jobs depended on

increased growth. All the actors recognised that physical science had a significant

role to play in determining the correct course of action but not all agreed on

what that role should be. The issues involved in radiological protection can

‘belong’ either to physicists or to politicians (or the officials empowered by

politicians). Boundary work between politics and physics, then, happens at an

intermediate site that is dependent on both of them. When the boundary work is

over, the intermediate entity (radiological protection) will still be dependent on

both physics and politics to have any meaning but will be sited within the (newly

established) boundaries of one or the other.

Rather than thinking of physics and politics as two dominions with a

common border, we should concentrate on the entities that depend on both of

them to be meaningful. Thus, there are many boundaries between physics and

politics – one defined by radiological protection; another defined by war

technology; yet another defined by science education, etc. In addition to the set of

boundaries between physics and politics, there are many others between physics

and art, physics and social science, physics and the penal system, etc. All the

boundaries make reference to a third entity that all the actors involved have some

sort of interest in. By way of illustration, in the case of physics and art, possible

dependent entities include: aesthetics; ideas about light; a technology used by

artists that depends on physicists for its development; a technology used by both

physicists and artists such as neon lights or lasers; etc.

The cartographic metaphor is still important – it is hard to think of

questions of intellectual hegemony in other terms – but we should not limit our

account of popular science to just those aspects that fit within it. We can list some

of the properties of boundaries that may be important when we apply the concept

to science texts. In no particular order then, we note:


• Between principal entities there can be areas that are defined by their

boundaries but do not belong exclusively to either of them. There are two ways

this can happen: 1) The intermediate area can be contested, that is, both groups

involved seek to gain the area for themselves. 2) The two groups can share the

intermediate territory. This is similar to the distinction between

‘common-land’ (agreed) and ‘no-man’s land’ (disputed).

• If an intermediate area is shared (as in the case of common-land) then rules

must be established (through negotiation: proposal and counter-proposal) as to

how it is shared. The rules are thus an important defining feature of the

intermediate area.

• Boundaries do not overlap. Although in figurative speech we often use

expressions such as ‘blurring the boundaries’, it is more useful to think of

boundaries between principal entities defining clear and distinct categories. In

particular, boundaries distinguish four categories: elements belong to one or

other principal entity, to both or to neither. (Which corresponds to whether

they are found within a boundary, within an agreed intermediate area or within

a disputed intermediate area.)

• Boundaries themselves can have significant characteristics. For example, they

can be virtual, practical, dynamic, fixed, or hostile. They can allow influence

to travel one way or both ways or not at all. For a geographical analogy, we

can turn to a boundary between areas of farmland. There may be a fence

between the areas or the boundary may be a ‘virtual boundary’ that exists only

in council records. A fence may have a practical purpose such as keeping

animals in or it may be purely symbolic. It may allow people to cross the

boundary but prevent animals from doing so or vice versa.

• Of particular interest is the question of whether the boundaries are stable or

dynamic. A dynamic boundary is one that is maintained by constant boundary

work. An example is a front-line in a war. The front may move very slowly or

not at all but this is only because both sides maintain it continually. As a

boundary, it would cease to exist as soon as the boundary work on either side

ceased. In contrast, a fixed boundary is one that does not require continual

work to exist (such as a fence between two fields on a farm). Some of the


boundaries between science and non-science are dynamic and some stable

(though still disposed to modification).

• Two principal entities can have more than one boundary between them

determined by different dependent entities or areas of common interest. (This

is one of the ways in which boundaries in science differ from geographical

boundaries.)

• In general, boundary disputes are not about the principal entities. They are

about which principal entity ‘owns’ an intermediate entity.

• There are usually good, practical reasons for division. (Good fences make

good neighbours.)

With these points as a guide we can return to the question of boundary

work and identify four activities: 1) establishing a new boundary; 2) moving a

boundary; 3) changing the characteristics of a boundary; and 4) changing the rules

that govern agreed intersections. Gieryn’s three genres of boundary work –

expulsion, expansion and protection of autonomy – can be understood within this

scheme. Expulsion can involve establishing a new boundary, changing the rules

that govern agreed intersections or changing the characteristics of a boundary or a

combination of all three. Expansion involves moving a boundary. Protecting

autonomy involves changing the characteristics of a boundary or changing the

rules that govern agreed intersections. The scheme may seem elaborate but, by

breaking ‘boundary work’ down into combinations of distinct activities, it greatly

simplifies descriptions of the process of negotiation.

Another point to note about boundary work is that groups from each of the

principal entities are involved. The demarcation of science is not necessarily

driven by scientists and their interests. This is a point that Gieryn does not make

clearly enough, for instance when he is explaining the cartographic metaphor,

Maps of science get drawn by knowledge makers hoping to have their

claims accepted as valid and influential downstream, their practices

esteemed and supported financially, their culture sustained as the home

of objectivity, reason, truth, or utility. Maps of science get unfolded and

read by those of us not so sure about reality, or about which accounts of

it we should trust and act upon. (Gieryn 1999: x)


Whilst this is true, it is a mistake to assume that the cartographers are

always ‘upstream’ and the map users always ‘downstream’. It is also a mistake to

assume that scientists are always keen to expand the boundaries of science and

that competing groups are always keen to stop them.

The major ‘scientific’ issues that have featured in public discourse over the

past decade have involved ‘cultural cartography’ or boundary work from many

different groups. Scientists qua scientists often had little to either lose or gain

from boundary work associated with, for instance, controversies over the

management of bovine spongiform encephalopathy (BSE) and the safety of

genetically modified organisms (GMOs). As a result, it was often not ‘knowledge

makers’ involved in drawing up ‘maps of science’ but other interested groups for

whom the way the maps were drawn made a material difference.

After the announcement of a probable link between BSE (also known as

‘mad cow disease’) and 10 cases of a new variant of Creutzfeldt-Jakob disease

(CJD) in March 1996, Stephen Dorrell (the minister for health) sought to distance

his government from the controversy that followed15. One way to do this was to

redraw the boundaries between science and politics in such a way that both the

cause of the problem and responsibility for its solution became ‘scientific’.

Dorrell deflected difficult questions by explaining that they were questions for

scientists, not questions for him as a minister. This response was applied even to

questions about restoring confidence in foreign markets and whether the

government was considering slaughtering the entire national herd and

compensating farmers.

Deflecting questions this way represents boundary work on very specific

boundaries between politics and science. The dependent entities involved were

agriculture and public health policy. Previously these had ‘belonged’ to politics

even though scientists have an interest in each of them and they require both

entities to have meaning. Dorrell’s response was, essentially, a proposal to expand

the boundaries of science in these two areas at the expense of politics. In many

areas scientists already have important advisory and arbitration roles and

15 Jasanoff 1997 provides a good account of the controversy from the perspective of the public understanding of science.


occasionally some real power so extending the boundaries of science in this case

is not a particularly radical proposal.

Even though the realm of science is expanded in this proposal, scientists

themselves stood to lose rather than gain from the move. However, scientists did

not stand to lose as much as the government stood to gain, so their defence of the

existing boundaries was not very forceful and certainly not very co-ordinated. One

response was from biologist and populariser Lewis Wolpert who was invited to

comment on the affair in the week after the initial announcement16. Wolpert

explicitly criticised Stephen Dorrell’s concept of science and forcefully rejected

(on behalf of his colleagues) any responsibility for the tough decision that needed

to be made. His rejection of Dorrell’s boundary work was summed up in a slogan

that he repeated several times: “science is descriptive, not prescriptive”. This

slogan serves the same purpose as a pithy definition of science (see page 159) – it

suggests that a competing conception of science is logically flawed and can thus

be rejected without reference to any merits it may have. However, it was not

Lewis Wolpert or other scientists who were important when it came to

maintaining the existing boundaries of science in this instance.

Even though scientists did not have a strong interest in the outcome of the

boundary work, other groups did and they ensured that the government was held

to account. It was important to farmers, for instance, to prevent questions in which

they had a direct interest from being dubbed ‘scientific’ and therefore placed totally out of

their control. Thus it was farmers, grocers, consumers, and opposition parties who

maintained the boundaries of science rather than scientists. They had a material

interest in doing so that was greater than scientists’ own interest in the particular

boundaries in question. As the controversy developed it became clear that

Dorrell’s proposal was not going to be accepted by anybody. The newspaper

journalists continued to address their questions to the government.

Returning briefly to the question of essentialist conceptions of science – it

may be the case that there is a ‘right’ answer to the question of whether Stephen

Dorrell was right or wrong. That is, there may be a way to arbitrate on such

matters unequivocally much as Wolpert tried to do. The important point is that it

16 The World This Weekend, BBC Radio 4, Sunday 26 March 1996.


does not matter whether there is a right answer or not. A true (essentialist)

definition of science would not be sufficient to explain the outcome of the BSE

controversy and the division of responsibility that emerged. The explanation lies

instead in the interests that different groups had in maintaining or moving the

conventional boundaries. The putative true boundary was irrelevant.

‘Science’ and ‘The Public’: Boundary Work and the Origin of Two Categories

Earlier (page 149), it was noted that the conception of boundaries

challenges two assumptions that underlie many accounts of science. The second

of these was that the boundaries are not disputed. The discussion of

negotiation in popular science texts highlighted some of the reasons that we are

unaware that authors are engaged in boundary work. It is worth also looking at

how the apparently stable boundaries that we recognise today have come about.

This will reveal that even our most strongly held beliefs about science and its relation

to adjacent entities were established relatively recently. To see how the modern

categories ‘science’ and ‘the public’ emerged we can draw on Steven Shapin’s

account, ‘Science and the Public’ (1989).

Two Categories: A Tempting Teleological Account

According to contemporary wisdom, the modern categories represent a

‘correct’ or ‘natural’ relation between science and the public. In the past, it may

have been different but as time has gone on mistakes have been rectified. For

instance, in the seventeenth and eighteenth centuries, there was a close association

between science and the church and today the two are very separate. This is

explained with reference to the essential qualities of science itself. In the past

people were mistaken about the correct boundaries between scientists and clerics;

as science has progressed, the close association between science and the church

has been revealed as inappropriate.

In general, then, we note that public structures were once powerful with

respect to science and that public concerns could influence the direction of

research and even the content of scientific knowledge. But now the public’s role

consists solely in acceding to scientific judgements and rendering support. The

last three centuries have seen an inversion of the power relations between science


and the public. The scientific community now controls its own proceedings and

even extends its influence to the arena of public affairs. The usual explanation for

the change is that, as science has progressed, the ‘natural’ relation between

science and the public has simply ‘emerged’ or ‘been unveiled’. That is, we

assume that the modern relations between science and the public had always been

somehow encoded in science itself. This explanation of the modern categories

‘science’ and ‘the public’ is dubbed the ‘canonical account’ by Shapin. It is,

however, fundamentally flawed. As Shapin explains,

There was nothing ‘natural’, ‘inevitable’ or ‘immanent’ in these

developments; they were massive historical achievements. The work that

allows us to apportion items to ‘science’ and to ‘the public’ was done in

specific historical settings, for specific purposes. (Shapin 1989: 992)

The weakness of this teleological account is that it tends to equate

description with explanation. The most effective antidote to these ‘teleological

temptations’ is to display the enormous labour that went into the construction of

these categories. The task Shapin sets himself is to,

[D]escribe and explain aspects of the historical construction of these

categories. On what bases, and for what purposes, have boundaries been

drawn between scientific and other forms of culture, between the social

role of the practitioner of natural knowledge and other social roles?

(Shapin 1989: 990)

Beyond Teleological Temptations: A Historical Account

Shapin provides a systematic examination of dimensions along which

scientists and other cultural workers have historically been distinguished. In

particular, he argues that Scientific Naturalism in the mid to late nineteenth

century provided a vehicle for establishing the modern social and cultural

boundaries between science and the public. Until the emergence of Scientific

Naturalism, ‘natural theology’ provided a bridge between scientific and lay

culture. With natural theology, scientists, clerics and lay people were linked by

what Robert M. Young has called a ‘common cultural context’ (Young 1985,

Chaps. 2 & 5) and the boundaries between them were more nebulous.

The first dimension that Shapin examines is ‘cultural competence’:

scientists are distinguished from lay people by the skills they have acquired. The


discontinuity of competencies between the generally educated public and

scientists is an historical phenomenon. Kuhn has shown that mathematical

sciences were the first to develop a gap of comprehensibility, which was evident

even in ancient times (Kuhn 1977: 311-365). Thus, the phenomenon is far older

than the social institution of professional science yet it is the professional state of

modern science that accounts for the widespread agreement in society about who

is an expert. This does not mean that natural knowledge is solely located in the

minds and texts of accredited scientists, nor is ‘what the public think’ just a

dilution of expert knowledge. Nevertheless, the boundaries between experts and

the public are easy to identify.

Distinguishing scientists and non-scientists according to whether they are

scientifically competent or not seems natural and unremarkable today. However,

even this most straightforward demarcation was contested. The cultural gap

between science and other knowledge was not always considered inevitable or

acceptable. For Paracelsus in the sixteenth century and his followers in the

seventeenth century, proper knowledge was properly public knowledge, generated

by using the knowledge-acquiring techniques of ordinary members of society. To

Paracelsus, science that was inaccessible to ordinary people ceased being science.

Shapin cites many other seventeenth-century attacks on the gulf between science

and public culture (Shapin 1989: 995-996). Most of these revolve around the need

for scientific knowledge to be open to public scrutiny. (In the seventeenth and

eighteenth centuries, ‘public scrutiny’ meant genuine scrutiny by a broadly

defined public. In contrast, a much narrower definition of ‘public’ is adequate

today. It is enough that scientific knowledge is open to the scrutiny of other

specialists.) What we learn from such examples is that the state of affairs we have

come to take for granted (cultural competence as a distinguishing characteristic of

scientists) is by no means inevitable. It is not a feature inherent in the very essence

of science itself but, rather, the result of identifiable historical processes.

The gap in competence led to a significant incongruity in seventeenth-

century science: the rhetoric of experimental science stressed the public character

of knowledge claims yet, at the same time, the ‘public’ who could be considered

qualified arbiters was diminishing. However, tensions between the rhetoric and

the reality only occasionally manifested themselves in the seventeenth century.


The complete dislocation between scientists and the public did not come until the late

Victorian period.

The Scientific Naturalist movement of the late Victorian period was key to

the process of dislocation. This was a systematic ejection, from what had counted

as scientific thinking, of those elements that had linked public and scientific

culture.

Anthropomorphic, anthropocentric and teleological views of nature were

identified by writers such as Huxley and Tyndall as fallacies of the public

(or clerical) mind...In the early modern period, the idea that man was the

measure of all things formed a heavily-trafficked bridge between science

and other forms of culture and between science and public discourse.

That bridge was dismantled by the triumph of Darwin and other

Naturalist scientists. (Shapin 1989: 996-997)

Robert Young has argued (1985: Chaps. 2 & 5) that the result of this was a

fragmentation of a previously common cultural context linking scientists, clerics

and lay people. Shapin suggests speculatively that this allowed the submergence

of lay perceptions of nature. Competing conceptions did not have to be

eliminated; it was sufficient, F M Turner argues, that they have no public forum

and no political purchase (Turner 1978).

Before the ‘power inversion’, scientists sought and required public

acknowledgement of their legitimacy. Robert Merton (Merton 1973: 228-253)17

has shown how in the seventeenth century the value of science was established

by and with respect to an alliance with Puritanism.

Puritanism ... imputed a threefold utility to science. Natural philosophy

was instrumental first, in establishing practical proofs of the scientist’s

state of grace; second, in enlarging control of nature; and third, in

glorifying God. (Merton 1973: 232)

As a consequence of its relations with Puritanism, natural theology was

one of the main vehicles by which scientists addressed the public. From the Boyle

Lectures (starting in 1692) to the Bridgewater Treatises of the 1830s, scientific

17 Originally published as Motive Forces of the New Science, chap. 5, Robert K. Merton, Science, Technology and Society in Seventeenth-Century England, Bruges, Belgium, Saint Catherine Press, 1938; with a new preface, New York: Howard Fertig and Harper and Row, 1970.


culture was entrenched in religion. The situation was challenged by the Scientific

Naturalism movement, which eventually managed to re-invent science and its

relationship to its public. Scientific Naturalism transformed the boundaries of

science, destroying the conceptual link between the scientists and the clergy.

Natural theology had ensured that authority over a broad range of issues was

shared between scientists and the clergy. The rise of Scientific Naturalism was

characterised by the eviction of the clergy from these areas. Where once there was

no distinction, a clear line was drawn between questions of theology on the one

hand and questions of science on the other with much of the original landscape

falling squarely in the realm of science.

The lack of a common cultural context has important implications for

communication. Shapin sees the divorce of scientific culture from public culture

as the significant moment in the genesis of our modern boundaries. From Shapin’s

account, the foundations of modern relations between science and the public are

crystallised into existence with the dismantling of the common cultural context

and the rise of Scientific Naturalism.

In considering scientific testimony and the problem of public trust, Shapin

again contrasts the ideal of seventeenth-century empirical science with its

manifestation in reality. The trustworthiness of scientific testimony was judged

using general social criteria rather than by criteria peculiar to science (for

example, you would trust the word of a gentleman). By the eighteenth century,

science was sufficiently institutionalised to decide what reports were to count as

scientific knowledge based on whether they came from within or outside the

scientific community. For instance, in the late eighteenth century the Paris

Académie refused to credit reports of meteorites from the public. This was

essentially a pragmatic move. If reports from the public conflict with what

scientists reckon is plausible then the costs to the scientific community of

addressing them are huge. This can be seen today for instance with reports from

the public of unidentified flying objects – how could they be investigated

‘scientifically’?

Steven Weinberg makes a similar point (Weinberg 1993: 38) when writing

about the wholeness of science and the impossibility of autonomous sciences.

Both he and Shapin recognise that it is essentially pragmatic considerations that

force science to limit the range of knowledge claims it will engage in. It is


interesting though to compare how each of them sees the criteria by which

significant claims are demarcated. The reason that Weinberg gives for not

applying himself to questions about astrology and divination, etc. makes no

explicit reference to the communities from which the claims are made. Instead, he

argues that the claimants themselves stress the ‘otherness’ of their reports but do

so without realising that this precludes them from being scientific. Their mistake,

Weinberg would argue, is to assume that ‘scientific’ refers only to a method and a

rhetoric, that is, as long as you look and sound like a scientist you are a scientist.

To be properly scientific, he would continue, claims have to be fully integrated.

‘Separate’ sciences are just different ways to look at the same thing and are never

autonomous.

Shapin thinks that it really is as simple as the adoption by science of a

principle: novel claims to knowledge that stem from the public are not science.

The general point for him is that membership and non-membership in the

scientific community is continually being negotiated and in the course of this

process what counts as knowledge is defined. In this light, Weinberg’s argument

looks simply like an effort to exclude the public from science – part of the

continuing struggle. The problem with this account of how trust is negotiated is that

it does not distinguish the different levels on which the negotiation is taking place.

The distinction made earlier between philosophical and polemical accounts of

science is valuable here. An account of ‘science’ can be used both to exclude

alternatives and to offer a rational point of departure for further exploration. These

functions are very different but are, nevertheless, difficult to distinguish.

Similarly, the rationale for choosing topics for scientific investigation incorporates

both defensive and ‘philosophical’ elements.

Another key dimension determining modern relations between science and

the public is what the public wants from science and how it tries to get it

(Shapin 1989: 1002-1005). The questions addressed by scientists have never been

randomly distributed across all disciplines. Work is concentrated, unsurprisingly,

in theoretical areas closest to pressing technical and economic problems. The

public’s various imperatives are brought together with those of scientists. A

programme for science emerges from the ensuing negotiation. The process

involves establishing boundaries between what science can and cannot be


expected to do; establishing science as a uniquely valuable cultural activity; and

establishing limits on scientists’ responsibilities.

The way science is distinguished from other activities has changed

significantly as the channels of public influence on science have changed. One of

the most important aspects of the modern relations between science and the public

is the dependence of science on public support in the form of state funding. Again,

this was not always the case. From the Renaissance until the beginning of the

nineteenth century, the dominant means by which science was supported by the

public was patronage (Shapin 1989: 1003). One of the most important problems

facing scientists is how to maintain autonomy at the same time as presenting their

work as useful to society. There was considerable variation in relations between

scientists and their patrons. However, as the burden of funding shifted onto

democratic governments the pressure to justify pure science on utilitarian grounds

increased.

Shapin identifies the United States as the place where this modern feature

of the relations between science and the public emerged most clearly.

The scientific community up to and including the nineteenth century had

argued for public support largely on utilitarian grounds. Pure science, it

was repeatedly said, would ultimately yield applied science and

economic benefits. In a democratic society, the state was justified in

spending public money on these grounds and on no others. The recipients

of public monies had to be publicly accountable. (Shapin 1989: 1004)

In the nineteenth century there were even constitutional impediments to

Federal support for any enterprise that could not guarantee contributions to

national welfare (Shapin 1989: 1004). This requirement for accountability remains

difficult to reconcile with scientists’ demands for autonomy. The freedom of

scientists to determine their own programmes of research is related in a complex

way to their responsibility to the society that funds them.

Scientists have to get the balance right. On the one hand, they want

autonomy and on the other, they want a central role in society. The boundaries

that define the limits of scientists’ autonomy and responsibility to society are

particularly important and particularly finely balanced. As we have seen in the

case of BSE (page 174), both scientists and rival groups (for example

industrialists, legislators, consumers) will sometimes want the boundary to move


one way and sometimes the other. The deft boundary work required is evident in

some of the examples in the next chapter.

The power of the ‘canonical account’ comes from its naturalness. Not only

does Shapin demonstrate that this naturalness is based on fundamental but

ultimately arbitrary assumptions, but he provides an account of the events and

social forces that brought the assumptions about in the first place. Shapin’s

account explains the modern relations between science and the public with

reference to the Scientific Naturalist movement in the nineteenth century. The

secularisation of nature was the key development as far as Shapin is concerned. It

is this that allows, for the first time, a radical distinction between the programme

of professional science and the moral concerns of the general public.

The modern public were to have no business with the framing of

scientific representations and with the conceptual content of scientists’

work. The converse did not, of course, apply. (Shapin 1989: 1005)

The eugenics movement, sociobiology and “much, if not the whole, of

modern social science” are cited to support these claims for the hegemony of

secular science. It is tempting to make this the end of the story – science has

triumphed – but this would be to ignore an important aspect of the changing

relations between science and the public. As Shapin has stressed, this has not been

a simple power struggle between two identifiable groups with clearly articulated

interests. Instead, various different groups with various different interests have

engaged at various different boundaries between science and non-science.

The situation has changed radically since the seventeenth century in that

the distinction between a scientist and a layperson is much clearer today.

However, this is the only respect in which the categories are coherent. We could not distinguish ‘the interests of scientists’ from ‘the interests of the public’ with the same ease. There are simply too many different interests involved at once.

In addition, individual interests are articulated in an ideological environment that

affects the significance they take on. Shapin warns against talking of the public’s

interest. As with economic goods, the conception of moral goods inevitably

divides the public into groups with differing interests. The roles that science takes

on in society are more various than one would think, given the ease with which

we speak about ‘science’ and ‘the public’. The significance science takes on for


the public is much richer and more complex than if it were simply seen as a source of economic wealth.

Historically, the public have wanted much more from natural knowledge

than technical and economic utility...A socially (as well as technically)

usable nature has been demanded of those entrusted with the task of

producing representations of it. (Shapin 1989: 1005)

Shapin’s account concentrates on boundary work to distinguish scientists

from non-scientists. The emphasis Thomas Gieryn places on the boundaries is

subtly different. He concentrates on how scientists negotiate with other groups in

society over what counts as science. In both cases, it is the social authority of

scientists that is at stake.

The examples of boundary work in popular science discussed below

demonstrate a much greater sophistication than is apparent from Shapin’s account.

The existence of a boundary between science and public culture leads both to

“problems of translation” (now understood in relation to contexts rather than

languages) and to unstable metaphors and analogies. However, it also serves a

creative function.

As we shall see, the physicists interviewed by Paul Davies in The Ghost in

the Atom (Davies and Brown 1986) interact with each other by reinterpreting rival

accounts of quantum mechanics using the boundaries as an axis. This is one way

the boundaries between science and non-science are employed within scientific

discourse. In public culture, the boundaries are also used creatively. Public

language, far from suffering from arrested development, self-consciously employs

several ways to speak about natural phenomena. In examples from literature,

theatre and politics, we see that the “resonances” between these discourses are fully

controlled and employed to dramatic effect. In short, the dislocation between two

contexts does not necessarily make communication between scientists and

non-scientists more difficult (though in some cases it can). It does, however, offer

creative possibilities within scientific discourse, within public discourse and

within popularisations of science.


5 Popular Science as a Forum for Negotiation

Scientists and Non-Scientists, Science and Non-Science

To the extent that the authority of scientists is questioned in everyday

discourse, it is generally assumed to stem from their privileged access to reliable

knowledge. Again, epistemological considerations merely distract us from how

authority is established in practice. (Epistemology is a red herring – scientists’

access to reliable knowledge is a different issue altogether.) We can see the nature

of the authority changing over time independently of scientists’ success or failure

in understanding nature. An example of the changing nature of authority is

discussed below with reference to the 1953 Reith Lectures of Robert Oppenheimer, ‘father of the atom bomb’ (page 190).

In this chapter, a variety of illustrative examples are outlined, which are

intended to explore the range of ways the existence of a boundary manifests itself

in popular texts. These examples draw on the theoretical approach to the

boundaries of science and to boundary work developed in the previous chapter.

Recent controversies such as those over BSE and genetically modified organisms

can fruitfully be approached from within the framework sketched here. In

addition, issues such as ‘creation science’ and the teaching of evolution in US

schools18 and the conflict between scientists and other intellectuals known as the

‘Science Wars’ as well as the older ‘Two Cultures’ debate can be understood as

boundary disputes. However, it is not my intention to deal with any episode

comprehensively. Only The Ghost in the Atom (Davies and Brown 1986) is

considered in any depth. On the other hand, the analysis here draws on several

detailed accounts of boundary work elsewhere. In particular, Thomas Gieryn, in formulating his conception of boundaries, includes detailed accounts of the

negotiation of boundaries between phrenologists and anatomists, scientists and the

18 Simon Locke (1994) provides a particularly insightful account of creation science from the point of view of the public understanding of science.


clergy, science and mechanics, and between science and the military over

questions of autonomy and national security (Gieryn 1983)19.

For any scientific issue that impacts on society, the distribution of

authority between scientists and other interested groups seems natural. It is

generally not questioned until (a) we understand how the authority has changed over time and (b) we understand the range of ways a scientific topic impacts on

society. When we address these issues, the distribution of authority seems more

arbitrary and tenuous.

For instance, there are many sites where nuclear physics is an issue to some extent: the policy and diplomacy of discharging waste into the Irish Sea; the safety of nuclear-powered spacecraft; energy policy; the use of uranium in artillery shells; the efficacy of test-ban treaties, to name just a few. For each

of these, the distribution of authority between interested groups including

physicists, the government, the military, non-governmental organisations, etc.

seems natural. We expect the government to defer to physicists in formulating

policy on discharges from Sellafield, for instance, but physicists have much less

authority when it comes to decisions about the deployment of uranium-tipped shells. In decisions about the use of uranium in shells, we defer to military

experts. The military aspects of the question seem more pertinent than other

aspects and the authority of physicists is limited to purely technical questions

about half-lives and activity.

The (functional) categories of science and non-science are neither fixed

nor even rigid. They are in many cases dynamic. That is, the boundaries between

the categories (whether they change or not) require constant work to maintain

them (see page 172). There is no single boundary between nuclear physics and

society as a whole. Instead, there are intermediate dependent entities (see

page 171) where ‘boundary work’ occurs. In the case of nuclear physics, these

dependent entities include those listed above (the safety of nuclear-powered spacecraft, etc.).

Before returning to nuclear physics, we shall turn to an example of

boundary work that to modern eyes may seem extreme. This will set the scene –

allowing us to see boundary work in more subtle examples.

19 See also Gieryn 1994 and Gieryn 1999.


A Victorian Vision: Physicists as Torturers

I have argued that the job description of physicists, their place in society,

is subject to constant negotiation. At any moment, the terms of the negotiation

appear to be entirely natural; whether one resists or supports change, shifts in the

boundaries rarely seem radical. What it means to be a physicist changes gradually

even if, in a Kuhnian sense, scientific knowledge is subject to revolutionary

change from time to time. However, proposals that seem natural in one context

can seem bizarre in another. With the passage of time, a proposal that would have

been unremarkable to contemporaries is revealed as a major transgression of

boundaries and redefinition of ‘scientist’. One such proposal from the late

nineteenth century is that the civic duty of physicists should include torturing

criminals.

The proposal was made in a book entitled The Unseen Universe (Stewart

and Tait 1886), which deals simultaneously with questions of social organisation,

economics, theology and thermodynamics. The Unseen Universe (originally

published anonymously) presents a physical argument for a belief in the

immortality of the soul based on thermodynamics. The book attracted critical

acclaim from such prominent physicists as James Clerk Maxwell and went

through fourteen editions in thirteen years (Myers 1985: 50, 55). The authors

revealed themselves by the time the fourth edition was published. Balfour Stewart

and Peter Guthrie Tait were both famous physicists and popularisers. Two years

earlier Stewart had published a textbook on thermodynamics (Stewart 1873).

Before that he had, with Norman Lockyer (the first editor of Nature), published

popular articles on thermodynamics and theology (Myers 1985: 50).

The Unseen Universe is an interesting example to look at from the point of

view of boundary work because it makes claims about theology, social order,

economics and thermodynamics all at once. Imagery from one field is employed

to naturalise a description in another field and the boundaries are transformed in

the process. Greg Myers has traced the way commonplaces of social rhetoric were

reified in the language of thermodynamics as physicists struggled to articulate the

new theory in the early nineteenth century. Later, after thermodynamics was well

established, the theory itself appeared to support the ideology that gave rise to the

commonplaces employed in its formulation (Myers 1985). The section of The


Unseen Universe examined here is just one extract that is itself an aside and does

not deal with the main subjects of the book20.

The extract speaks both about the role of physics in providing

technological solutions to social problems and the role of the physicists. That is, it

establishes physics as a subject that is valuable to society and proposes an

extension of the role of the physicist. The authors are addressing the rising levels

of violent crime and lamenting the transition from “merry England” with its “jolly

and chivalrous wrestlers and boxers” to the modern “hell of running-kicks,

garrotting, gouging and stabbing”. The subject is raised as if it owes its place in

the book not so much to its relevance as to its importance. The rise of violent

crime seems to be a problem that demands immediate attention and this is why it

is allowed to interrupt the flow of the argument. The subject and the way it is

invoked look very much like a ‘moral panic’.

Now creatures in the likeness of men vent their despicable passions in

murderous assaults upon women and children. But science hints at an

effectual cure. It is probable that before many years have passed,

electricity...will be called upon by an enlightened legislature to solve this

desperate social problem. Imprisonment has been tried in vain, and,

besides, it involves great and needless expense. The ‘cat’ though

thoroughly appropriate, is objected to as tending to brutalise the patient

and render murder not unlikely. No such objections can be urged against

the use of electricity...For it can easily be applied so as to produce for the

requisite time, and for that only, and under the direction of skilled

physicists and physiologists, absolutely indescribable torture

(unaccompanied by wound or even bruise), thrilling through every fibre

of the frame of such miscreants. (Stewart and Tait 1886: 143-144, italics

in original)

During the period that The Unseen Universe was enjoying popularity,

physics itself was undergoing a process of professionalisation. This required

support from society at large, which meant physicists had to present their

discipline as worth funding. As Myers points out however, the value of physics

was not immediately apparent. It had not been employed in natural theology as

20 For a more detailed account of The Unseen Universe see P. M. Heimann 1972, The Unseen Universe: Physics and the Philosophy of Nature in Victorian Britain.


much as natural history or geology, and, unlike mathematics, it had no long

tradition in the university. (Indeed, the subject was still often called ‘natural

philosophy’, so lacking was ‘physics’ in heritage.) In particular, though, physics

had not yet proved its practical value in the way chemistry had (Myers 1985: 40).

Electricity was something of a watershed for physics – its first unequivocal

triumph. It was the first time that theory had pre-empted any technological

applications. Nobody knew what would come from the development of electricity

or how physics would be transformed. In giving their assessment of the potential

for electricity, the authors were going much further than an objective description.

They were suggesting an extension of the areas where physicists’ expertise is

applicable and presenting this as a natural development.

As it turned out, the authority of the physicist was not extended into the

field of correctional policy and, far from being seen as the protectors of public

morality, torturers are universally despised. Electricity is used in torture today

(Amnesty International reports the use of cattle prods and similar tools on victims)

but nobody would feel that physicists qua physicists have any special interest in

the matter. The punishment of offenders, especially their torture, can no longer be

considered even remotely relevant to physicists. A similar proposal to extend the

role of physicists in this way would not and could not be made today.

At the time Stewart and Tait were writing, special circumstances made

such a suggestion reasonable. Firstly, a moral panic demanded attention from all

sections of society because it was treated as a priority. Secondly, electricity was a

new technology and it was impossible to predict the social relations of its

application. Thirdly, electricity was the first technology to emerge wholly from

pure science and it was reasonable to assume that physicists would keep control of

all its applications (or, rather, it was hard to imagine them losing control). Finally,

there was still much more overlap between science and morality, economics,

theology etc. than there is now. The border regions (or ‘common ground’) were

broader in the 1870s because the ‘common cultural context’ (see page 179) was

still partially in place. Stewart’s writing in particular fits into a natural theology

tradition rather than reflecting the ethos of Scientific Naturalism. However, the

hegemony of Scientific Naturalism is reflected in his insistence that his arguments

are physical arguments, as here in an article in Macmillan’s magazine written with

Norman Lockyer:


It is desirable to state clearly, and once for all, that our standpoint in what

follows is that of students of physical science. (Stewart and

Lockyer 1868, Quoted in Myers 1985: 52)21

The circumstances outlined above notwithstanding, the only place where a

proposal such as Stewart and Tait’s could make sense is in a popularisation.

Suggesting that physicists involve themselves in the punishment of offenders is a

proposal to move the boundaries of physicists’ authority into a new area. (It is a

claim to ownership of a dependent entity.) Within the confines of existing

disciplinary boundaries, there was no scope to discuss such a proposal. In

professional scientific literature, the authors would have had enormous trouble

demonstrating the relevance of their point. In a popularisation, however (especially one like Stewart and Tait 1886), the scope is much greater. Special

circumstances such as a moral panic can legitimately make an impact in a

popularisation but generally not in professional discourse (though moral panics

could under some circumstances influence funding).

Nuclear Politics and Nuclear Physics: Robert Oppenheimer’s Reith Lectures22

In the years following the Second World War the authority of nuclear

physicists extended into the realm of Cold War politics. Physicists were expected to

intervene – expected to fight the Cold War or fight against it. In a sense,

international politics was part of their ‘job description’ (there was a perception

that the Cold War was actually being fought on blackboards by scientists). If

nuclear physicists intervene in such issues today, they are seen to be acting

outside their realm of authority even though they are intimately involved in

creating and monitoring crucial technologies. The job description of nuclear

physicists no longer extends into questions of diplomacy, international politics, or

ethics. Put another way, in the years following the Second World War, the power

of nuclear weapons was associated with physicists as well as with governments. A

more modern perspective would reduce the status of physicists to that of other

21 The reference provided by Myers is incorrect. The correct issue is: Stewart, Balfour and Lockyer, J. Norman 1868 August, ‘The Place of Life in a Universe of Energy’, Macmillan’s, Vol. 20 p. 319.

22 The Reith Lectures is an annual series in which a prominent intellectual figure is invited to deliver lectures on a topic of his or her choice on successive nights over a week. The lectures are broadcast on BBC Radio.


workers with a strictly delimited role to play. Consequently, we do not look to

physicists for insight into modern threats to our security. Rather, we ask only

technical questions of them.

A case in point that can be contrasted with the situation in the 1950s was

concern in the 1990s about the export of fissile material from the former Soviet

Union. The easy access that terrorists and unstable nations have to nuclear bombs

is frightening but the kinds of questions physicists were asked were only technical

ones such as ‘how much plutonium do you need to make a bomb?’. Insight

into what this all means and what can be done about it was sought from

politicians, political commentators and security experts23. The question that faces

us here is: how did the change come about? In other words, how did physicists

lose their central role and authority in these issues? Before addressing these

questions though, I will discuss two opposing orientations towards the question of

the relation of science and politics:

One could argue that physicists’ understanding of nuclei, impressive

though it is, should never have been seen as giving physicists authority when it

comes to the use of nuclear physics. That is, questions about the application of

physics have always been in the realm of politics even if for a brief moment such

questions seemed to ‘belong’ to physics. Alternatively, one could argue that any

distinction between the production of knowledge and its application is arbitrary –

resulting, for instance, from our alienation from the ‘means of production and the

fruits of our labour’. According to this view, questions about the use of physics

have always properly belonged in the realm of physics; physics is itself political, and it is the present situation, not the earlier one, that is the aberration. Amongst modern

commentators, Lewis Wolpert might be expected to adopt the former position

whilst Hilary Rose might be more inclined to adopt the latter (though in both

23 We can also see the change reflected in the more downstream context of popular fiction. In The Sum of All Fears (Tom Clancy 1993/1991), terrorists obtain sufficient fissile material to build a hydrogen bomb, which they use to try to provoke a nuclear war between the United States and Russia. The account of the construction of the bomb is rich in technical details but these are contrasted with what we take the real issues to be (international politics, internal politics at the White House, etc.) The world is saved not by physicists but by Clancy’s hero Jack Ryan of the CIA. The book itself illustrates the creative use that can be made of the various boundaries between science and non-science (discussed in greater detail below in relation to ‘appropriations’ of science, page 210).


cases their actual positions would be more sophisticated than the pastiches

presented here).

What both positions share is a conviction that there is a natural relation

between physics and politics. The change in patterns of authority reflects the fact

that the perceived relation is moving towards or away from this ideal depending

on which position we adopt. A more radical approach would be to argue that there

is no natural relation between the two fields and the ‘proper’ boundary between

them is whatever it is taken to be. This provides us with several choices: do we

take the boundary to be natural or normative? If natural, what is the correct

relation between the two fields?

The answers to these questions will inform how we go about constructing

an explanation. If we feel that the current pattern of authority is closer to the ideal

than the situation in the 1940s then this will determine the structure of our

explanation. Our job in this case is to explain how the historical circumstances

could have produced such an aberration. If we argue that there is no natural

boundary then our account will be constructed to explain that any relation is

historically contingent.

Neither option is adopted here. Questions about the ‘true’ nature of the

boundaries belong to the realm of metaphysics. The account of popularisation in

this chapter does not support or refute any of the above positions. Here, the

answer to the question ‘how was the change in patterns of authority brought

about?’ remains at a pragmatic level. The question is not ‘what is science?’ but

‘given that people disagree about what science is, how is the answer decided and

what is the source of the conflict in the first place?’ Such an analysis has

application within several metaphysical frameworks.

Having affirmed the question as practical rather than metaphysical, we can

now pragmatically adopt a normative account of the boundaries. This amounts to

claiming that conflict over the boundaries of science can be understood as if there

are no natural boundaries but falls short of claiming that this is actually the case.

Such an analysis of the changing job description of nuclear physicists could be

incorporated, at a higher level of explanation, into one of the accounts above.

It has, over time, been generally agreed that questions of international

politics do not fall within a physicist’s remit – physicists qua physicists are not

reckoned to have any special right to intervene in questions of nuclear


proliferation for instance. The process by which this was agreed relied heavily on

popularisation. So, despite their key role, physicists have lost any authority they

had in these areas. In many cases, physicists shed the authority as fast as it was

being taken away from them.

Robert Oppenheimer is best known for having led the Manhattan Project;

he is often dubbed ‘the father of the atom bomb’. After the Second World War, he

intervened both publicly and privately in establishing international controls on

atomic energy and resisting the development of the hydrogen bomb. His security

clearances were revoked in 1953 (the same year that he delivered the Reith

Lectures). Writing in 1989, Freeman Dyson tells us that Oppenheimer was, “after

Einstein, the second most famous living scientist” (Dyson 1989: vii). Dyson’s

preface to the collected lectures goes on to paint a picture of a gentle polymath,

“equally at home in the world of literature and the world of science, in the

eighteenth century and the twentieth” (Oppenheimer 1989: xi). More intriguingly

though, Dyson tells us that listeners in 1953 were scornful and the Reith lectures

(which dealt with quantum mechanics) were not perceived to be a success.

The listeners in England expected hot news. They expected dramatic

statements about the great events and great issues of the day. They

expected a personal message from the man who in those days was widely

proclaimed to be the conscience of humanity. Instead they got these

lectures. They got a scholarly and impersonal discussion of the history of

science. They got a rarefied and philosophical view of the mysteries of

quantum mechanics. [...] No answers to any of the urgent political

questions of the 1950s. No glimpse of that inner world of action and

power in which Oppenheimer had been living for the previous ten years.

No wonder the listeners were scornful. (Oppenheimer 1989: vii).

To explain the expectations and reactions of the audience we need an

account that places due emphasis on how the boundaries between science and

society were understood in 1953. One explanation for the disappointment of the

audience was that Oppenheimer omitted to deal with pressing issues that his

audience believed to be within his remit as a nuclear physicist. That is, he gave an

incomplete account of the atom. We can rephrase this point in terms of the

dependent entities by which the boundaries of physics were determined.

According to his audience, intermediate entities such as the morality of war,


nuclear proliferation, diplomacy, etc. ‘belonged’ as much (if not more) to

physicists as they did to any other group. Oppenheimer, as a senior figure in

physics, had (in the minds of his audience) authority over these areas as surely as

he had authority over questions about the mass of the neutron. It seems perverse

for Oppenheimer to set out to undermine the authority that physicists commanded,

but that is effectively what he does by failing to speak about these issues. In

essence, he shifts the boundary of physics backwards – giving up territory (that is,

giving up control of dependent entities). The question is, why does he not

consolidate his authority instead of passing it on to other groups?

Dyson explains Oppenheimer’s reticence on the subject of the Cold War

with reference to his desire, “to speak to the ages, to say something of permanent

value” (Oppenheimer 1989: viii). This perhaps explains his boundary work: in

part, it is an attempt to reclaim for himself and other physicists some of the

territory they had lost when the boundaries with the dependent entities mentioned

above shifted. The authority that was (at that time) important to Oppenheimer was

authority with respect to timeless issues such as the structure of matter. An over-emphasis on ephemeral matters such as politics undermined the unique status of

physical knowledge. Thus, we can see Oppenheimer’s reluctance to indulge his

audience’s desire for “hot news” as an attempt to reposition physical knowledge.

At the same time, we should note that the authority over dependent entities

that physicists had accrued since the War carried responsibilities with it. Just as

Stephen Dorrell was ready to give up authority over issues of public health

(page 174), so physicists in the 1950s may have been keen to concede territory

(control of dependent entities) to whoever was ready to accept responsibility for it.

Whether such a move was what Oppenheimer was trying to achieve is not clear.

Given his record of intervention in nuclear politics and his other involvement in

politics, it is unlikely. Nevertheless, such boundary work has been conducted at

various moments by physicists since the War24.

The point to note from this example is that questions about the boundaries

addressed in popularisations play a role in explaining how ‘successful’ they are

perceived to be. Analysis of boundary work in popular texts can thus be applied to

24 For an account of boundary work to distinguish the ‘production’ of science from its ‘consumption’ and so protect the autonomy of science from censure by government over questions of national security see Gieryn 1983: 789-791.


assessment of public understanding of science initiatives. However, in addition to

disrupting expectations and comprehension, boundary work can also play an

important role in structuring expositions as we see below.

Empowering and Disempowering Public Understanding of Science: Consensus Conferences

In 1995, Walter Bodmer argued that to understand genetics is to

understand that it is necessary to patent genes25. The implications of this assertion

are two-fold: firstly, it suggests that the shape of good legislation emerges

naturally from the science itself; and secondly, it implies that failure to support the

patenting of genes indicates that genetics has not been fully understood. In the

light of the analysis of boundary issues in this account, we can see this as an

attempt to modify the rules governing an intermediate entity (intellectual property

rights) – passing ownership of it to biologists. Thus, questions about patenting

become ‘scientific’ questions. As we have seen, ‘getting across’ Bodmer’s

message about genetics is only empowering if recipients of it are in a position to

challenge Bodmer’s preferred reading. If not, it is simply a case of scientific

knowledge being invoked to naturalise a position that stems from specific

individual and institutional interests.

There are genuine alternatives, at least in the case of public understanding

of genetics. Just six months before Bodmer made his boundary claim, there was

an experiment in empowering ‘the public’ by allowing them to set the agenda

(rather than ‘empowering’ them by telling them the ‘scientific’ answer to

controversial issues). The UK National Consensus Conference on Plant

Biotechnology26 organised by the Science Museum, London and funded by the

Biotechnology and Biological Sciences Research Council conceived the public

understanding of science as a two-way process rather than the delivery of technical

knowledge. The model for the consensus conference was developed in Denmark

in the 1980s. A panel of lay volunteers questioned expert witnesses and produced

a written report on plant biotechnology addressing issues such as the risks and

benefits, the possible impact on the environment and on developing countries, the

25 Building Bridges to Science conference organised by COPUS as part of the Edinburgh Science Festival, April 1995.

26 2-4 November 1994, Tuke Hall, Regent’s College, Regent’s Park, London. Final Report of the Lay Panel, http://www.ncbe.reading.ac.uk/NCBE/GMFOOD/ccreport.html (August 2000)


intellectual property issues raised, the prospects for effective regulation, and other

issues. Significantly, the report addresses the issues as they were perceived by the

panel itself, rather than working to somebody else’s agenda.

As with all scientific boundaries, the boundaries of biotechnology are

established in popular contexts. That is, the aspects of biotechnology that are

considered ‘science’ as opposed to ‘policy’ or ‘business’ or ‘ethics’ or some other

adjacent entity, and the limits of biotechnologists’ authority and responsibility are

determined through a process of negotiation – much of which is mediated through

popular texts. Generally, with regard to the boundaries of biotechnology, it will be

difficult to challenge the dominant or hegemonic position creatively within the

forums in which it is rehearsed. In the late 1990s, for instance, when plant

biotechnology became a source of considerable controversy in the United

Kingdom, its boundaries were subject to modification. That is, various interested

parties engaged in ‘boundary work’. However, the forums in which this boundary

work took place (such as proposals and counter-proposals in the tabloid press)

were ones that generally did not facilitate a sophisticated response to the

hegemonic position. The options were purely oppositional responses or clumsy

and inconsistent negotiated responses (see the comment on the positions from

which decodings can be constructed, page 40). Such responses can be dismissed

as ‘non-scientific’ (and therefore, not worth engaging with) by those with an

interest in the established construction of the field and its relations to adjacent

fields.

On the other hand, the 1994 consensus conference presented an

opportunity to genuinely and effectively challenge the dominant conception of the

boundaries. Although a dominant code (inevitably) structured the debate, the

volunteers were empowered to articulate oppositional codes more effectively and

(almost) to make the subject meaningful for themselves. For this reason, the

conference represents the best example of a ‘democratically empowering’ public

understanding of science initiative as opposed to a mere public relations exercise.

Interestingly, the conference was also an opportunity for biotechnologists

to shift boundaries themselves. Sceptics at the time suggested that it was a cynical

attempt by the Biotechnology and Biological Sciences Research Council to off-

load biotechnologists’ responsibility for controversial issues (such as the ethical

conduct of their own discipline) whilst maintaining their authority over any


question with biological content. Whether or not this was the aim of the BBSRC (and there is no reason to believe it actually was), it does not alter the fact that the

communication was genuinely bi-directional and it was the ‘public’ side that set

the agenda. The practical problem for the organisers remains working out how

such consensus conferences can set the agenda for mainstream discussion of the

issues.

Controversy in Popular Science: The Ghost in the Atom

Greg Myers’ distinction between popularisations and professional science

communication (Myers 1990: 142) was discussed in chapter 2 (page 57). To

reiterate: popularisations offer the reader a ‘narrative of nature’ – the actors are

objects of study (in the case of physics: electrons or stars or materials, etc.).

Scientific papers on the other hand offer the reader a ‘narrative of science’ – the

actors are the scientists and their equipment. Controversy poses particular

problems for popularisers. Having to report on a controversy forces popularisers

to provide a narrative of science alongside their narrative of nature. They must

orient their explanation with respect to rival accounts (preferably without

damaging the authority of scientists as a whole). The conception of science that

needs to be articulated is thus a much more sophisticated view of science and its

place in society than it would be otherwise.

There are two main types of controversy that get reported in popular

science27. The first are controversies in which scientific expertise impacts on

society. Examples of this type include some of the nuclear physics issues

discussed above (page 190) and the BSE crisis in the United Kingdom in the

1990s (page 174). Controversies that impact on society are usually dealt with by

shifting the boundaries between science and non-science in one direction or the

other. For instance, we saw above (page 174) how during the BSE controversy the

government minister responsible tried to extend the realm of science to include

responsibility for policy and a ‘spokesman for science’ tried to limit the

responsibility of scientists with the slogan, “Science is descriptive, not

prescriptive”.

27 For more detailed accounts of specific controversies in science see Engelhardt and Caplan (Eds.) 1987. See also Roy Wallis (Ed.) 1979.


The second type of controversy includes those that are internal to science

such as controversies over the correct interpretation of quantum mechanics. In

controversies internal to science, the boundaries between science and non-science

are maintained in their original positions and we are invited to see individual

theories as belonging naturally to one category or another. That is, the theory is

subject to modification rather than the boundary. There is a grey area between

these groupings. For instance, disputes between evolutionary scientists over the

development of human behaviour are conducted in both ways. (The outcome of

the dispute has political implications and ideology influences how data are

collected and interpreted but the debate is, nevertheless, highly technical and

mostly confined to specialists).

The Ghost in the Atom (Davies and Brown 1986) invokes several

boundaries (and therefore, several intermediate dependent entities) in an

exposition of quantum mechanics. One of the most important is the relation

between physics and philosophy. More generally, both the text as a whole and the

individual participants need to establish who can legitimately intervene in

questions about quantum mechanics. How the participants exclude rival

intellectuals from ‘their’ field is discussed in greater depth below. First, some

general issues concerning controversy in popularisations are addressed.

The Ghost in the Atom handles controversy and the boundary problems it

creates in a unique way. Because of its singular approach, it sheds light on more

common ways of reporting controversy and negotiating boundaries of authority.

Like most popularisations, The Ghost in the Atom is primarily about ‘the world’

i.e. the physical universe. The departure it represented in 1986 was to offer the

reader conflicting views of the world and apparently not choose between them.

Also, much of the discussion centres on the philosophy and practice of physics

rather than articulating the description physics gives us of the world. John

Wheeler for instance argues that, “we need to find a different outlook in which the

primary concept is to make meaning out of observation and, from that derive the

formalism of quantum theory.” (Davies and Brown 1986: 60); and David Bohm

laments, “The only insight available now is through mathematics: that’s the only

place people allow themselves any freedom.” (Davies and Brown 1986: 134).

The dominant view of popularisation (page 7) understands books like The

Ghost in the Atom as concessions by scientists to non-scientists. Communication is assumed to flow in one direction, from specialists to lay public, and

the goal of communication is education. But popularisations like this also serve an

important role amongst specialists. The necessary strictness of formal

communication in science leaves little room to express impressions of a field or to

speculate without strong justification and demonstrable relevance. In this respect,

physicists are much more constrained than other intellectuals are. But this is the

price they pay for other privileges (for instance, the chance to gain universal

assent for their work almost immediately by the peer review process). Magazines

such as Physics World and Physics Today offer slightly more room for speculation

and rumination but the main place for this kind of talk is still informal

conversations between colleagues. Quantum mechanics has been shaped as much

by this informal discourse as by the formal exchange of papers at conferences or

in journals. The history of quantum mechanics is littered with letters Einstein

wrote to Schrödinger, comments Bohr made to Heisenberg and long

conversations. For ‘on the record’ rumination of this sort, the main forum is

popular science books.

The Ghost in the Atom captures something of both sides of this informal talk. The

interview format is not unlike an informal conversation between peers. The fact

that both the original radio programmes and the book are not intended to

contribute to the greater body of science in the way a paper in a journal would

allows the physicists involved to speak on the record with impunity. As well as

allowing non-scientists insight into the way scientists actually think, this increases

the number of their own colleagues they can address at one time and provides a

framework for the disparate discussions going on in all parts of the physics

community. But there is an important balance to strike in popularisations. The

informality means they can record a type of scientific discourse that is otherwise

ephemeral but this must be balanced with the need to speak authoritatively about

their subject.

Science is characterised by an intolerance of disunity. This is one of the

most important features that distinguish natural science from the humanities and

social sciences28. When scientists fail to speak with a single voice, when there are

28 For popular accounts that adopt this position see Wilson 1998a Consilience: The Unity of Knowledge and Weinberg 1993, Dreams of a Final Theory.


alternatives to choose between, effort is directed with urgency to resolve the

situation. The Ghost in the Atom presents a selection of fundamentally different

interpretations of quantum mechanics. Because science is so strongly associated

with unity, disunity amongst quantum theorists is something the authors need to

explain carefully. One reason Davies and Brown must account for the disunity

over quantum mechanics is that it disturbs their non-scientific audience’s image of

science as much as it upsets physicists. In particular, it subverts the authority with

which physicists make claims about the world.

The Ghost in the Atom is ultimately about the world and not about

physicists or the practice of physics and this is the key to understanding how the

threat to the authority of physics is alleviated. What emerges from Davies’

introduction is a framework in which the different views of quantum mechanics

can support a single ‘narrative of nature’. Although the story is about the world ‘out there’, physicists have a clear role in it – the very contrariness of the interviewees

is woven into the story. At the heart of the conflict in the story The Ghost in the

Atom weaves is not the social relations of science or the intransigence of certain

physicists but the world itself. So even the controversy is understood in physical

terms. Before discussing The Ghost in the Atom, however, we shall examine how

the controversy is dealt with in other popularisations of quantum mechanics.

Most popularisations attribute the ‘problem’ of quantum mechanics to its

counter-intuitive nature and The Ghost in the Atom is no different in this respect.

On the other hand, many ‘standard’ accounts (those which adopt the consensus

view of quantum mechanics) defuse the controversy over its interpretation by

casting it in social terms. In this way, they establish a boundary between

objective, disinterested science on the one hand and social factors on the other.

The purpose of this boundary is to settle disputes within physics – not to confer

any special status upon physicists.

Alternative interpretations and the lack of consensus are shown in standard

popular accounts to result from an inability to face the implications of the

‘correct’ understanding of quantum mechanics. This intransigence is accounted

for in many ways but most commonly it is interpreted within two narratives. The

first is that an emotional attachment to pre-quantum physics prevents physicists

from coming to terms with the implications of quantum mechanics – they are just

‘stuck in their ways’ and ‘you can’t teach an old dog new tricks’. The second is


that ‘ideology’ (dogmatism) gets in the way of their seeing the truth clearly. The defusing of controversy is an integral part of standard accounts. It has the effect

of naturalising whatever view of quantum mechanics is being offered at the same

time as explaining any failure to adopt it in terms of human frailty or social

pressures as the examples below demonstrate. The implication is that the

interpretation on offer is just simple physics but alternative views reflect the

historical and social context in which quantum mechanics finds itself. Alternative

positions are transformed rhetorically from a physical argument to social or

psychological perversity.

The failure of many of the founders of quantum mechanics to endorse the

Copenhagen interpretation is dismissed in this way. Thus, we hear that despite

instigating the research that led to quantum theory, Max Planck was too

committed to a classical world-view to embrace it fully and too old to change.

This story may be true for Planck but it is repeated in the same form for anyone

who failed to embrace the Copenhagen interpretation. Even Louis de Broglie, who

was only an infant when Planck introduced quantum oscillators and who made

startling counter-intuitive leaps of his own, is cast in this way. De Broglie’s

doctoral thesis introduced the notion of ‘matter waves’. However, his later work is

dismissed by Steven Weinberg thus,

With such a doctoral thesis behind him, it might have been expected that

de Broglie would go on to solve all the problems of physics. In fact he

did virtually nothing else of scientific importance throughout his life.

(Weinberg 1993: 55)

This casts his failure to conform as a failure to perform. The reader may

conclude that de Broglie ceased to be active in science but he in fact continued to

have influence in some circles and published a great many papers. The extract

implicitly defines work of ‘scientific importance’ as that which contributes to the

emerging consensus. Again, this is a perfectly reasonable definition, especially for

a scientist, but nevertheless it is a little disingenuous. In particular, a definition

like this makes the equation of ‘consensus’ with ‘reality’ appear natural (i.e.

disguises the need for further justification).

Einstein’s critique of the Copenhagen interpretation is also interpreted

within this narrative. His much-quoted comment, “I cannot believe that God plays


dice” is used to represent him as an ageing physicist struggling but failing to come

to terms with the implications of quantum mechanics. An alternative interpretation

of this remark casts Einstein in the more heroic role of someone fully conversant

with the implications of quantum mechanics as a calculus, arguing boldly that it is

incomplete as a description of nature29.

The second way of defusing controversy involves attributing the existence

of dissenting voices to the constrictions of ‘ideology’ or equally to ‘philosophy’

(both of these are used synonymously with ‘dogma’). Max Born’s The Concept of

Reality in Physics (In Born 1962: 14-37) dismisses the non-Copenhagen

interpretation of a Russian colleague in terms of dogmatism. In two chapters of

his Dreams of a Final Theory (1993) Steven Weinberg achieves a similar result.

In ‘Against Philosophy’ he argues convincingly that it is counter-productive for physics to be too firmly rooted in any philosophical perspective and cites

mechanism and positivism (amongst others) as cases in point. In his chapter on

quantum mechanics there is the suggestion (or at least the implication) that

unhappiness with the standard account is the result of this kind of philosophical

dogmatism. His argument (to paraphrase it crudely) suggests that ‘proper’ physics

is pragmatic. The problems with understanding quantum mechanics, however, are

not practical problems but philosophical ones. (It is not the application of the

calculus that is problematic but deciding what we learn about the world in doing

so.) Therefore, Weinberg would continue, the ‘problems’ of quantum mechanics

are not really problems of physics, do not really matter anyway and are certainly

not worth a physicist’s time. Any physicist who does seem unhappy with the

consensus is failing to distinguish physical questions from their dogmatic

attachment to a philosophical perspective.

So irrelevant is the philosophy of quantum mechanics to its use, that one

begins to suspect that all the deep questions about the meaning of

measurement are really empty, forced on us by our language, a language

that evolved in a world governed very nearly by classical physics.

(Weinberg 1993: 66)

29 Such an interpretation is offered in a popular context in Gillott and Kumar 1995 and in a more technical context in Fine 1996 (especially chapter 2) and Cushing 1994.


This extract is also an example of the essential ‘reflexive element’ in

standard accounts of quantum mechanics – ‘if we were different then quantum

mechanics would be understood like any other “narrative of nature”’. In standard

accounts, the defusing of controversy and the explanation for the ‘problem’ of

quantum mechanics work in much the same way: Quantum mechanics fails to

give us a picture of the world and scientists fail to conform to the standard

interpretation because of human frailties.

We now turn to the controversy in The Ghost in the Atom. As this book

provides many different views of quantum mechanics, the absence of consensus is

dealt with differently. However, on the level of the interviews the rhetoric is the

same as standard accounts. The interviews are polemical and each interviewee is

an advocate of a particular view of quantum mechanics (though some are more

equivocal than others). In addition to explaining their own point of view, the

interviewees also need to explain how other eminent physicists can disagree so

fundamentally with it. Despite being pressed for time, the contributors cannot

speak wholly of their own interpretation because they are, at all times, aware that

they have competition.

The contributors compete by using a different kind of speech to discuss

alternative accounts. The physics of their colleagues is interpreted within a ‘social

narrative’. That is to say, the arguments and objections of each interviewee’s

opponents are discussed on a social level (on which the opponent’s failure to see

the truth clearly is interpreted socially or psychologically) rather than on the same physical level on which he speaks about his own science. This is what Greg Myers calls

‘ironic reinterpretation’ (Myers 1990: 107-108)30. However, the book as a whole

does not orient itself with respect to the controversy. Davies exhibits a

preoccupation with consciousness and clearly has a point of view but his rhetoric

does not defuse the controversy in the way Born, Weinberg and his own

interviewees do. Instead, controversy is actually celebrated as a triumph. This is a

bold move tantamount to arguing that ‘it is because physicists disagree so

fundamentally that they speak with authority’.

30 See also Mulkay 1985.


Davies’ own point of view is reflected in the choices he makes in

articulating the alternatives on offer. For instance, this is how he speaks about the

Causal Interpretation,

In spite of [the causal paradoxes associated with non-locality] some

researchers, most notably David Bohm and Basil Hiley [...] have pursued

the idea of a non-local hidden variables theory, inventing something they

call the ‘quantum potential’. (Davies and Brown 1986: 39)

To say that the quantum potential was ‘invented’ is entirely accurate but in

this context it carries connotations of arbitrariness. The nature of this choice is

clearer if we consider some of the alternative verbs he did not choose. He could

have said that Bohm ‘discovered’ the quantum potential or discovered that a

mathematical object in the formalism of quantum mechanics could be

‘represented’ as a potential. He could have said that Bohm ‘interpreted’ the

mathematical representation as a potential or that the quantum potential ‘emerges’

if you look at the problem in the right way. He could also have said that the

quantum potential has no physical basis and was ‘plucked’ from the air or he

could have let the quantum potential slip into the exposition with a preface like,

‘this has the result that...’

Despite the polemic of the interviews, the book as a whole does not ask

the reader to choose between interpretations. Instead, the different points of view

are brought together to tell a story about mystery. The book as a whole is about the

world and the place of mind in the scheme of things. What we learn is that the

world is strange and contrary and so the controversy over quantum mechanics is

just a reflection of nature – the world itself causes conflict and consternation

amongst physicists. Thus the conflict between the interviewees itself forms part of

the description of nature. The lack of consensus is attributed to the failure of the

world to be comprehensible rather than to any failure of physicists to comprehend it

coherently – it is a by-product not a deficiency. Disunity amongst physicists is a

metaphor for the discreteness of physical phenomena – the different

interpretations being analogous to a ‘superposition of states’. What we learn about the

world is that it can be characterised in mutually exclusive ways. This is the core of

the mystery.


To summarise, we have seen how the boundary problems created by

reporting controversy are usually dealt with by defusing the controversy. The

main way this happens is to ironically reinterpret a rival account within a social

narrative. Thus, ‘proper’, ‘objective’, ‘straightforward’ science is distinguished

from science that has been debased by human frailty or vanity or by any other

means. By ironically reinterpreting a rival theory, a scientist can explain how the

controversy arose in a way that does not threaten the epistemological status of

scientific knowledge or the social authority of scientists as a community. The

boundaries between science and non-science are maintained and employed by

disputing scientists. Rival theories are relegated to the category of ‘non-science’

by inviting us to consider the social imperatives of the scientists who support them. In

this way, controversy is presented as a normal part of scientific practice without

affecting the status of scientific knowledge.

In The Ghost in the Atom, the interviewees reinterpret each other’s

positions in the way described above (though both John Bell and David Bohm

take a rather pluralistic approach). They play their roles in the controversy just as

if they were writing their own popularisations. However, bringing them together

challenges the epistemological status of physics (or, at least, its success) in a way

that an individual popularisation would not. This is because the total effect of all

the ironic reinterpretation is that everyone ends up in the ‘non-science’ category.

The status of scientific knowledge is rescued by recasting the controversy as a

feature of the natural world rather than as a feature of the practice of physics.

This move brings an important distinction to the fore: that between nature

and scientific descriptions of nature. Again, emphasising the distinction between

these two categories in a popular context could pose a threat to the status of

scientific knowledge. This time, the threat is dissipated by casting physicists into a

new role. They become ‘heroic mediators’ or ‘shamans’ who alone can bridge the

gap between our blinkered everyday existence and the bizarre reality of nature in

her naked state. The role of the physicist in popularisations is discussed in more

detail below (page 209).

Negotiating Expertise: The Ghost in the Atom

As seen above (page 203 ff), rather than being defused, controversy is

incorporated into a physical description of nature. However, the heterogeneous


view of physics this requires potentially undermines the special authority of

physicists by removing the consensus, which is such an important aspect of it.

Davies and Brown could try to salvage the authority by stressing how provisional

each of the interpretations is and highlighting the expectation that the controversy

will one day be resolved. They do this to a certain extent but to put too much

stress on the provisional nature of interpretations would make physics seem less

successful than it is. More importantly, it would impair the metaphorical power

of the diversity of interpretations (outlined above). That is, the lack of consensus

could not itself be used as a way of describing quantum mechanics.

In any case, Davies and Brown do not have to salvage anything because

the authority of this account does not rely on consensus. The status of the

interviewees alone establishes the legitimacy of each of the interpretations, and their credentials, rather than the ‘scientificity’ of their accounts, signify their

scientific authority. A short paragraph before each interview introduces each of

the physicists – their interests, the interpretation of quantum mechanics they

advocate and, most importantly, their job title. The job title in this paragraph is not

merely informative – it labels the interviewee as ‘scientist’. It signifies that he

commands respect and is trustworthy, however outlandish and unconvincing he

sounds.

Again, this represents a choice. Standard accounts of quantum mechanics

do not have to say ‘you must take this seriously, outlandish as it is, because it is

advocated by an eminent scientist’; they can say, ‘outlandish as it seems, this is

how scientists agree nature is, but don’t take their word for it, anyone who looked

would see this for themselves’. This latter mode of argument has played a key role

in scientific rhetoric since the Enlightenment. Indeed, it is a central tenet of the

Enlightenment ethos: ideally, arguments should not have to rely on personal or

institutional authority. Of course, this does not prevent personal authority from

playing an important role in the development of science. Standard popular

accounts of quantum mechanics at least resemble the Enlightenment ideal. The

Ghost in the Atom as a whole relies on the seniority of the interviewees, though

the interviewees’ own rhetoric is modelled on the ideal.

All interpretations of quantum mechanics are outlandish and counter-

intuitive in some respects. Only familiarity can ‘naturalise’ them. Scientists are

uncomfortable with theories that draw on the authority of proponents but the


popularity of interpretations is closely related to who takes them seriously – they

need some ‘weight’ behind them if they are going to take off. This is one of the

reasons that the status of the interviewees is important to The Ghost in the Atom,

viz. that it is important to physicists themselves. For specialists and the public

alike, status allows readers to judge how seriously to take speculative talk

and how much effort to invest in thinking about it. However, for non-specialists,

the most important critical faculty they have at their disposal in assessing the reliability of scientific knowledge is to decide whether the source is trustworthy.

The interviewees can be disrespectful or even contemptuous of each

other’s claims, either directly refuting them or reinterpreting them ironically.

Opinions from elsewhere are dealt with more ambiguously. There is the

suggestion that philosophy, psychology and even Eastern mysticism may be

relevant to quantum mechanics but there is no suggestion that approaches from

outside physics offer competition to physical accounts in the sense that rival

interpretations of quantum mechanics compete. They may earn respect for their

imagery and the language they use may be appropriated for use in physical

arguments but they attract the ultimate contempt from scientists for being ‘not

even wrong’31. Davies speaks of parallels with Eastern mysticism but stops short

of commenting on how significant he thinks these are. The reference to Eastern

religions forms part of an explanation of holism and so is metaphorical – the

reader is not intended to dwell on the parallels.

... In other words, the part has no meaning except in relation to the whole.

This holistic character of quantum physics has found considerable favour

among followers of Eastern mysticism, the philosophy embodied in such

oriental religions as Hinduism, Buddhism and Taoism. Indeed, in the

early days of quantum theory many physicists, including Schrödinger,

were quick to draw parallels between the quantum concept of part and

whole, and the traditional oriental concept of the unity and harmony of

nature. (Davies and Brown 1986: 12).

The reference to physicists who have embraced Eastern mysticism is done

in such a way as to suggest ‘this is an easy mistake to make’ rather than to suggest

31 The first use of the phrase ‘not even wrong’ is often attributed to Wolfgang Pauli. (See, for example, Weinberg 1993: 206.)


that it is legitimate for them to do so. The point to note here is that Davies does

not defer to expertise from outside even where he notes its relevance. For

instance, he does not argue that psychologists have more insight into the

measurement problem than physicists because they have greater expertise with

concepts like consciousness. He concedes that the solution to the measurement

problem may be found in a coherent idea of consciousness but as far as Davies is

concerned, it is up to physicists and not psychologists to find it. In other words,

consciousness emerges as an intermediate entity dependent on both physics and

psychology but the corresponding boundary that Davies attempts to establish is

permeable in one direction only – from physics to psychology. Consciousness,

mind and many other issues emerge as intermediate entities between physics and

philosophy also but the boundaries in these cases are established in much the same

way as they are with psychology.

The question of who can and cannot speak about quantum mechanics –

who has to be taken seriously and who is ‘not even wrong’ – is intimately related

to the narrative Davies constructs. The picture we get is that the actual world is a

strange and alien place bubbling just beneath the surface of our everyday

experience. This is not an unfamiliar notion; we understand it the way we do The

Twilight Zone, Alice in Wonderland, Plato’s realm of ideal forms, Dreamtime, the

Little People, the Norse legends, the Holy Ghost, the diabolical work of Satan,

etc. The Ghost in the Atom presents quantum theory as a domain in which big

questions reside. The nature of the big questions is mapped out in a very particular

way though. The mediators for this strange land with its alien language and mores

are the physicists. This means that the big questions have to become ‘physical’

questions. The story we get is that physicists’ endeavours to understand the

physical world have been so startlingly successful that they are now in a position

to address, as physicists, many of the big questions about reality and

consciousness and even social relations that have traditionally belonged to other

researchers like philosophers and psychologists. We can compare this with the

way Edward O. Wilson orients himself to other fields in the humanities and

social sciences (see page 161).

Again, to understand the nature of the choice that this structure represents

it is worth considering the alternatives. One of the alternative narratives available

would present physicists pushing beyond their competence into areas in which


other researchers have authority. In this story, the big questions do not become

physical questions and physicists have to defer to the experts in other fields to

make progress. To construct the former narrative careful use has to be made of

those elements of explanations which come from ‘outside’ – the metaphors and

appeals to authority.

The Role of the Scientist in Popular Science: The Physicist as Shaman

The Ghost in the Atom characterises the topic to be popularised as the

ultimate impenetrable world and hence the ultimate frontier – a world where all

the mysteries of all times are located. Davies and the other contributors act as a sort of

‘brokers’ between this world and the reader. Any other claims to brokerage, from

Eastern religions or from philosophy, are put in their place. The motives of these

other brokers are questioned implicitly if not explicitly. In The Ghost in the Atom,

nature emerges as mysterious and occasionally malevolent. Physicists are the

heroes who venture into this mysterious, alien world. In this way, they are not

unlike shamans, and quantum mechanics is presented as a kind of ‘Dreamtime’.32

Shamanism is the belief that the world is divided into two realms, which

exist concurrently. There is the everyday world we experience and a shadowy

spirit realm, which underlies our experience. A shaman is someone who can

mediate between these two realms. The Ghost in the Atom portrays the world in a

very shamanistic way and, again, this represents a choice.

‘Shamanism’ is not the way that any physicist would explicitly

characterise quantum mechanics, not even Paul Davies, yet the overall effect of

the imagery adopted is this distinctly shamanistic outlook. In an un-shamanistic

manner, Davies argues that the job of physics is to produce models and test them

with observations: “is there ever explanation in physics? I mean, don’t we simply

make models and invent language for them?” (Davies and Brown 1986: 131,

speaking to David Bohm). However, he is keen on the idea of physics having

something to say about the role of consciousness: “I’ve always found it curious

that scientists should want to displace the mind or the observer from the centre of

things because it seems appealing to have us there. Why do you think that there is

32 ‘Dreamtime’ is an Australian aboriginal concept referring both to an ancient time when spirits walked the Earth and to the spacetime ‘adjacent’ to our own that is still inhabited by spirits. For a broad ranging introduction to shamanism (though not dreamtime) including a comprehensive bibliography see Vitebsky 1995.


this restless search...to find some sort of vestige of Einstein’s vision of objective

reality which doesn’t depend on the mind?” (Davies and Brown 1986: 76,

speaking to Rudolf Peierls). Like the spirit world of the shaman, the quantum

world is inhabited by wilful and capricious spirits. That is to say, the actors –

electrons, photons, interactions, etc. – are characterised in an anthropomorphic

way and imbued with complex psychologies. “Nature outmanoeuvres the wily

physicist... the interference pattern defiantly vanishes!” (Davies and

Brown 1986: 9, Davies’ introduction).

The choice this shamanistic outlook represents can be illustrated by

looking at a subtly different alternative. Instead of the physicist as shaman, Davies

could have constructed a ‘guru’ like identity for physicists. The role of a guru is

quite different to the role of a medium. (The word’s etymology is Sanskrit and it

means ‘teacher’. In particular, it is applied to Hindu spiritual teachers.) The Hindu

conception of the world is also different to a shamanistic one though both can

serve as metaphors for quantum mechanics. To understand the ‘ultimate reality’ in

the Hindu conception also requires a complete shift from everyday human

perception. It is holistic and stresses processes over and above objects. To

experience the world as it actually is, rather than as it appears, requires effort and

the guru is the one who helps you get there (more like a trainer than the Western

conception of a teacher). Richard Feynman characterises the role of the physicist

in a ‘guru’ way rather than a ‘shamanistic’ way in QED: The Strange Theory of

Light and Matter (Feynman, 1990).

The Other Side of the Fence: ‘Appropriations’ of Science

The dominant view of popularisation (see page 7) spontaneously creates a

boundary of its own. When science appears in popular culture it apparently comes

in two forms: on the one hand there are accredited popularisations that emanate

from within the scientific community and are directed out at the masses; on the

other hand there are various ways of appropriating scientific ideas.

‘Appropriations’ include art that incorporates scientific knowledge; entertainment

that deals with scientific subjects; and social or political discourse that

incorporates arguments from natural science.


According to the dominant view, it is accredited popularisations that have

the most significant impact on the public’s understanding of science and the goal

of popularisation is education. There are various motives posited for

‘appropriating’ scientific knowledge but none of them are entirely legitimate.

Appropriations can be a source of metaphor and analogy; they can present a

caricature of the institution of science; and they can be used as a way of hijacking

the authority of science to apply it inappropriately elsewhere.

The critical view of popular science emerging here, particularly the stream

metaphor developed in chapter 3, blurs the distinction between accredited

popularisations and other examples of public science in several ways. Most

importantly, the functions performed by popularisations are ‘richer’ than the

dominant view gives them credit for. Popularisations often make claims about

society as well as nature and play a role in negotiating the social boundaries of

science.

When a textbook is written for a student, the motivation is clearer. The

purpose of the education is fairly well understood by all concerned. The success

of the book can be measured by a student’s ability to pass exams in the subject. In

a popularisation on the other hand, even if the stated aim is to educate the reader,

the purpose of this education is far from clear. Popularisations have an implicit

social function that is usually absent in textbooks.

We should be clear about the distinction. Textbooks also have an implicit

social function, in the way they encapsulate what Thomas Kuhn calls a

‘paradigm’ (Kuhn 1970: 10-11). They define a field and in so doing prescribe the

activities of a community of scientists. Textbooks establish boundaries that let

researchers know if they are ‘proper’ scientists or not. However, the boundaries

being established are internal to science. They are not usually concerned with the

relations between science and society. This internal role of textbooks is evident in

popularisations also as we have seen in the example of The Ghost in the Atom.

Popularisations allow these internal boundaries to be negotiated even when a field

is too immature or too unstable for a textbook to serve as the stamp of formal


consensus33. However, popularisations implicitly address other boundary

questions and this is the social function that is lacking in textbooks.

Adopting Hilgartner’s stream metaphor (see page 34) obviates the need to

treat ‘appropriations’ of science as a category wholly different from what we have

been calling ‘popularisations’. Nevertheless, the way such texts orient themselves

with respect to existing boundaries may be very different from popularisations

that emanate from the scientific community itself. The further downstream we

find a representation of science, the more likely we are to find creative use being

made of the boundary. We saw in the case of The Ghost in the Atom how

boundaries were used as axes for ‘ironic reinterpretation’ (page 203). When

scientific notions are appropriated in art and literature, we see yet more ways of

employing the boundaries in discourse. The first example we turn to, however,

examines how the boundaries between art and science can be used to discursively

establish a place for science in the social framework.

Art and Science: Inside Information

While the function of popularisations can be more complex than our

expectations of the genre may lead us to believe, appropriations of science can be

more straightforward and carry less of an ‘agenda’ than we might expect. An

exhibition of electron micrographs at the Wellcome Trust’s Two10 Gallery in

London34 distinguished itself by being remarkably artless. The pictures were the

very same ones that are routinely used to illustrate textbooks and popularisations.

Placing them in a gallery transforms our expectations of them but the context is

the only difference (see previous chapter). The exhibition (including the captions)

is every bit as ‘educational’ as a straight popularisation of human anatomy but its

goal, mediated by its surroundings, is different. The same reading of the pictures

is available to us through the medium of a popularisation but the ‘gallery reading’

is inhibited by our expectations of the popular science genre. In a popularisation,

we assume the pictures are motivated by a utilitarian desire to communicate directly

and efficiently – in other words, artlessly. (Though there is also an expectation

that pictures in a popularisation serve an entertainment role, and so we expect

33 For a detailed account of this process in the case of cosmology see Jane Gregory 1998: Fred Hoyle and the Popularisation of Cosmology.

34 Inside Information, Wellcome Trust gallery, London, Autumn 1996. Catalogue: Ewing 1996.


shock or pleasure from viewing them.) In a gallery, we assume a quite different

motivation. If the pictures are evocative and resonant, we assume this is the

intention of the artist and not a by-product. The artist deliberately contrives any

resonances they have for us we assume. That is, we assume that there is a personal

if not ideological agenda behind the exhibition – scientific images have been

appropriated for a specific social purpose.

The distinction between ‘popularisation’ and ‘appropriation’ is not clear in

the case of Inside Information. It depends on how we choose to look at the

exhibition. This ambiguity means that the dominant view’s distinction between

legitimate popularisation and other public science is not very useful. (It serves no

analytical purpose but the distinction itself helps to maintain boundaries between

those who can legitimately call on scientific authority and those who cannot.) A

more useful way to begin to understand the distinction between the electron

micrographs in the exhibition and in popularisations is to examine, in each case,

the various boundaries that they impact on. This gives us insight into the role of

science in public culture that is unavailable to us if we insist on distinguishing

popularisations from appropriations. We can look for which boundaries are

invoked, in which direction they are moved and how the existence of a boundary is

used creatively. The different boundaries invoked are part of the context in which the

‘arbitrary elements’ of the micrographs are made meaningful (see chapter 2).

Placing scientific images in an artistic context (albeit one associated with a

scientific institution – the Wellcome Trust) implicitly addresses common wisdom

about boundaries between science and art. The exhibition could be seen as a claim

on territory previously acknowledged as the sole preserve of artists. Scientists

could be seen as extending their authority to include the aesthetic aspects of

representations of nature rather than their authority being limited to the creation

and validation of representations of nature. Conversely the exhibition could be

seen as an attempt by artists to wrest control of scientific artefacts (electron

micrographs) from scientists and claim a legitimate right to create their own

meanings with them.

Nobody would actually make sense of the exhibition in these black-and-white

terms, but the tension around this particular boundary informs the viewer’s

reading of the pictures. The ‘artists’ and the ‘scientists’ in this case are one and

the same, which also implies that there is no desire to ‘claim territory’ at the


expense of a rival group. In this case the deliberate aim is to create common

ground between science and art.

The relations of art and science are often considered in monolithic terms:

science and art are two worldviews with a boundary between them. Thus we

might be tempted to say of the Inside Information exhibition that its aim was to

‘blur the boundaries’ between the two. A more useful approach to the relations

between art and science is to consider the intermediate dependent entities by

which they interact. This allows us to conceive a multitude of boundaries and a

variety of ways of relating the two categories. The exhibition impacts on the

boundaries between art and science by suggesting that many intermediate

entities can be shared, and it offers proposals as to the appropriate covenant between

them. With this more precise conception of boundaries, though, the idea of

‘blurring’ them loses meaning.

There are various reasons for wanting to change the boundaries like this

and various material interests at stake. The mission statement for the Wellcome

Trust is, “Helping medical science to flourish”. In mounting the exhibition, the

Trust was seeking to relocate science in the centre of society rather than have it

perceived as independent and peripheral. At the same time, the exhibition served

to extend the ways science is perceived to be socially useful. The alliance with

artists involves some loss of autonomy for scientists but this is not too great a

sacrifice. It also has the benefit of disrupting negative conceptions of scientists as

unimaginative.

There are other voices in debates over the boundaries between art and

science. The exhibition is just one voice and it is open to reinterpretation by rival

points of view (with different vested interests in the boundary). Lewis Wolpert

offers an alternative conception of the boundary between art and science. For him

the boundary is singular and its key characteristic is that it is permeable but in

only one direction: Science may inspire art but art has nothing whatsoever to

contribute to science.

Among the confusions about the nature of science is a widespread

attachment to the idea that arts and sciences are basically similar in that

they are both creative products of the human imagination and that

attempts to draw a dividing line between them are quite wrong. [...] This

view, however, is misleading and possibly sentimental. Scientists are, of


course, creative, and do require a “vivid imagination”, but their creativity

is not necessarily related to artistic creations. It is only at a relatively low

level that creativity in the arts and in science may be similar: a level

which includes almost all human activities that involve problem solving,

from accountancy to tennis.35

Wolpert’s conception of science (exemplified in The Unnatural Nature of

Science, Wolpert 1992) places emphasis on its uniqueness as a cultural activity.

An alliance between scientists and artists threatens the unique status of science in

a way that a one-way flow from science to art does not. The balance between

autonomy and centrality is perceived differently by Wolpert than it is by the

Wellcome Trust. He does not argue that scientists have any special authority over

the aesthetic aspects of representations of nature. To do so would dilute scientists’

authority to arbitrate on matters of fact because this depends on the uniqueness of

science.

Wolpert rejects any suggestion that art and science are related, arguing the

contrary: that art and science are perhaps the “most different” areas of human

endeavour36. As an example of the difference, he claims that the appreciation of

visual art is totally unintellectual in contrast to the appreciation of science. This

defines the boundary between science and art in two ways: first, by defining

appropriate ways to orient oneself to each; and second, by reiterating the

commonplace distinction: science is intellectual and art is emotional.

The proposal to distinguish the two categories, if it is accepted, has

implications for how each can operate in society. The conception of the boundary

depends on adopting an orientation towards science that deprives it of certain

social roles. (At its core is the notion that science is primarily understood on an

intellectual level; this deprives it of an aesthetic role because aesthetic

appreciation is unintellectual and characteristic of the opposing category, art.

Other social roles are denied to art with the adoption of this boundary.) Wolpert

dismisses the current vogue for art/science initiatives as snobbery: art and science

35 Abstract to: Lewis Wolpert, The Unnatural Nature of Science, a talk at Art and Science: Two or Three Cultures? Interalia conference, University of Bristol, 27-28 February 1993.

36 Speaking about the Art/Sci initiative of the Wellcome Trust on Front Row, an arts magazine programme on BBC Radio 4, 7.20 pm, 21 July 1998.


are seen as the highest forms of culture and therefore snobs would like to see them

united.

For Wolpert, defending the status and authority of scientists involves

reiterating the uniqueness of science. This requires boundary work to prevent the boundaries

from extending into other realms such as art or politics or philosophy. In his view

science has nothing to gain from these imperialistic tendencies but much to lose. It

is also important that cultural influence flows across them in one direction only –

away from science. This trenchant conception of science puts Wolpert at odds

with others concerned with the public understanding of science. The following

extract is from Robin Dunbar responding to Wolpert’s review in The Sunday

Times (1995) of his book (Dunbar 1995a).

Unlike [Lewis] Wolpert, I do not believe that a wider appreciation of

science will come from setting scientists irrevocably apart from the rest

of humanity. My aim is rather to show that science is at root a natural

process, integral to our collective culture, and something that we can all

delight in. To claim otherwise inevitably leaves science a hostage to

fortune at the hands of scaremongers. (Dunbar 1995b: 2)

Dunbar and Wolpert’s primary objectives are the same. Both aim to

defend science from its detractors and create an environment in which science can

thrive. They are in broad agreement in their perception of ‘anti-science’ but

their tactics diverge. The differences between them can be understood with

respect to the priorities they attach to various boundaries between science and

society.

Another reinterpretation of the Wellcome exhibition can be found in a

Forum article (editorial comment) in New Scientist (Burgess 1996). Jeremy

Burgess is one of the artist/microscopists represented in the exhibition. His article

is a tongue-in-cheek suggestion that scientists facing cuts in funding should

produce works of art along the lines of Damien Hirst’s Mother and Child

Divided (1993). (This sculpture uses a cow and a calf that are each cut into two

symmetric pieces. The pieces are placed in four separate tanks of formalin.)

Burgess explains,

... there were two finalists for the [1995] Turner Prize who used scientific

techniques relabelled as art. Aside from the cow in formalin, there was

the endoscopy film. Butchers and doctors are perfectly justified in feeling


bemused at the sight of their daily experience laid out in galleries and

presented as worthy of contemplation by the art establishment.

(Burgess 1996)

What Burgess implies with his mock indignation is that much of what

artists do should really be considered as an appropriation of scientists’ labours. He

explains that “art has been dependent upon scientific advance for at least a

century”, arguing that the origin of impressionism was the invention of the metal

paint tube (because it allows paint to be used outdoors). Burgess also argues that the

presentation of science as art is an idea that stems from scientists, not the Turner

Prize contestants. He cites coloured scanning electron micrographs such as his

own in support of this claim.

The point is, all it takes to turn these things into art is the opinion of the

right person. [...] Convincing the art establishment to widen its horizons

is the difficult bit in my plan, but thanks to Hirst, Ewing [editor of Inside

Information, 1996] and others, this donkey-work has already been done.

(Burgess 1996)

The conception of the boundaries that Burgess communicates is that

scientists have a legitimate claim to much of the cultural status enjoyed by artists.

He acknowledges that this goes against current wisdom about who can call

themselves artists but he argues that his conception is nevertheless more natural.

To achieve the naturalness of his argument, Burgess is forced to offer a conception

of art as little more than recontextualised artefacts.

Dismiss from your mind any preconceptions about what constitutes art.

Hirst remarked after his award that his preserved cow was a message to

the future. Others have described it as an icon of our times. Well, our

modern research laboratories are jam-packed with icons of the present

that will be messages to the future. (Burgess 1996)

The correct relation between science and art according to Burgess would

shift the current boundaries of science well into the realm of art. The consequence

of living with the current boundaries is that science is missing out on funding to

which it is entitled (money currently going to artists). The boundary itself is

employed in the discourse; it allows Burgess to ironically reinterpret the history of

impressionism for instance. It also allows Burgess to reflect on art from the


theoretical perspective of the other side of the boundary and argue that art is

‘nothing but’ that which scientists already do.

Examining the article from the perspective of the labour its author puts

into shifting the boundary reveals much about its structure. In this case it explains

the article’s carping tone by revealing why Burgess is interested in extending the

remit of science and why, in this case, extending science requires an impoverished

conception of art.

In Burgess’ article, there is both a conception of an implicit boundary

between science and art and an effort to shift it. The implicit boundary is itself

used to reinterpret the categories and offer an alternative conception of the

boundary between them. In this way, it differs from many texts that ‘appropriate’

science for artistic purposes. These make creative use of an implicit conception of

the boundary between art and science but usually do not set out to modify it. They

rely on the perceived stability of the boundary. In the case of the Wellcome

exhibition, Inside Information, the boundaries were deliberately modified but

usually this is not the intention of the artist. The following examples address

non-polemical uses of science in literature, theatre and dance, concentrating on the

creative use of existing boundaries.

Boundaries of Science in Literature: Gut Symmetries by Jeanette Winterson

In Gut Symmetries (Winterson 1997), physics is employed to explore new

aspects of a familiar dramatic theme: the ‘eternal triangle’. It is the various

boundaries between science and non-science rather than the scientific ideas

themselves that are used to explore the lives of the characters. Alice (a high

energy physicist) has an affair with Jove (a string theorist) and then falls in love

with Jove’s wife Stella (a poet). Although her characters speak about physics, it is

not nature that Winterson is interested in but the characters themselves. Rather

than retelling what she has picked up from popular science books she appropriates

notions like grand unified theories and string theory and employs them to her own

ends, expressing them in her own voice. This is the first way in which the implicit

boundary between literature and physics is used creatively: Winterson’s audacity

makes the book uncommonly compelling and insightful. Physics is rarely used in

such a way and appropriating it like this seems ‘deviant’. Already our


preconceptions about how one should speak about topics such as high energy

physics are challenged.

The boundary could have been used in alternative ways. Often the role of

physics in fiction is to provide a stark backdrop to the emotional journey taken by

the characters. Science adds contrast to bring what is really important into relief.

An example of this use of science is the relationship between Mr Spock and

Dr McCoy in (the original) Star Trek. In most episodes, the real crisis faced by the

crew was a moral one. Dr McCoy responded to this emotionally and Mr Spock

responded ‘logically’ (logic and science were conflated in this aspect of the

series). The two were presented as thesis and antithesis that found synthesis in the

actions of the captain, James Kirk. The moral choices forced on Kirk and his own

emotional response to these were intensified by the contrast between McCoy and

Spock.

Star Trek established a boundary between science and emotion across

which moral arguments could be ‘reflected’. In particular, McCoy could ironically

reinterpret a ‘scientific’ response to a moral crisis. This would make Kirk’s moral

decision even more bold and emotionally charged. So common is this device

(science or logic on the one hand versus emotion on the other) that its absence in

Gut Symmetries is notable.

There is no binary opposition between rational science and the irrational

emotional responses of the three protagonists. The way Alice (the physicist) and

Stella (the poet) respond to their circumstances is contrasted in the book but not in

the way we might expect. Neither character is a symbol or a stereotype like Spock

and McCoy. They differ in the way they articulate their feelings – the imagery

they use and the resonances they pick up. But although each character does stand

on her own, the overall effect is of two women blending into one. We get a single

character with two voices and two histories. The boundary between science and

literature is used in this case to signify parallels rather than contrast.

By rejecting the ‘obvious’ opposition of science and emotion, Winterson can

explore the more subtle contrasts between her characters. The reader expects the

boundary between science and poetry to isolate Stella from Alice and Jove.

However, Winterson uses the boundary to unite the two women. Jove inevitably

acts as the counterbalance. This is a self-conscious manipulation of the boundary


that signals a more important distinction. Jove is clumsy, self-obsessed and

generally knavish – qualities, it emerges, he has to have for the plot to work.

The boundary that emerges is one between a utilitarian but sterile

conception of science on the one hand and a celebration of experience on the other

that unites poetic and scientific representations. This contrast brings the theme of

meaning and knowledge to the fore. Jove epitomises the sterile view: “The

shifting realities of quantum physics are real enough but not at a level where they

affect our lives”, he reminds us. “I deal in them everyday and I, like you, still

have to wash my underpants” (Winterson 1997).

There are three hurdles to forging a place for science in fiction, the least

important of which is that it is hard to understand. The difficulty of science makes

it hard to learn and hard to do but does not stop aspects of it from being taken up

by writers. There are many other arcane subjects addressed successfully in

modern fiction; Stephen Fry’s popular bestseller The Liar37 was about philology,

for instance. As long as we do not expect to learn any physics by reading Gut

Symmetries, understanding the details of string theory need not be a serious

problem for Winterson.

The two remaining hurdles are, firstly, that science is seen simply as a

collection of facts and rules and, secondly, that science remains the property of

scientists, who guard it jealously. These are both boundary issues. One popular

way of conceiving the boundary between science and non-science is to stress its

reliability at the expense of its power to evoke an emotional response. That is, the

characteristics of science that make it reliable and important are that it is artless

and direct, not evocative. We have already seen the effect of the third

hurdle (that science is the property of scientists) in relation to the use of scientific

images and instruments by artists.

The first hurdle (that science is difficult) is closely related to the division

of scientists from others according to the criterion Shapin called ‘cultural

competence’ (Shapin 1989: 993-994). (This was discussed on page 177 with

respect to how the relations between science and the public have changed over

time.) Readers expect to see representations of science within specialised genres

37 Stephen Fry, 1992, The Liar (London: Mandarin) ISBN: 0749305401

only (i.e. not in ordinary novels) because one of the defining characteristics of

science is that it is unintelligible to non-scientists. For the same reason, readers

are distrustful of non-scientific authors who invoke science because if the author

does not understand what she is writing about, how can she hope to make it

meaningful? The implicit boundary between science and literature is the reason

we ask this of physics and not other subjects.

The answer to the question in this case is that Winterson is not making

physics meaningful, she is using physics to give meaning to her characters’

experience. She ‘borrows’ physics in the same way as she borrows the experience

of Jewish immigrants in New York and borrows social relations in British

industry. She cannot claim any special expertise with these themes either but the

reader has different expectations of subjects like this. We do not expect to learn

about them so much as to reassess what we feel about them – to think again about

what we already know even if this does not amount to very much. In Gut

Symmetries the characters are what we are really interested in. There is a strong

temptation to think of physics differently.

If the reader thinks of physics as just a collection of facts about the world

and rules governing what goes on, it seems to be of no use unless it is fully

understood. If physics is used metaphorically, one would think, the metaphor must

surely fail because it would be an attempt to articulate a hazy idea in terms of

something even more poorly understood. How helpful is it to the reader when

Alice uses the Standard Model (a theory of the forces of nature) as a metaphor for

her three-way affair? This is a fairly familiar situation that non-physicists are

being asked to look at from a perspective which is totally alien to them (each lover

is equated with the strong, weak or electromagnetic force). If physics is just a

collection of facts that are either understood or not, then bringing these two ideas

together will not lend anything to our understanding of the characters. The fact

that Winterson is prepared to use physics in this way implies a rejection of this

view of physics. For her, ideas from physics are more than facts in the way that

somebody’s life history is more than a chronology.

Physics is evocative even if the rule by which it is often distinguished from

literature says the opposite. On whatever level it is understood, physics is

evocative. To non-physicists maybe only the language is evocative, along with

some hazy ideas of what it is for, but at least the words are available even if actual

knowledge and experience of the natural world is lacking. In other words,

Winterson explores the way the physics is evocative even to non-physicists who

may be totally mystified by it. Again, the only reason one would have to question

the legitimacy of this is that it is science and not some other subject. Whenever

science is dealt with in this way something causes us to assume that the book is

addressing what we know about the subject and not what we feel about it. This is a

big problem for Winterson: she cannot rely on her readers to approach the physics

in the same way as they do her other themes though they are written in the same

spirit. So how could Alice’s Standard Model metaphor be helpful to a reader

unversed in physics? First of all, by making the comparison, Alice (Winterson’s

protagonist) demonstrates that the Standard Model is available to her, if not to

anyone else, as a source of insight into her condition. This tells us about the kind

of person she is and also upsets the binary opposition between science and

emotion. To somebody familiar with it, the Standard Model could be a rich source

of imagery with its associated notions of coupling constants, symmetries and

spontaneous symmetry breaking. But for Winterson it is enough to simply hint

that this richness is available to Alice. Thus the demarcation of scientists from

non-scientists according to cultural competence is not challenged by this use of

science. On the other hand, the demarcation of science from literature by the

inability of science to evoke an emotional response is challenged.

A more interesting boundary that Winterson faces is the question of who is

morally entitled to represent physics. The ‘stories’ of science are considered the

property of scientists and scientists protect their moral rights. This is occasionally

true of other peoples’ stories and for good reason. At times, for instance, it is

necessary for the Jewish community to seek to control the way they are

represented by others. Gut Symmetries does not demand any such censure;

Winterson can write about the experience of New York Jews even though she is a

gentile from a small English town. The story of Stella’s father belongs to us all to

the extent that Jewish culture is an integral part of everyone’s heritage in the

West. But this is true also of science – if not of the stories of scientists themselves.

In general though, when we find science in fiction it is an authorised

version we get – one that mimics scientists’ own stories. The source of this

science is popular science books (a little further upstream). Upstream

popularisations include ‘instructions for use’: from Paul Davies and Stephen

Hawking we not only get insight into grand unified theories and black holes, we

also learn what we are supposed to find interesting about them and how we are

supposed to think about them. The theories belong to physicists and we are

offered a glimpse of what they mean to them. We are also told, ‘you can look but

don’t touch. Only we can play with these.’

Were Winterson not successful, the whole enterprise could be dismissed as

so much self indulgence. Because she pulls it off, when she erroneously tells us

that anti-quarks have not yet been discovered (Winterson 1997) the reaction of

physicists is likely to be disappointment rather than irritation. Once Winterson has

pointed the way, a physicist can see further creative possibilities in a more

complete picture of the physics.

Winterson writes as if science were just another source of metaphor, but she

is nevertheless aware of pre-existing boundaries. She can use these boundaries to

‘reflect’ ideas or to construct parallels or she can subvert her readers’ expectations

of them to dramatic effect. Whatever use is made of them, the boundaries cannot

be ignored.

Boundaries and Collaborations Between Artists and Scientists: Talking of the Sex of Angels

Implicit boundaries between science and non-science are employed

creatively in collaborations between artists and scientists. Existing boundaries are

also challenged by such collaborations, for instance they can be renegotiated as

the Inside Information exhibition attempted to do (page 212). Again, there are

various ways of using the boundaries and various reasons for wanting to redefine

them. Many commentators on the relation between the arts and sciences lament

the gap between them. They go further and advocate a union of art and science

and the establishment of a ‘common culture’. The goal of a common culture is

invoked as if it is a desire shared by a broad section of society. However, the

actual nature of the putative union and its implications vary widely in different

accounts. The debate about the division between arts and sciences was enlivened

by C.P. Snow’s intervention in 1959 (Snow 1969).

In recent years the term ‘third culture’ meaning a union of arts and

sciences has gained popularity. For instance, in 1993 a conference entitled Art and

Science: Two or Three Cultures?38 brought several commentators from the arts

and sciences together to examine the origins of the division between them. The

conference itself questioned the value of a third culture. In the programme, the

conference organiser, Richard Bright, asks of the relations between art and

science, “Is this mutually exclusive duality in any way culturally beneficial?” and

goes on to suggest that it might be. In Bright’s conception, a third culture should

not represent a complete union of arts and science (or an encroachment of science

into the arts) but the product of a productive dialogue between two separate

entities.

[T]he conference aims to act as a catalyst for mutual correspondence and

understanding. An understanding which is based not on the mere

acquisition of artistic or scientific knowledge, but on the understanding

of respective modes of thought, method and meaning. (Richard Bright,

Art and Science: Two or Three Cultures? 1993, Programme, p 3)

The first use of the term ‘the third culture’ was in an essay added to

Snow’s second edition of The Two Cultures (Snow 1969/1964). The term has

also cropped up in other contexts, with different connotations. Most notably, The

Third Culture was the title of a popular book by John Brockman (1995). The

cover of the book tells us:

The Third Culture is an eye opening look at the intellectual culture today

– a culture in which literature and philosophy have shifted into the

background. For, as John Brockman now argues, it is scientists – not

literary intellectuals – who have the most to say on the important

questions facing mankind. (Brockman 1995, back cover)

An American edition of the book39 speaks of a “common desire to be

recognised as today’s intellectual leaders” amongst the 23 scientists interviewed

by Brockman.

Brockman’s conception of the third culture is aggressively imperialistic in

comparison with Richard Bright’s (perhaps because Brockman is a writers’ agent and

publisher specialising in popular science). This is how he defines the third culture:

38 Art and Science: Two or Three Cultures?, 27-28 February 1993, University of Bristol, Organised by Interalia. 39 Touchstone Books, ISBN: 0684823446, May 1996

The third culture consists of those scientists and other thinkers in the

empirical world who, through their work and expository writing, are

taking the place of the traditional intellectual in rendering visible the

deeper meanings of our lives, redefining who and what we are.

(Brockman 1995: 17)

And this is what he has to say about the ‘traditional’ culture of ‘literary

intellectuals’:

In the past few years, the playing field of American intellectual life has shifted, and the traditional intellectual has become increasingly marginalised. ... [Traditional culture] uses its own jargon and washes its own laundry. It is chiefly characterised by comment on comments, the swelling spiral of commentary eventually reaching the point where the real world gets lost. (Brockman 1995: 17)

Brockman advocates an extension of the boundaries. In contrast, Richard

Bright (following C.P. Snow) advocates modifying the boundaries to allow more

communication across them. Brockman’s initiative is much more likely to meet

with resistance. The threat was already anticipated by Michael Yudkin at

C.P. Snow’s Rede Lecture in 1959 (Snow 1969):

There is a real danger that the problem of the ‘two cultures’ may

gradually cease to exist. There will be no building of a bridge across the

gap, no appearance of modern Leonardos, no migration of scientists to

literature. Instead there will be the atrophy of the traditional culture, and

its gradual annexation by the scientific – annexation not of territory but

of men. It may not be long before only a single culture remains. (Quoted

in Richard Bright’s introduction to the Art and Science conference

programme, emphasis added)

Brockman has fewer reservations about this possibility. His conception of

a third culture is just as Yudkin mapped it out forty years ago. The third culture is a

single, exclusive group:

[C.P. Snow] optimistically suggested that a new culture, a ‘third culture,’

would emerge and close the communications gap between the literary

intellectuals and the scientists. In Snow’s third culture, the literary

intellectuals would be on speaking terms with the scientists. Although I

borrow Snow’s phrase, it does not describe the third culture he predicted.

Literary intellectuals are not communicating with scientists. Scientists are

communicating directly with the general public. (Brockman 1995: 18).

In recent years several groups have emerged that aim to forge links

between artists and scientists. Two influential groups in this area are Interalia and

Arts Catalyst. Interalia was founded in 1990. It organises events such as the Art

and Science: Two or Three Cultures? conference with the aim of establishing

dialogue between distinct cultures. For example at a conference entitled

Connections in Space40, the sculptor Antony Gormley and the cosmologist (and

populariser) John Barrow gave talks about ‘space’ from very different

perspectives. The intention is that Barrow’s conception of space will inspire

Gormley and vice versa.

Gormley and Barrow (sculptor and cosmologist respectively) are invited to

collaborate by sharing the different insights they have on the same subject. Thus

Interalia aims to ‘bridge the gap’ between art and science by assisting in the

‘process of translation’ between sculptors and cosmologists. However, there is no

intention to dismantle the boundary between them. On the contrary, the boundary

is perceived as a valuable entity – the transmission of cultural objects across the

boundary transforms them creatively.

The Arts Catalyst, founded in 1993, seeks funding for theatre, dance and

visual art projects that involve collaborations between artists and scientists. They

also see their role as facilitating mutual understanding. One of the founders

explains,

Artists and scientists may use the same words, but they speak a different

language. It can take more than a few taps to break the ice at an initial

meeting. (Vivienne Glance 1995)

The motivation for starting the agency is a belief that there is untapped

potential in science. Nicola Triscott, the other founder, believes, “science is

hugely under-represented in the arts” and attributes this fact to the early

divergence of the British education system (private communication, 1995).

However, Arts Catalyst has attracted funding from the British Association for the

Advancement of Science (BA) and the Committee for the Public Understanding of

40 Connections in Space: From the Cosmological to the Personal, 22 March 1997, Royal Botanic Garden, Edinburgh. Organised by Interalia

Science (COPUS). The interest these two bodies have in Arts Catalyst projects is

not the same as that of the other funding bodies involved, such as the Arts Council. In

seeking funding from COPUS, The Arts Catalyst stresses the collaborative nature

of the work (Rob Le Frenais, curator, personal communication June 1998). The

primary way the Arts Catalyst contributes to the public understanding of science

is by the communication between scientists and artists. By itself, this relatively

limited communication would not be enough to attract funding. An important

element of the relationship is that the artists are themselves influential and attract

interested audiences ready to engage with their work. Arts Catalyst projects are

not primarily designed to communicate technical knowledge so we still have not

explained the willingness of COPUS to fund their development.

The aim, perhaps, is to reposition science in the popular imagination –

making it more central and extending the ways in which it is seen to be socially

useful. An allied aim may be to provide a ‘gateway’ into science. The public

understanding of science movement places much emphasis on the concept of

gateways and ‘bridges’ to ‘reach out’ to ‘new audiences’41. By appealing to

existing interests or concerns, public understanding of science practitioners wish

to transform people into the natural audience for popularisations of science, an

audience that thus acquires scientific knowledge as well as a supportive stance towards the

scientific community. The Arts Catalyst offers unique ‘gateway’ opportunities.

The positions of the boundaries between art and science are maintained in

such collaborations. What is modified are the channels of communication between

the two realms. Unlike the Inside Information exhibition (see page 212), the rules

governing the dependent entities involved are not challenged because crossing the

boundary is the significant act. Moving from one realm to the other provides a

creative tension that is exploited. This is a conception of the boundary that Lewis

Wolpert would be happier with except for the fact that parallels are also drawn

between the way scientists and artists make sense of the world. A clear boundary

is important for these parallels to have any impact. The premise is that science and

art are very different but there are striking symmetries. The existence of

41 For instance a joint BA/COPUS conference in Edinburgh was entitled, Building Bridges to Science: New Audiences – New Initiatives (1st and 2nd April 1995, Lister Postgraduate Institute). The conference concentrated on innovative methods to reach unwilling audiences including outreach programmes and inviting groups of women to spend the night in the science museum.

symmetries implies a high-level union without claiming that science and art are

‘the same thing’.

Unifying human experience in this non-imperialistic manner legitimates

both realms by giving each a new claim to relevance. With reference to parallels

drawn with art, science is understood as a means to address central questions of

human experience. This threatens a rival conception of science that renders it

either esoteric or utilitarian but always peripheral. Therefore, without shifting its

boundaries with art, the place of science in culture becomes more central. The

cultural position of art is similarly made more central by establishing parallels

across a well defined boundary. Arts Catalyst literature, in common with that of

Interalia, stresses the negative aspects of barriers rather than a serious desire to

shift the boundaries:

There still seems to be an invisible wall between what scientists actually

do and the public’s perception and understanding of science. Since its

formation in 1993, The Arts Catalyst has been working to break down

some of those walls – setting up dialogues between scientists and artists,

and commissioning new art projects emerging from these dialogues.

(Arts Catalyst Web site, http://www.artscat.demon.co.uk/tacind.htm)

An early collaboration involved Nikky Smedley (a choreographer) and

physicists from Imperial College, London. The result was a show called Talking

of the Sex of Angels that ran during National Science Week42 in 1995 at the Place

Theatre in Euston, London. The work was a portrayal in dance of aspects of

quantum mechanics and high energy physics (the branch of physics that deals

with sub-atomic particles). Five dancers dressed in childish costumes decorated

with large polka-dots used props including bouncy balls, alarm clocks and bizarre

sculptures. Sequences determined by throwing a pair of fluffy dice were

interspersed with a simple spoken text that was mirrored by the dancers. At one

point the dancers wrestle from the musicians the white coats they are wearing only

to find more coats underneath. Eventually a television was wheeled in and the

performers settled down with the audience to watch a large mouth painted with

stars talking about cosmology.

42 Also known as Science, Engineering and Technology week – SET95. A week in March dedicated to ‘public understanding of science’ activities, sponsored by the Office of Science and Technology.

The use of props was suggestive of certain ideas in quantum mechanics

but there was nothing didactic about the performance. It was humorous and

irreverent rather than educational or instructive. Without the programme notes you

would not guess that one sequence represents electrons and positrons being

created and annihilated and that another represents a double-slit experiment (see

page 119). The performance begins with one of the dancers exclaiming,

My name is God
Pass me the dice
I’m bored
And some action will be nice.

The use of dice is an allusion to both indeterminacy in quantum mechanics

and to Einstein’s often quoted claim that “God does not play dice”. The sequence

involving the musicians’ coats is suggestive of collisions between hadrons43 but

only if one is sufficiently familiar with high energy physics – it makes no attempt

to explain particle interactions. The episode is amusing whether one recognises

the ideas that inspired it or not. The movement itself is the important aspect of the

performance, not the physics it alludes to. The allusion is in any case rather

ambiguous.

The video sequence feels much more like an explanation. The audience

listens attentively as it would to a lecture. However, the sequence is a parody of a

mode of speech rather than an explanation. Terms such as ‘big bang’, ‘inflation’

and ‘antimatter’ are used in a playful way that apes the way scientists speak rather

than adopting their language. At the same time, the ‘everyday’ connotations of

technical language are exploited. At one point the big mouth asks, “What the

fuck’s the antimatter?” This got a laugh of recognition from the audience for

simultaneously alluding to feelings of mystification in the face of esoteric science

and to the baser connotations of scientific language that strike non-physicists more

forcefully than its technical meaning.

One of the main sources of humour in the performance is its irreverence.

Nikky Smedley self-consciously orients herself with respect to professional

science. The show is not a diluted interpretation of elite science rendered for mass

43 Hadrons are particles composed of two or three quarks (e.g. protons and neutrons each contain three quarks). Hadrons cannot be broken into their constituent quarks because quarks cannot exist in isolation. However, if there is enough energy, a collision can result in new hadrons being created.

consumption; it is, rather, a radical reinvention of science by somebody quite

defiantly on the ‘other side of the fence’. In the video sequence especially there is

a strong sense of ‘us and them’.

The show implicitly makes several claims about the boundaries of science.

Firstly it makes it clear that scientific ideas such as the interaction of hadrons (or

whatever it was that inspired Smedley) are available as a source of inspiration

whether they are accurately represented in a performance or not. Secondly,

scientists’ language and their own accounts of physical processes are available in

much the same way that the basic ideas are available. It is not necessary to

accurately represent a scientific account – it can instead act as a source of

inspiration and thus be used in a way for which it was not designed. Both types of

appropriation refer to a definite boundary and to clearly defined areas of authority.

However, whilst deferring to physicists’ authority over questions about nature,

Talking of the Sex of Angels legitimises ‘unauthorised’ accounts so long as they

make no definite claims about nature.

Boundaries of Science in Drama: Hapgood by Tom Stoppard

Hapgood is a play about British, Russian and American special agents that

was first performed in 1988. It made an impact by incorporating ideas from

quantum mechanics. Just as Jeanette Winterson is unable to ignore the boundaries

between science and non-science so too is Tom Stoppard unable to treat science as

‘just another subject’. Again, he mines physics for metaphors and, like Winterson,

he consciously manipulates the implicit boundary between science and drama. Gut

Symmetries is about the emotional turmoil of three characters rather than being

about physics. Hapgood is not about physics either, though it includes

descriptions of quantum mechanics as detailed and lucid as one would expect to

see in a popular book on the subject. It is a spy thriller and so the principal aspect

of the play is its intricate plot involving trust and betrayal. However, Stoppard

extends the genre and subverts our expectations of it to dramatic effect. He uses

physics to achieve this.

The story revolves around Kerner, a Russian physicist who has been a

double agent since being ‘turned’ by Hapgood, a brilliant British agent and single

mother (played by Felicity Kendal). When doubts arise as to his loyalty (is he a

‘joe’ or a ‘sleeper’?) Kerner himself is unable to decide which side he is on. He

explains this to Hapgood’s boss (Nigel Hawthorne), a classics scholar whose

knowledge of atoms stops at Democritus, by describing a double slit experiment

(see page 119). Kerner concludes that the act of observing determines the reality.

This, apparently, applies to spies also.

There is a general uneasiness throughout the audience when this is

understood. Kerner compounds this discomfort with his own enthusiasm for

thrillers. “I only read spy stories” he declares. “Well they’re different, you know.

Not from each other naturally...” (Stoppard 1988: 47). Ordinary thrillers involve a

series of motives and clues that the audience can piece together like a jigsaw

puzzle. Hapgood is not like an ordinary thriller. The clues will not slot neatly in

place; the pieces seem to belong to more than one puzzle but they are

interchangeable nevertheless. Disrupting the genre in this way means that

Stoppard can examine issues surrounding the Cold War with greater subtlety than

he could otherwise. In Hapgood’s world (as in quantum mechanics), the answers

depend on how you ask the questions and why. This resonates with the audience’s

actual experience of the Cold War. Other spy thrillers, James Bond films for

instance, seem naïve in comparison.

Stoppard’s claim is that the machinations of the Cold War are essentially

paradoxical, not just opaque. It is not that the truth is hidden from us by the

complexity of international politics; there is no coherent way to make sense of the

situation. Quantum mechanics, though unfamiliar to most audiences, provides a

way to explain the significance of this new, stronger claim. The relation between

the observer and the reality in quantum mechanics also provides a novel way to

examine relations between the personal and the political.

Quantum mechanics is familiar enough to be a resource for writers but (for

the reasons discussed above with respect to Gut Symmetries) applying quantum

mechanics to a plot about spies goes against a common conception of the

‘otherness’ of science. Stoppard cannot simply drop scientific metaphors into the

dialogue – they need justification. All the physics in the play is explained by

Kerner. It is he (Kerner the character rather than Stoppard the author) who

invokes quantum mechanical metaphors. Again, this gets Stoppard over the

problem of cultural competence as the distinguishing characteristic of scientists

(see page 177). The fact that such imagery is available to Kerner in his efforts to

make sense of his condition helps to distinguish him from the other characters.

The plot also involves a complex exchange of briefcases that Kerner makes sense

of with reference to Euler’s solution to the Königsberg Bridges problem44. Here

too Kerner distinguishes himself by his access to a source of insight that is not

available to the other characters.
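Euler’s insight (footnote 44) rests on a simple parity argument, and for readers who want to see it worked through, the check can be sketched in a few lines. The sketch is mine, not the play’s or the thesis’s; the land-mass labels A–D are arbitrary.

```python
# Euler's parity argument for the Königsberg bridges: a walk crossing
# every bridge exactly once exists only if the graph is connected and
# at most two land masses touch an odd number of bridges.
from collections import Counter

# The seven bridges, each given as the pair of land masses it joins
# (A, B: the two river banks; C, D: the two islands).
bridges = [("A", "C"), ("A", "C"), ("A", "D"),
           ("B", "C"), ("B", "C"), ("B", "D"),
           ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(node for node, d in degree.items() if d % 2 == 1)
print(dict(degree))  # {'A': 3, 'C': 5, 'D': 3, 'B': 3}
print(odd)           # all four land masses are odd, so no such walk exists
```

All four vertices have odd degree, so the condition fails and no route over all seven bridges is possible: the ‘insight not available to the other characters’ is nothing more exotic than counting.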

Physical insight thus takes on two roles: Firstly it distinguishes Kerner

from the other characters and marks him out as the axis about which the whole

plot revolves. Secondly it gives the audience a new perspective on the subjects at

hand – the Cold War, betrayal, social values, etc. The implicit boundary between

science and drama means that physical insight can only be mediated in a way that

marks Kerner as an outsider. Physical insight cannot play the second role unless it

is made to perform the first role also.

Kerner himself stands at the boundary and is simultaneously the victim of

circumstance and the only character with any insight into his predicament. He is a

paradox yet he is also our only means of reconciling the paradoxes in the plot.

These are also the twin roles of quantum mechanics in the plot. We see parallels

between Kerner and quantum mechanics on the level of both the character and the

narrative.

Physics is used in the play to articulate the strong claim about the nature of

the Cold War (that it is bewildering because it is bizarre, not because it is

complicated). It also acts as a pivot about which a familiar genre is transformed

into a sophisticated medium more suited to ‘shades of grey’ than ‘black and

white’. These are novel uses of science in drama – ones that do not conform to the

expectations of an audience. The role of physics in Hapgood challenges the

common dichotomy of physics as an exact science on the one hand and politics as

a spectrum of opinion with no ‘right’ answers on the other. Hapgood denies the

validity of this dichotomy and instead brings out parallels between the behaviour

of electrons and the capriciousness of international relations. Thus Stoppard

obliterates one of the boundaries between science and politics (as well as some

boundaries between science and drama). However, Hapgood also makes use of

science in a way that does conform to the audience’s expectations for the genre.

44 Euler’s solution to this traditional problem heralds the birth of the mathematical theory of topology. In the Prussian city of Königsberg there were seven bridges; the problem was to find a walk that crossed every bridge exactly once (Euler proved that no such walk exists). A description of the problem can be found in Eastaway and Wyndham 1999: 12-15.

In ordinary spy stories, science is understood as the source of a

technological advance that gives one side leverage over the other and so threatens

the equilibrium. Hapgood is no different in this respect. Kerner’s physics is top

secret. He works at CERN (the European particle physics laboratory in Geneva)

on a project for the Lawrence Livermore National Laboratory (a US Department

of Energy laboratory in California that, in the late eighties, played a key role in

developing space weapons for the multi-billion dollar ‘Star Wars’ Strategic

Defence Initiative).

In this way, physics takes on a very traditional role in the plot. The

representation of physics fits into a traditional narrative: cutting edge physics is

justified principally by national security and is therefore secretive; physicists carry

much of the responsibility for the frightening situation the world finds itself in but

are naïve and easily manipulated by cynical military officials. This is a well-

established conception of physics and its relation to governments and the military.

The implicit conception of science is a commonplace that Stoppard can rely on

(i.e. it needs no justification and does not even need to be fully articulated to be

understood).

The novel uses of the boundaries between physics and society are

intensified by reinforcing this established conception of the relation of physics to

society in the plot (and thus reinforcing the traditional role of physics in drama).

Because the traditional conception is represented, Stoppard can be more

adventurous with his novel uses of physics. That is, he is able to manipulate one

of the boundaries between physics and drama because he keeps other boundaries

constant. The other, established boundaries legitimate the novel conception

offered in the play.

The way authors orient themselves to particular boundaries of science is

the most important creative choice they make when addressing scientific issues.

Anyone appropriating scientific knowledge for artistic purposes must choose

which boundaries to reinforce and which boundaries to challenge. Stoppard could

have chosen to challenge our common wisdom about the role of physics in the

Cold War; he could also have chosen to present a view of science that encoded a

dichotomy between exact science on the one hand and opinion on the other; but

Hapgood would have been a different play if he had chosen either course of

action.


It is a mark of its author’s insight that the play reflects the period at the

end of the Cold War with even more poignancy now than it did in 1988. It was

a bewildering time, and Star Wars was an important component of that bewilderment.

The double slit experiment – indeterminacy as a physical reality – is a very apt

metaphor for this political indeterminacy. There has always been a two-way

traffic between popular conceptual resources and those of science (as we saw in

the case of visual idioms in chapter 2) but Hapgood makes this explicit in a way

that reflects a shift in attitude towards science. The cover of an issue of Granta

devoted to science promises that its contents, “...invite us to understand ‘science’

not simply as the study of fact but also as another way, not unlike the novel, of

describing the mystery of the world.” (Granta Issue 16, Summer 1985) Reflecting

this move away from the perception of science as ‘the writing down of reality’

may be a conscious device. Stoppard’s major theme is the loss of absolute values

and the shift in perception this causes.

What is Science?

We have seen that the question ‘what is science?’ can be addressed in two

ways. On the one hand, we can try to objectively discern the independent

characteristics of scientific knowledge and the scientific community. On the other

hand, we can ask: given that people disagree about a) what constitutes scientific

knowledge and b) who can legitimately call themselves ‘scientist’ and c) what the

job of a scientist entails, how do people actually go about distinguishing science

from non-science? Loosely we can dub the first approach ‘philosophical’ and the

second approach ‘sociological’. (This should not be confused with the distinction

made earlier between the rhetorical strategies of invoking the question ‘what is

science?’ ‘philosophically’ or ‘polemically’ – see page 160.)

Rather than being a rival to attempts at philosophical approaches to

science, the concept of ‘boundary work’ offers a way around the problems of

addressing an essentially social activity metaphysically. It does this by providing a

clear way to distinguish the question, ‘what is science?’ from ‘what is science

taken to be?’


6 Conclusion

As we have seen, popular science texts are especially problematic for two

reasons: 1) there are fundamental philosophical issues involved in what they

represent prior to questions of how they represent, and 2) the resolution of these

philosophical issues has material implications for scientists and others. The

material implications of the philosophy of science relate to the origin of scientists’

authority. In some circumstances, it is in scientists’ interests to emphasise the

social origin of their authority (for instance, in distancing themselves from matters

with both political and technical aspects such as the BSE debacle – here scientists

disputed the authority they had acquired through politicians’ attempts to evade

responsibility, see page 174). At other times, it is in scientists’ interest to

emphasise the natural origin of their authority – that is, to claim that any authority

they have comes solely from a unique relationship with the real world.

Confusion arises from the conflict between the objective (asocial) aspects

of scientific knowledge and the social processes by which they are validated. In

popular science, the asocial aspects of scientific knowledge are emphasised. The

fact that the social construction of scientific knowledge is less evident in popular

accounts than in, say, papers in journals, is one reason why popular science itself

takes on a role in the social process of constructing scientific knowledge. (Papers

in journals explain why knowledge is reliable rather than simply describing the

world. See the comment about ‘narratives of nature’ and ‘narratives of science’ on

page 57.) Popular science, because of its ‘natural rhetoric’, can act to naturalise

ideas about the place of science and scientists in society. When scientific rhetoric

is appropriated to naturalise other social ideas we call this ‘scientism’. Popular

science approaches the social nature of science in a scientistic manner. Because of

the circularity of this notion of scientism, it is worth explaining how it works (once

more) and carefully disentangling questions about scientism from questions about

knowledge.

‘Science’ is often seen as the arbiter or ultimate authority in disputes, itself

dispassionate and beyond the sordid business of social interests, negotiation and

competition. Scientific knowledge can take on this role in certain disputes but the

negotiation of social interests is itself a fundamental part of science. (We need

here to be careful about the distinction between a priori definitions of science and


how scientific knowledge is validated in practice. Questions about what is ‘taken

to be’ scientific knowledge are too easily conflated with metaphysical questions

about truth, see page 149.)

It is by and through and because of social, discursive, practices that

scientific knowledge comes into being (whatever its exact relation to a putative

‘real world’ or to ‘truth’). For instance, the validity of Big Bang cosmology and

the speciousness of cold fusion were each established by social processes

(described by Jane Gregory 1998 and Bruce Lewenstein 1995 respectively). The

fact that these concepts are socially contingent does not affect their relation to the

real world – they may be true or false. The reason we can be confident that they

probably are true (the reason science gives us ‘reliable knowledge’) is because

they are conclusions reached through this particular social process. (See also the

discussion of the ‘degree zero in language’ on page 39).

The principal feature of the approach to popular science adopted here has

been a propensity to reject ‘common-sense’ notions of how the subject should be

approached. This propensity comes not (just) from bloody-mindedness but from

the recognition that common-sense is itself a key issue in popularisation and is

manifest in a range of ways. The problem with science that Lewis Wolpert

identifies (1992) is its failure to conform to common-sense – this is what makes it

hard to understand and even harder to do. The problem with Wolpert’s view is

that it locates in both common-sense and in science an immutable permanence

rather than a dynamic relation. What we actually see, especially in popular

science, is a struggle to modify common-sense. The result of intervening is

(generally) either that a scientific account is incorporated into what counts as

common-sense, or common-sense notions are further naturalised by association

with apparently indifferent and objective knowledge.

We see this ‘dialogue’ between common-sense and scientific knowledge in

Greg Myers’ account of the development and subsequent popularisation of

thermodynamics. The struggle to develop thermodynamics as a meaningful theory

led physicists to adopt commonplaces of Victorian society and build them into the

language they used to describe and explain natural phenomena. Thus, concepts

such as ‘work’ and ‘energy’, having been appropriated by physicists, now have

rigorous definitions and precise applications in physics. When thermodynamics


was subsequently invoked in popular contexts, it had the potential to naturalise

those same commonplaces (Greg Myers 1985).

The present account has explored the subject of popular science by making

common-sense notions of the subject itself the object of study. Thus, in addition to

subjecting popular science texts to analysis, we have additionally looked at how

the whole ‘project’ of popularisation is shaped by preconceptions about its

purpose; about its audience; about the coherence of science itself; and the way

communication operates. As we have seen, the subject is characterised by a

remarkably dominant perception of the purpose of popularisation and of the two

categories between which knowledge is assumed to ‘flow’.

Going back to first principles (taking nothing for granted) with respect to

both science and communication has affected this account of popular science in a

number of ways. It is the origin of the stress on theories of communication; it

explains the careful attention to the limits of metaphysical explanations; and it is

what allowed the adoption of an eclectic approach to existing models of

popularisation. Adopting fewer assumptions at the outset contributed immensely

to the insight gained. It facilitated the development of a new, robust approach to

‘boundary work’ that sheds light on the creative use of boundaries as well as

boundary disputes. It also led to a new understanding of the visual culture of

science and the effect of changing the context of a visual text.

The dominant view of popularisation, in contrast, is distinctly restrictive. It

is prescriptive about research in the field and it obstructs meaningful dialogue

with alternative approaches – effectively delegitimising them. It naturalises one

model of the popularisation process (that of knowledge diffusing from science to

the public) and conceals alternatives. We can see the value of an approach that

takes nothing for granted by looking at the detrimental effect of the assumptions

inherent in the dominant view of popularisation.

One particularly obstructive element of the dominant view is the built-in

assumption that the principal role of popularisation is pedagogical. In adopting

this assumption, the dominant view effectively conceals (and denigrates)

alternative roles taken on by science in popular contexts. The pedagogy

assumption also results in the spontaneous emergence of a hierarchy of popular

texts: popular books written by scientists seem more important and more central to

questions of science in popular culture than, say, allusions to scientific subjects in


contexts such as literature or advertising. According to the dominant view, the

latter barely count as popularisations at all. Although this perceived hierarchy has

a strong influence in structuring discussion of the public understanding of science,

it has no real foundation. Thus, both the hierarchy and the assumption that the

principal purpose of popular science is pedagogical have been rejected here.

Another assumption built into the dominant view is that the difference

between the categories ‘scientist’ and ‘non-scientist’ has a purely metaphysical

explanation. Social or cultural explanations for any differences are overlooked.

On the basis of the putative metaphysical explanation for the difference between

scientists and non-scientists, two categories of scientific knowledge emerge:

authentic knowledge and popularised knowledge. However, the boundary between

these is entirely arbitrary. Rather than positing a priori criteria by which

categories of scientific knowledge can be distinguished, a more powerful

approach is to place emphasis instead on the uses to which the knowledge is put.

As we have seen in this account, it is the perception of two coherent categories

(authentic and popularised science) rather than the categories themselves (which

have little substantive foundation) that is important to understanding

popularisation.

As with the categories ‘scientist’ and ‘non-scientist’, it is built into the

dominant view that the difference between ‘science’ and ‘non-science’ has a

purely metaphysical explanation. Social or cultural explanations for any

differences are overlooked. Whether or not there is a metaphysical explanation for

the difference between science and non-science, there are, nevertheless, significant

cultural influences at work in defining the categories. The dominant view of

popularisation obscures these. A more powerful approach is to explore the cultural

influences on the constitution of science whilst carefully ring-fencing the

metaphysical issues involved. For the purpose of understanding popularisation, it

is better to study how science is defined rhetorically in popular contexts (and

whose interests this definition serves) than to study, a) how the word is employed

in elite forums; or b) what science ‘must’ be in order to give us reliable

knowledge of the world.

Yet another assumption of the dominant view (that stems from the

assumption of a metaphysical explanation for the difference between science and

non-science) is that ‘science’ is a coherent term that can, in principle, be defined


objectively and coherently. In practice, we find this not to be the case. The

meaning of ‘science’ depends on the context in which it is invoked. That is, there

are multiple, conflicting notions of ‘science’. As a result, basing an account of

popular science on a single definition of science would be a mistake (and, in any

case, the choice would be rather arbitrary if it was popularisation rather than

epistemology we were interested in). Instead, we need a way of analysing the

various ways science is defined in popular texts without privileging one definition

above all others.

A set of assumptions about media and audiences restricts many accounts

of popular science. The dominant view of popularisation conceives the audience

for popular science as an undifferentiated mass: ‘the public’. Such a category

offers little scope for analysis – being so general as to be all but meaningless. To

the extent that ‘the public’ is meaningful at all, this is almost entirely due to

assumptions built into the category without explicit justification. Raymond

Williams’ observation that, “[t]here are in fact no masses; there are only ways of

seeing people as masses” (1958: 300) has taken a long time to filter down to

mainstream research into the public understanding of science. Much discussion

about the public understanding of science has centred on the question of who the

public is. This may be a red herring. We should concentrate instead on how ‘ways

of seeing people as masses’ frame discussion of popular science. This shift in

emphasis would be more fruitful than identifying either an all-purpose definition

of ‘the public’ or a definitive taxonomy of ‘publics’.

Perhaps the most restrictive aspect of the dominant view of popularisation

is the way it presupposes a linear, behavioural model of communication without

justifying it. While such models have their place in a repertoire of approaches to

communication, imposing a particular model uncritically curtails the types of

questions about popularisation that can be addressed. The way the dominant view

naturalises one approach to communication and delegitimises alternative ones has

a deleterious effect on conclusions drawn by mainstream research in the public

understanding of science as well as reducing the scope for productive dialogue in

the field. In the present account, a ‘transmission’ model of communication would

not have provided insight into the particular issues addressed. However, as this

account was at pains to explain the choices made rather than relying on built-in


assumptions about communication, its relation to alternative accounts, its

limitations and its strengths should all be apparent.

To summarise, the main problem with the dominant view of popularisation

(and, by extension, much mainstream discussion of the public understanding of

science) is its claim to universality. It is insidious because it leaves no room for

alternatives. It arbitrarily exalts some popular science texts and analytical

techniques at the expense of others. In so doing, it stymies meaningful dialogue

about popular science and the public understanding of science.

The remedy for this situation is not a wholesale rejection of the dominant

view of popularisation. (Under limited circumstances, it can provide a useful

framework for both interventions and quantitative research into the public

understanding of science.) The remedy is, instead, more effort towards making the

assumptions on which the dominant view is built explicit. This would both make

the conclusions of mainstream research more robust and allow fruitful dialogue

with alternative approaches (such as those developed in this account).

One consequence of adopting fewer assumptions (and being explicit about

any assumptions that are adopted) is increased flexibility over theoretical

approaches. In the present account, three different models of popularisation have

been combined coherently. Together they offer a more powerful alternative to the

binary distinction between popular and professional that is a feature of many

accounts of popular science. The first model was developed from Hilgartner’s

critique of the dominant view of popularisation: popular science can be

understood as a stream or spectrum where texts with narrow audiences are located

at one end and texts with broad audiences at the other (Hilgartner 1990). The

questions this raises include what happens to texts that move up or down stream

(for instance, how does placing an image produced in the course of scientific

research in a popular context affect the way the image is made meaningful?).

The second model emerges from Lewenstein’s response to Hilgartner’s

critique: all sites for communication in science can be understood as forming a

web of which both ‘popular’ contexts and ‘professional’ contexts form nodes. The

centrality of any particular text or genre depends on a variety of factors – the

pattern of the web can change. Thus, ‘downstream’ (popular) genres can at times

play a central role in the development of cutting-edge science and ‘upstream’

(professional) genres can be the principal source of information for groups of non-


scientists (Lewenstein 1995). Whilst the stream metaphor gives us a handle on the

effect of changing the context of a science text, the web model is a more realistic

view of the ‘landscape’ of science texts.

The third model invoked in the present account emerged from the

realisation that popular science plays an important role in defining science itself.

The view of popular science as a site for negotiation (the view of popular texts as

boundary proposals) complements and reconciles these two models as well as

explaining the appeal of the binary distinction between professional and popular

(reliable and authoritative vs. inauthentic and corrupted science).

The strength of the stream metaphor is that it gives us a handle on the

difference between contexts for science: it distinguishes contexts according to the

broadness or narrowness of their implied audiences. As we saw in

chapters 2 and 3, this organising principle has considerable value in the analysis

of scientific images. On the other hand, it fails to describe the actual significance

of texts either to the research process or to broader audiences. The hierarchy of

contexts in the stream view does not necessarily reflect their role in any particular

endeavour (scientific or non-scientific).

For an understanding of the actual role of a text, we can turn to

Lewenstein’s web model with its recognition that the significance of a text or its

context is determined by local rather than universal criteria. This model is a good

foundation for many questions about the role of popular science at particular

moments – particularly as it prompts us to understand the specific circumstances

under which a text is made meaningful. However, although it reflects the fluidity

of contexts for science, the web model does not adequately describe the role of

media in distinguishing scientific knowledge from everything else and how

science can be invoked to ‘naturalise’ discourse. For an understanding of these

issues, we turn to the model of popular science that understands it as a forum for

negotiation. This draws on the concept of boundaries developed in

chapters 4 and 5 and prompts us to understand science texts as essentially

dialogical – that is, as addressing existing proposals and anticipating responses.

The Public Understanding of Science

This thesis has not attempted to find, “the most effective methods to use to

get messages across to a wide variety of target audiences” (Bodmer and


Wilkins 1992: 7). Indeed, this account of popular science has challenged the very

assumptions on which such a project makes sense. For this reason, it is not public

understanding of science research as it would be understood by the majority of

public understanding of science activists. Increasing interest in the public

understanding of science since the late 1980s was one of the most important

factors motivating the present research into the popularisation of physics. On the

subject of the public understanding of science, the aim of this account was not to

provide arguments for or against it (for which see Durant 1997 Why Should we

Promote the Public Understanding of Science? and Trachtman 1981 The Public

Understanding of Science Effort: A Critique). Rather, the intention was to

simultaneously broaden and demystify the concept. As it is currently constituted,

the public understanding of science movement is too parochial to provide an

intellectual framework for research into popular science. This is mainly because

of the relations between research and intervention in the public understanding of

science.

Walter Bodmer and Janice Wilkins believe that research into the public

understanding of science should be conducted as a service to people, “actively

[…] trying to improve the public understanding of science” (Bodmer and

Wilkins 1992: 7, see page 29). The implication of this assertion is that the aims

and outcomes of such activity are well worked out and unproblematic. The public

understanding of science movement sees the communication of science as the

distribution of a universal unit of exchange (reliable, universal knowledge). This

unit of exchange is perceived to operate much like money – the more common

currency people have the better the flow of trade and the more an individual or

institution has, the more powerful they will be. But scientific knowledge is not

necessarily empowering in the straightforward way that money is. It generally

comes with a prescription for use – legitimate and illegitimate ways of making it

meaningful. Unless the recipients can make it meaningful for themselves, they are

not genuinely empowered by scientific knowledge.

This is rather like financial aid a developing country may receive. If the

aid were tied to particular purchases or conditional on economic or political

restructuring then we would not say that the recipient country was empowered by

the aid (though perhaps we would still expect it to be grateful). Nor should we

assume that all communication of scientific knowledge is empowering. It can


instead serve simply to naturalise a particular point of view. This is, of course, one

of the functions of science in popular culture. Whilst scientific knowledge can act

as an impartial arbiter in disputes (a common currency) this is generally not how it

is invoked.

With our increased insight into the role of popular science texts as forums

for negotiation, we are able to question the apparent munificence of public

understanding of science initiatives: who wants us to have which view of science

and why? The pedagogic assumption disguises all motivation – it prevents us from

even asking these questions. Being empowered by scientific knowledge requires

that we can challenge (or at least recognise) the ‘preferred reading’ it comes with.

The ‘scientific knowledge as universal unit of exchange’ assumption,

along with, what we called above, the ‘pedagogic’ assumption of the dominant

view of popularisation, explains why public understanding of science activists

may perceive their aims as unproblematic. It also explains why they fail to

recognise tensions and contradictions in the outcomes they seek (particularly with

respect to the tension between public relations and democratic empowerment –

see page 3). The movement cannot help but perceive any public understanding of

science activity as intrinsically good and certainly beyond individual or

institutional interests.

By choosing to make the assumptions inherent in the modern conception

of public understanding of science themselves the objects of study, this account

has been able to reveal a far richer view of popular science. Although a collection

of facts may be an aspect of science, the meaning of science is far more complex

and the uses to which scientific facts are put more various than the public

understanding of science movement generally acknowledges. The present research

provides a framework in which the complexity of science can be explored.

Perhaps even more importantly, the present research reveals pedagogy to be just

one amongst many functions of popular science.

The relation of public understanding of science research and actual

intervention in public understanding has always been problematic. The problems

stem from two sources: 1) a naïve conception of both communication and science;

and 2) confusion about what the aims and outcomes of interventions should be.

These twin sets of problems compound each other – an impoverished conception

of science may be sufficient to conceive a highly specific outcome and this


outcome is itself justified with reference to the conception of science adopted.

Durant, Evans and Thomas make an additional point about the public

understanding of science,

The phrase ‘public understanding of science’ is multiply ambiguous.

Who are ‘the public’? What is ‘understanding’? What is ‘science’?

Durant, Evans and Thomas (1992: 161)

Perhaps we should now distinguish research into ‘effectiveness’ problems

in the public understanding of science, which involve clearly delimited aims and

models (that is, research related to individual projects) from more general research

into public understanding of science1. In each case, explicit attention to

foundational issues is essential for the reasons outlined in the introduction.

General research into popular science can inform effectiveness problems but has a

much broader remit. Understanding popular science requires an eclectic approach

as we are forced to reconcile explanations and analyses from diverse fields.

Indeed, popular science is the site where diverse fields – sociology, philosophy,

history, linguistics, and others – intermingle most productively. In this way,

research into popular science is the apotheosis of the cultural turn in science

studies. And the heart of science studies (or media studies) is where such research

belongs; not in the service of a poorly thought-out public relations exercise.

The relationship between public understanding of science research and

public understanding of science interventions should be like the relationship

between media research and media practice. In both cases, the range of research

questions is broad, with marketing exercises at one end and critique at the other.

What distinguishes media studies in general from the public understanding of

science movement in particular is clarity of purpose. (As the aims of the public

understanding of science movement are taken for granted, there is no perceived

need for clarity about the purpose of public understanding of science research.)

When we do not start with the assumption that the principal role of popular

science texts is pedagogy, questions about the public understanding of science are

transformed. This is especially true when we examine how the meaning of

1 John Durant made a related point about “PUS as a Field of Critical Enquiry” during an ‘electronic conference’ on public understanding of science (conducted by e-mail and through a Web site) in February 1998. The full list of topics discussed is available at PUS Forum Topics – General Discussion, http://www.counterbalance.org/pusforum (August 2000).


‘science’ is actually determined in a social setting. As we have seen, whether or

not there is a ‘correct’ (objective) definition of science and the limits of scientists’

authority and responsibility, establishing the boundaries of science is still a

practical problem for both scientists and other groups.

Efforts to modify a view of science (for instance, to change the perceived

role of scientists during food-scares) mobilise non-pedagogical texts more readily

than pedagogical ones. That is, the public’s understanding of a scientific issue is

affected more by the texts that frame the debate than those that deliver facts

within an already established framework. The public understanding of genetically

modified organisms (GMOs), for instance, comes not from a technical

understanding of how genes are expressed but from understanding the boundaries

of responsibility and the types of questions for which scientific knowledge is

relevant. These issues are subject to continual modification (even if, at any

moment, they seem to be fixed). It is not the case that widespread knowledge of

the technical issues involved would, by itself, resolve questions with scientific

content. The controversy over GMOs would not disappear if everybody in the

country had a degree in biochemistry.

The pedagogic assumption trips even sophisticated advocates of the public

understanding of science into conflating facts about the world with the

institutional imperatives of science or other interested groups. As we have seen

(page 195), Walter Bodmer argued that to understand genetics is to understand

that it is necessary to patent genes.

Towards a Dialogical Account

This account has addressed two related but largely independent themes.

On one hand, there were questions about visual communication in science and on

the other hand, there were questions about the relations of science itself to

adjacent institutions and interests (which were discussed in terms of ‘boundaries’).

Both themes emerged from the shift in emphasis away from the content of

individual texts towards the role of context in making content meaningful.

In the case of visual texts, this shift allowed us to see how images can take

on very different meanings in ‘upstream’ and ‘downstream’ contexts. It also gave

us insight into the significance of the difference. Further, the emphasis on context


revealed a dialogue between the visual culture of science and the wider visual

culture in which scientific practices are embedded.

In the case of boundaries, the emphasis on context also allowed us to

recognise roles that popularisations take on that are disguised if we concentrate

wholly on content. In particular, by studying their ‘dialogical’ nature we saw how

popular texts play a role in defining science itself and thus established their

significance to philosophical and sociological questions about scientific

knowledge. The way texts intervene in questions about what science is taken to

be was dubbed ‘boundary work’ – we studied the various ways popular texts are

employed to modify the boundaries of science and how the boundaries operate.

Reciprocally, we were able to examine how the (dynamic) boundaries of science

are themselves used in popular texts to shape the narrative.

Whilst these two main themes are related by their emphasis on context,

they have diverged in the present account. This was necessary to fully explore the

implications of each for our understanding of popular science. However, having

successfully developed a coherent set of approaches for each of the two themes,

there is now scope for bringing the two strands together. What this would involve

has yet to be fully worked out but a starting point is the notion of dialogue

borrowed mainly from Vološinov in this account (see page 26) and articulated in

more depth elsewhere in works of the ‘Bakhtin Circle’2.

The account of boundary work revealed the wider debates that popular

texts address themselves to (whilst, ostensibly, their scope is limited to an account

of ‘nature’). The account of scientific imaging revealed (amongst other things)

how visual texts are shaped by the ‘global’ and ‘local’ imperatives of an imaging

community. Reading a scientific image involves orienting oneself to the

community in addition to the text itself. That is, as with boundary work, reading

scientific images involves a process of demarcation – reading images (implicitly

or explicitly) raises questions such as, ‘what counts as visual evidence?’ and ‘how

does one distinguish ‘information’ from ‘aesthetics’?’ For both boundary work

and the visual culture of science, a dialogical account is appropriate. In addition,

2 See Bakhtin 1990. See also Barker’s ‘dialogical account of ideology’ (In Barker 1989). Vološinov is (probably) a nom de plume of Bakhtin. The Bakhtin Centre at the University of Sheffield is a useful resource for bibliographical and critical work on the Bakhtin Circle. See: Bakhtin Centre Home Page: http://www.shef.ac.uk/uni/academic/A-C/bakh/bakhtin.html (November 2000).


both themes, as they have been explored here, support the intuition that there is

something inherently dialogical about science. Whilst this account was at pains to

disentangle various strands of science studies and communication theory, the

insight from literary approaches like that of Bakhtin may prove to be important in

weaving them back together.

In chapters 4 and 5 we distinguished the question, ‘what is science?’ from

‘what is science taken to be?’ (see page 234). The latter question is more relevant

to the issues raised in this account. Both the account of the visual culture of

science and the account of boundary work support the idea that it may be the more

philosophically important question also. Should there be any doubt, I suspect that

emphasis on the dialogical nature of science will quash it.

A comprehensively dialogical account of science in popular culture is

some way off (as indeed is understanding what such an account would entail).

There are, however, more direct ways of integrating the principal themes in this

thesis (which may offer a first step towards the greater goal of a dialogical

account). In several places, the twin accounts almost meet. One such place is the

discussion in chapter 5 on the relations of art and science (in particular

pages 212-218). This involved a careful analysis of how exhibitions such as Inside

Information at the Two 10 gallery address boundary issues. There is scope for

such analysis to be further augmented by considering the issues raised in

connection with the available readings of scientific images – the way viewers are

empowered or not to distinguish arbitrary from independent elements for instance.

In chapter 3, the whole discussion under the heading Towards Taxonomy

(pages 91-115), especially the discussion of visualisation and the discussion of the

relations of art and science, could be extended by applying the ideas developed in

chapters 4 and 5. Understanding types of scientific image and the roles they take

on in various contexts is closely related to understanding the roles science itself

takes on and its relation to non-science.

Summary

Having discussed the account and the conclusions drawn from it in general

terms, we are now in a position to summarise the contribution the thesis makes to

questions about the popularisation of science. Firstly, the thesis untangled many of

the problems that regularly stymie accounts of communication in science. This


was achieved by recognising the importance of clarity about theoretical

assumptions built into an account. That is, the thesis identifies the central place of

epistemological questions and theories of communication in questions about

popularisation and the public understanding of science.

Clarity about fundamental issues facilitated a coherent account of the

visual culture of science and what it means to popularise scientific images.

Accounting for non-verbal communication in science is not trivial. Images have

received little attention in the literature up to now (see note 38, page 50) because

the questions they raise have appeared intractable. Combining historical insight

with a model that relies on a distinction between arbitrary and independent

elements of pictures along with a distinction between primary and secondary

qualities was the basis of a remarkably productive framework for addressing a

whole range of questions. The sheer amount of insight obtained (summarised on

pages 54-57) is testimony to the robustness of the framework developed here. In

particular, the account has given us an understanding of what it means to

popularise a scientific image – it is not a process of translation but a modification

of the available readings on offer. The account also revealed how scientific

communication and scientific practices (and indeed, scientific knowledge) are

deeply embedded in wider culture.

Clarity about fundamental issues also revealed many functions of popular

science that would otherwise be overlooked. In particular, we saw how popular

science texts act as the forums in which what is to count as ‘science’ and who is

allowed to call themselves ‘scientist’ are negotiated. Indeed, careful readings of

popular science texts revealed these categories to be dynamic in the first place.

Concentrating on popular texts allowed us to reformulate the sociological

concept of ‘boundary work’, placing it within a more robust theoretical

framework. In turn, concentrating on boundaries gave us a handle on popular texts

that deal with science but do not fall into the usual definition of ‘popularisation’

(see page 210ff). Thus, the thesis expands the category ‘popular science’,

providing much more scope for insight into the relations of science and the culture

in which it is embedded.

Another consequence of the approaches to popular science developed here

concerns the topic itself. With clarity about fundamental issues, questions about

popular science seem less peripheral. The thesis has effectively relocated the


study of popular science, placing it at the heart of science studies. That is, it has

revealed the centrality of these issues in relation to science studies as a whole.

This is discussed in the final section.

Images, Scientists, Culture and the World

Popular science is a much richer topic than the public understanding of

science movement gives it credit for. It is not just the place where scientists have

the opportunity to share their intellectual wealth with their impoverished brethren

or the means by which pure scientific knowledge gets horribly mangled. As we

have seen, it is one of the most important sites for determining what science

actually is. It is also the place where many of the principal concerns addressed by

science studies come together coherently. For instance, questions about the

relation of scientific knowledge to a putative real world and questions about its

validation by social processes come together even more naturally in studies of

popular science than they do in laboratory studies, say. The construction of

popular science as a ‘problem’ by the public understanding of science movement

is a red herring that obscures the topic’s centrality in science studies.

Scientists (considered as a whole) are extremely proficient and articulate

communicators. They have to be. The arguments they have to make and the

subjects they address are often subtle and usually a long way from everyday

experience. Scientists have to be conversant with a wide variety of techniques

(most of them unique to science) that are used to represent data and construct

explanations. Of these, most impressive are the graphical techniques for

presenting data.

We have seen how scientists adopt visual idioms from the visual culture in

which they are embedded, apply them to new problems of representation and, in

so doing, expand the repertoire of visual techniques available. The rhetorical

power of scientific images comes from a combination of their reference to an

objective real world and judicious use of ‘arbitrary elements’. In modern science,

pictures form part of an argument rather than being the independent evidence on

which an argument is based. Their role is well understood in professional contexts;

thus judicious use of ‘arbitrary elements’ and subtle connotations are not

disingenuous. However, so complex are the resulting texts that they provide

enormous scope for interpretation. The ways they are made meaningful depend


(crucially) on the context they appear in. Alternative readings of scientific images

are not (necessarily) mis-readings. As complex polysemic texts, scientific images

can take on a wide variety of roles.

We have seen how the relation of imaging techniques to human vision

determines both the flexibility scientists have in representation and the scope for

utilising them. This raises interesting epistemological questions itself. As

scientists develop yet more sophisticated ways to collect and render data, and the

ingenuity with which they appropriate cultural norms and practices to their own

ends increases, data become increasingly ‘mediated’. What then are the

implications for scientific knowledge? And what will be the role of human

physiology and visual culture? Is their centrality to scientific visual rhetoric

limiting? How much of the world can we really know?

This account has stressed the futility of ‘final words’ on science or

communication in science. We have also seen how such prescriptions or ‘pithy

definitions’ of science are used rhetorically in pursuing social goals rather than

opening up questions about science for analysis. The account has emphasised the

importance of reflexivity in all meaningful discussion of science. It denies the

validity of the putative ‘common-sense’ that many commentaries on the public

understanding of science appeal to. The universal validity of scientific knowledge

compounds the analysis of scientific texts rather than simplifying their exegesis.

On a positive note, several strategies for addressing these problems have

been developed. It is in popular science that all the themes explored in science

studies come together. One consequence of this is that it demands theoretical

flexibility and an eclectic approach to science. We are denied the luxury of

understanding popularisation from a single disciplinary perspective but, as is

increasingly evident, single disciplinary approaches to science are untenable

anyway. For instance, it is impossible to address scientific knowledge

philosophically without also understanding it from a social perspective. If looking

at popular science makes this clearer then so much the better. The perennial

conflict between the universality of scientific knowledge and its construction in a

social context has complicated the analysis of popular science texts but the

unexpected consequence has been greater insight into the conflict itself and new

ways to tackle it.


Appendix 1: Imaging and Visualisation in the Science Citation Index

Table 4  Imaging and Visualisation in the Science Citation Index

                                         visuali*                             imag* (not imagination)
Year       Journals  Issues   Articles   in title  %papers  any field %papers  in title  %papers  any field %papers
1981       4041      32224    599729     273       0.046%   -         -        2088      0.35%    -         -
1982       4253      33045    628569     337       0.054%   -         -        2748      0.44%    -         -
1983       4456      34526    672416     333       0.050%   -         -        2811      0.42%    -         -
1984       4430      34090    674351     309       0.046%   -         -        3267      0.48%    -         -
1985       4532      35399    708724     317       0.045%   -         -        4190      0.59%    -         -
1986       4444      34576    717416     382       0.053%   -         -        4266      0.59%    -         -
1987       4366      34408    710086     377       0.053%   -         -        4483      0.63%    -         -
1988       4398      34572    705729     368       0.052%   -         -        4921      0.70%    -         -
1989       4378      35505    654823     401       0.061%   -         -        4435      0.68%    -         -
1990       4455      36393    689625     347       0.050%   -         -        5176      0.75%    -         -
1991       4506      36479    695687     366       0.053%   2502      0.36%    5355      0.77%    14307     2.06%
1992       4521      37889    741535     438       0.059%   2830      0.38%    6410      0.86%    17007     2.29%
1993       4550      37622    754302     474       0.063%   2904      0.38%    6064      0.80%    17449     2.31%
1994       4787      39764    798219     536       0.067%   3276      0.41%    6947      0.87%    20402     2.56%
1995       5302      42997    853469     633       0.074%   3769      0.44%    8405      0.98%    23185     2.72%
1996       5382      44188    901981     666       0.074%   3981      0.44%    9087      1.01%    25754     2.86%
1997       5549      45728    925490     762       0.082%   4228      0.46%    9253      1.00%    27271     2.95%
1998       5648      47935    957080     766       0.080%   4403      0.46%    9351      0.98%    28736     3.00%
1999       -         48933*   941136*    768*      0.082%   4590*     0.49%    9538.5*   1.01%    30569*    3.25%
Sept. ’99  5833      32622    627424     512       0.082%   3060      0.49%    6359      1.01%    20379     3.25%

(‘in title’ = word appears in the article title; ‘any field’ = word appears in the
title, abstract or keywords. Searches of this kind are only possible for articles
indexed from 1991.)

Values marked with an asterisk are projected figures. Data were available from
January to August 1999 inclusive (bottom row). The projected values were
calculated simply by multiplying the partial-year counts by 12/8 (since we have
data for 8 of 12 months).
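As a worked check on these figures, the projection can be reproduced in a few lines of Python. The counts are taken from the Sept. ’99 row of Table 4 and the scaling is exactly the 12/8 factor described above:

```python
# Projection used in Table 4: full-year 1999 figures are estimated from
# January-August counts by scaling by 12/8 (data covered 8 of 12 months).
partial_1999 = {
    "issues": 32622,
    "articles": 627424,
    "visuali* in title": 512,
    "imag* in any field": 20379,
}

def project_full_year(count, months_observed=8):
    """Scale a partial-year count up to a 12-month estimate."""
    return count * 12 / months_observed

projected = {k: project_full_year(v) for k, v in partial_1999.items()}
print(projected["articles"])           # 941136.0
print(projected["visuali* in title"])  # 768.0
```

These reproduce the projected 1999 values in the table (30569 for imag* in any field is the rounded form of 20379 × 1.5 = 30568.5).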

The Science Citation Index is a bibliographic database supplied by the

Institute for Scientific Information (ISI). The interface is provided by BIDS (The

Bath Information Database Service). The service is supported by the JISC (Joint

Information Systems Committee) of the Higher Education funding bodies.

The interface allows one to search for words that appear in an article’s title

only or for words that appear in the article’s title, abstract or list of keywords. (For

articles indexed before 1991 it is only possible to search titles). ‘Wildcards’ may

be used in BIDS searches allowing one to search for all articles that contain

‘visualisation’, ‘Visualization’, ‘visualise’, or any other word that begins ‘visuali’.

Words may be excluded from a search. The search for ‘imag*’ found articles with

‘imaging’, ‘image’ or similar words in their title, abstract or keywords but

excluded articles with the word ‘imagination’.
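The effect of these wildcard and exclusion rules can be illustrated with a toy search. This only emulates the logic described above, not the BIDS query syntax, and the records below are invented examples:

```python
import re

# Emulation of the two searches: 'visuali*' matches any word beginning
# 'visuali'; the imag* search matches 'imaging', 'image', etc. but
# excludes any record containing the word 'imagination'.
records = [
    "Scientific visualization of flow fields",
    "Visualising protein structures",
    "Magnetic resonance imaging of the knee",
    "Image analysis in astronomy",
    "The role of imagination in discovery",
]

visuali = [r for r in records if re.search(r"\bvisuali\w*", r, re.IGNORECASE)]
imag = [r for r in records
        if re.search(r"\bimag\w*", r, re.IGNORECASE)
        and not re.search(r"\bimagination\b", r, re.IGNORECASE)]

print(len(visuali))  # 2
print(len(imag))     # 2 (the 'imagination' record is excluded)
```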


The values for the total number of journals, issues and articles indexed were

supplied by the BIDS help-desk.

[Figure: two graphs, ‘visuali* in title, abstract or keywords’ and ‘imag* in title,

abstract or keywords’, each plotting the number of papers and the percentage of

all papers for each year from 1991 to 1999.]

Figure 19 Visualisation and Imaging in the Title, Abstract or Keywords (Graph)

Appendix 2: Scanning Tunnelling Microscopy and Atomic ‘Corrals’

Electrons in atoms are bound tightly to the nucleus by electrostatic forces.

In bulk metals, some electrons are free to move between nuclei but they are,

nevertheless, bound tightly to the metal as a whole. For an electron to escape from

an atom it is necessary to give it sufficient energy to get over the ‘barrier’

presented by the electric field of the nucleus. However, under certain

circumstances, an electron can find itself beyond the clutches of the nucleus

without ever having had sufficient energy to get ‘over the barrier’. This

phenomenon is called ‘tunnelling’1. In broad terms, tunnelling can be explained

by the wave-like nature of electrons. Part of the electron wave extends beyond the

influence of the nucleus, which means that there is a finite probability that you can

find the electron there.

Scanning tunnelling microscopy uses a stylus that can be moved in all

directions with extreme precision. The stylus contains a material, known as a

piezoelectric crystal, that changes shape when a voltage is applied across it. The

tip of the stylus is brought down to the surface and a small voltage is applied

between the surface and the stylus. When it is sufficiently close to the surface,

electrons can escape from the surface atoms to the tip by tunnelling2. This results

in a measurable current (flow of electric charge) in the stylus. For the IBM

pictures, the tunnelling current involved was equivalent to about 10 billion

electrons jumping from the surface to the tip every second. The rate at which

tunnelling takes place depends very strongly on the distance between the tip and

the surface (Weisendanger 1994: 109-113). Therefore, by measuring the

tunnelling current, one can calculate the distance between the tip and the edge of

the surface atom. However, what ‘surface’ and ‘edge’ mean on scales of the order

1 The terms ‘barrier’ and ‘tunnelling’ are figurative though the situation is similar to the problem faced by a prisoner confined by a high wall. One way to escape the prison would be to obtain enough energy to be able to jump over the wall. However, tunnelling through the wall would allow the prisoner to escape even without having enough energy to jump. In this case, the field confining the prisoner is gravity.
2 Non-technical accounts of scanning tunnelling microscopy can be found in Binnig and Rohrer 1995 and in Von Baeyer 1993 Chap. 4, Images of Atoms, pp 59-76.

254

of electron wavelengths is hard to define. We should think of the surface of metals

as rather fuzzy. Once past the top layer of atoms, the chance of finding an electron

diminishes rapidly3 yet there is still a finite density of electrons within an

ångström4 or so of the top layer.
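The strength of the distance dependence can be illustrated with a simple exponential model of the tunnelling current. The decay constant below is a typical, assumed value for a metal surface, not a measured figure from the IBM experiment:

```python
import math

# Illustrative sketch: the tunnelling current falls off roughly
# exponentially with tip-surface distance, I = I0 * exp(-2*kappa*d).
KAPPA = 1.0  # decay constant in inverse angstroms (assumed, illustrative)

def tunnelling_current(distance_angstrom, i0=1.0):
    """Relative tunnelling current at a given tip-surface distance."""
    return i0 * math.exp(-2 * KAPPA * distance_angstrom)

def infer_distance(current, i0=1.0):
    """Invert the exponential model to estimate distance from current."""
    return -math.log(current / i0) / (2 * KAPPA)

# Moving the tip out by just 1 angstrom drops the current to ~13.5%
# of its value, which is why the current pins down the distance so well:
ratio = tunnelling_current(5.0) / tunnelling_current(4.0)
print(round(ratio, 3))  # 0.135
print(round(infer_distance(tunnelling_current(4.0)), 6))  # 4.0
```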

A scanning tunnelling microscope makes a series of measurements along a

straight path and then another series of measurements along another path just

below the first and so on until a whole patch of surface has been scanned. STMs

are most commonly used in ‘constant current imaging mode’. As the stylus scans

across the surface, the current between the tip and the surface is kept constant by

minutely adjusting the height of the tip. The data that are collected are the heights

of the tip at each point along each line of the complete scan – a collection of

hundreds or thousands of individual height measurements. (The raw data are the

voltages across the piezoelectric crystals in the stylus. From measurements of

these voltages the height of the stylus is inferred.) This collection of height

measurements is then represented graphically and the result is a scanning

tunnelling micrograph.
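The final step, turning the grid of height measurements into a picture, can be sketched as a simple mapping from heights to grey levels. This is a toy stand-in for the real rendering pipeline, which offers far more choices of palette and shading:

```python
# Toy rendering of a constant-current STM scan: a grid of tip heights is
# normalised and mapped to 8-bit grey levels, the simplest way an image
# can be produced from the height data.
def heights_to_grey(heights):
    """Map a 2-D list of tip heights to grey levels in 0..255."""
    flat = [h for row in heights for h in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo or 1.0  # avoid division by zero on a flat scan
    return [[round(255 * (h - lo) / span) for h in row] for row in heights]

# A tiny 2x3 'scan' (heights in arbitrary units):
scan = [[0.0, 0.5, 1.0],
        [1.0, 0.5, 0.0]]
print(heights_to_grey(scan))  # [[0, 128, 255], [255, 128, 0]]
```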

The IBM pictures show 48 iron atoms arranged on the surface of a crystal

of copper. The researchers placed each iron atom individually using the stylus of

the scanning tunnelling microscope5. Atoms are made up of a positively charged

nucleus surrounded by negatively charged electrons. In metals, electrons can

move in the spaces between nuclei. The ripples in the picture are caused by

electrons at the surface of the copper crystal. Electrons in copper (and other

‘noble’ metals) can become trapped at the surface and behave like a

two-dimensional electron gas (Zangwill 1988). The surface electrons scatter when

they collide with the iron atoms (Crommie, Lutz and Eigler 1993; Heller,

Crommie, Lutz and Eigler 1993). The researchers describe the arrangement of

iron atoms that they built as a ‘corral’ because electrons within the circle are

reflected inwards and (partially) prevented from escaping.

3 The electron wavefunctions extend into the vacuum (above the ‘surface’) but decay exponentially.
4 1 ångström = 10⁻¹⁰ metres (= a tenth of a millionth of a millimetre).
5 For details of the manipulation technique see Eigler and Schweizer 1990.

Appendix 3: David Phillip Anderson

The following are some edited comments from David Anderson on the

problems of visualising side looking radar and altimetry data from the Magellan

mission.

Our critique of the JPL images addressed three areas:

1. Orange colouring

2. Radar clinometry

3. Vertical exaggeration

As an aesthetic aside, we thought the jet black skies were misleading to a

more general audience.

1. Orange Colouring

The Soviets had a series of ‘Venera’ probes prior to the Magellan project,

did some radar mapping of a small part of the Northern Hemisphere, and managed

to land and return some images from the surface. Colour analysis (I believe done

at USGS in Flagstaff, Arizona, memory fades...) suggested that the light reaching

the surface was red-orange as a human might perceive it, and this became the

justification for the colouring of the surface in all subsequent JPL images. The

data are, of course, radar reflections and as such contain no colour information.

There are times of the day and locations here on Earth in which

measurements of the ambient lighting would tend toward the red/orange,

especially at sunrise and sunset. Humans, however, do not see our planet as

candy-orange. Quite the contrary – red objects viewed under the red light of a

photographic darkroom appear not red but rather white. Green objects appear

black, blue objects appear grey, but nothing really looks red under a red light.

The geologists at the time were fairly sure that most of the surface of

Venus is in fact basalt, the most common rock in the solar system and, on Earth at

least, normally grey or brown. Given that the data contain no actual colour

information, a reasonable solution might have been images in black and white and

grey-scale. My own belief is that the JPL team rejected that option because it was

not sufficiently sexy for media consumption.

Our decision was to use a generic brown basalt base colour, and modulate

that tone with the intensity of radar reflections from the planet surface. Thus


where the reflections are low intensity the tone is modulated toward black, and

where the reflections are high it is modulated toward tan. In this sense our choice

was as arbitrary as theirs, with the caveat that no serious planetary scientist

believes the surface of Venus is orange. As long as we are guessing, let us make it

an educated guess based on what we know about the visual appearance of basalt.

Otherwise we should probably just go with black and white and leave it to the

imagination of the viewer.
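Anderson’s colour scheme can be sketched as a simple mapping from radar intensity to colour. The RGB values below are assumptions for illustration, not the team’s actual palette:

```python
# Sketch of the colour scheme described above: a generic brown basalt
# tone is modulated by radar-reflection intensity, sliding towards black
# for weak returns and towards tan for strong ones.
BLACK = (0, 0, 0)
BASALT_BROWN = (110, 86, 60)   # assumed base tone
TAN = (210, 180, 140)          # assumed bright-return tone

def lerp(a, b, t):
    """Linear interpolation between two RGB triples, t in [0, 1]."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def radar_to_colour(intensity):
    """Map normalised radar intensity (0 = dark, 1 = bright) to RGB."""
    if intensity < 0.5:
        return lerp(BLACK, BASALT_BROWN, intensity / 0.5)
    return lerp(BASALT_BROWN, TAN, (intensity - 0.5) / 0.5)

print(radar_to_colour(0.0))  # (0, 0, 0)
print(radar_to_colour(0.5))  # (110, 86, 60)
print(radar_to_colour(1.0))  # (210, 180, 140)
```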

2. Radar Clinometry

The probe generated several data sets simultaneously from radar

reflections, altimetry measurements, ambient energy, and gravity anomalies.

These sets are all at different resolutions. In particular, each data point from the

altimetry instrument, from which we expect to generate topographic maps, is an

oval covering roughly 20 kilometres. An object the size of the Grand Canyon will

show up as only a few points. In contrast, the radar data has a resolution of about

75 meters, depending on where in the orbit the readings were taken (the probe

was closest to the surface at the equator and furthest at the poles).

A standard imaging approach would be to use the altimetry data to

generate a 3D surface, and then ‘texture map’ the radar image onto that surface.

The images produced by this technique show a smoothly rounded surface covered

with sharp high frequency detail. They don’t look particularly real and impart a

strong ‘computer generated’ sense to the landscape.

The JPL team attempted to generate more detailed topography by using the

radar data to deform the topographic surface in a technique which they termed

‘radar-clinometry,’ from the photo-clinometry techniques which are well

established for terrestrial data. The standard photo method uses shadows cast by

the sun to attempt to deduce elevations, and this works pretty well. Taller objects

produce longer shadows, all else being equal. This method is problematic for

radar data for two reasons. First, radar reflectivity is a complex combination of the

incidence angle, surface roughness, and chemical composition of the reflecting

surface. Untangling these effects is the Holy Grail of radar analysis. The JPL

technique makes the simplifying assumption that incidence angle alone is

responsible for the strength of the radar reflections. Second, the radar itself is the


‘sun’ in this approach, so all shadows lie away from the probe, and the ability to

use a little trig to determine the elevations is lost.

Basically what this does is to increase the local topographic elevations

based on the point-by-point brightness of the radar signal. Dark areas are lower

than the local average, bright areas are higher. This method cannot be applied

globally, but must be hand tweaked to fit particular geology. In the case of a

crater, for example, which is typically surrounded by a rough blanket of ejecta that

was thrown up when the impact occurred, the technique would tend to stand the

crater up on its end. A series of parallel cracks in the ground would be interpreted

as stair steps, each bright edge increasing the local elevation. This can be quite

misleading.

We developed a very different technique for combining the radar and

altimetry data to produce high-resolution rendering. We reasoned that the radar

reflections for most of the planet were dominated not by the angle of incidence

but rather by the surface roughness. Areas that are very smooth, like the large

open plains that cover most of the planet, are radar dark. Rough areas, like lava

flows, craters, and the heavily folded ‘tessera’ of the highlands, are radar bright.

Drawing again from terrestrial data, we constructed a set of relationships between

radar brightness and a so-called ‘fractal dimension.’ Dark areas map to lower

fractal dimensions and generally smoother surfaces. Bright areas map to higher

dimensions and rougher and more angular topography. Earth based observations

have been able to categorise different types of lava flows and glacial features from

aerial photographs using similar methods.

We also took advantage of a parameter that is included with all the

altimetry data that is a measure of the spread of the altimetry pulse as it returns to

the probe. Rough areas cause the pulse to spread, while smooth surfaces return a

sharper pulse. This ‘rho’ parameter was used as a control on the fractal generator

driven by the radar brightness. Thus if the radar signal was strong, mapping to a

high dimension, but the rho parameter for that sample indicated a sharp pulse, the

dimension would be reduced.

We then built an interpolator which takes as its input the smooth

topographic surface generated from the altimetry, the radar reflections as a map of

local fractal dimensions, and the rho parameters as a control on those dimensions.

This gave us a technique which we could apply to the whole planet to produce


fine-scale elevation information, and this is the technique that produced the

images.
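The brightness-to-roughness mapping with the rho control can be sketched as follows. All numeric ranges here are illustrative assumptions; the team’s actual relationships were derived from terrestrial data:

```python
# Hedged sketch of the mapping Anderson outlines: radar brightness sets
# a local fractal dimension (smooth plains low, rough tessera high), and
# the altimetry pulse-spread parameter 'rho' reins the dimension back in
# when it indicates a sharp, smooth-surface return.
D_MIN, D_MAX = 2.0, 2.6  # assumed range for terrain fractal dimension

def fractal_dimension(brightness, rho_sharpness):
    """brightness, rho_sharpness in [0, 1]; 1 = bright / sharp pulse."""
    d = D_MIN + (D_MAX - D_MIN) * brightness
    # A sharp returned pulse suggests a smooth surface, so pull the
    # dimension back towards the smooth end in proportion to sharpness.
    return d - (d - D_MIN) * rho_sharpness

print(round(fractal_dimension(1.0, 0.0), 3))  # 2.6  bright, spread pulse: rough
print(round(fractal_dimension(1.0, 1.0), 3))  # 2.0  bright but sharp pulse: smoothed
print(round(fractal_dimension(0.0, 0.0), 3))  # 2.0  dark plains: smooth
```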

I knew we were on the right track when the first image emerged after

about 20 hours of CPU time. One of the planetary people came into the lab and,

looking over my shoulder declared, “Damn! That looks so GEOLOGIC!”

3. Vertical Exaggeration

This is perhaps the slipperiest area with which we had to deal. The Jet

Propulsion Laboratory received a good deal of criticism for the initial round of

Venus images they generated, and the 25:1 vertical exaggeration bore the brunt of

those complaints. In their defence, this is a very common technique for rendering

large-scale geologic structures. Their chief sin was probably that they did not print

the scale directly on the images. I have here in front of me a USGS global

topographic map of the Earth showing the ocean floors, tectonic boundaries, and

mid-ocean ridges. The legend in the lower right proudly proclaims, “Vertical

Scale 100:1”. If the cartographers had chosen 1:1 for this image I would be staring

at a flat, featureless grey surface, hardly useful or interesting to scientists or the

general public.

To date we have generated only one Venus image with 1:1 vertical scale.

This is the image of Maxwell Montes commissioned by Discover Magazine for

their December 1993 issue. All other images have some kind of vertical

exaggeration, usually about 20:1 or 25:1, depending on the scope of the scene.

The larger the area covered, the greater the vertical scale.
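Vertical exaggeration itself is a simple transformation: relief is stretched about a reference level. A minimal sketch, assuming the reference level is the mean elevation:

```python
# Minimal sketch of vertical exaggeration applied to an elevation grid:
# relief is stretched about the mean elevation by a factor such as 25,
# the value used for most of the Venus renderings.
def exaggerate(elevations, factor=25.0):
    """Scale relief about the mean elevation by the given factor."""
    mean = sum(elevations) / len(elevations)
    return [mean + (z - mean) * factor for z in elevations]

flat_looking = [100.0, 101.0, 99.0, 100.0]   # metres; mean = 100
print(exaggerate(flat_looking))  # [100.0, 125.0, 75.0, 100.0]
```

At 1:1 the metre-scale relief would be imperceptible at the scale of the rendering; the factor makes it visible, at the cost of literal accuracy.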

If we were on the surface of Venus (ideally in an air-conditioned pressure chamber) would Maat Mons, etc. really look like your pictures? If not, how would the view differ?

Hmmmm.... might not be able to see anything. What does the mid-Atlantic

ridge really ‘look’ like? Assuming we could stand somewhere and see the whole

thing? There’s no light at the bottom of the ocean. I’m not sure such a question is

that meaningful in this context. However, if we could see anything at all, (enough

light, not too much haze and fog, etc) it might look something like our renderings,

though probably more awe-inspiring. We have had to make a lot of assumptions

based on what we know about Earth.

The JPL team evidently ran out of money, and a second round of images

and animations never appeared. I believe that they suffered some embarrassment


over the original images and animations. By that time they had declared to the

world, “this is what Venus looks like,” and were evidently loath to go back on that

pronouncement. Those candy-orange pictures are everywhere.

[…] On a trip to JPL in 1993, I learned an important lesson, and I say this

with only part of my tongue-in-cheek. When asked, “Is that really what Venus

looks like?” I now reply “Yes.”

When asked, “Why are your pictures so different from the official NASA

images?” I now say, “Theirs are wrong.” :)

On visualisation and ‘pretty pictures’:

I believe that in some ways we were uniquely positioned. We had access

to the emerging scientific visualisation technology, and the Magellan data set. The

engineers had the data but not the rendering technique and experience. The

imaging and animation folks (Lucas Labs, et al) had the tools and talent to create

the imagery, but not the data. We had both, and for a short time, we were alone in

that.

[…] Our images have appeared in over 300 publications since 1993,

including popular and scientific journals, science texts, newspapers, and

encyclopaedias. I still get a check from the Science Photo Library every quarter.

Wish I’d had time and funding to make lots of pictures.

On the way the images invoke not just any dramatic landscape but the landscape of the Western United States in particular:

This is certainly no accident. I travel with my family every spring and

summer to New Mexico and Colorado, Arizona and Utah. I have flown over this

area repeatedly, and the huge vistas and breathtaking rocky desert landscapes are

burned deeply into my experience. It is not surprising that it would emerge when

asked to generate a 3D landscape. Further, I made a special effort to study

landscape paintings and photographs while working on these images, including

the work of Ansel Adams. I took rough drafts to my father-in-law, Bart Hodges,

himself a talented landscape artist and photographer, for criticism. He showed

how artists use light and shadow to draw the eye to certain areas of interest, and

critiqued our choices of colours, framing, and subject from a purely visual

perspective. This was really valuable.


We had in mind from the beginning that these images might be widely published, and we knew that we were neither trained nor skilled in the craft of visual representation. This, coupled with the critically harsh environment that is the warp and woof of science, encouraged us to get as much independent input as possible.

As an example, the early images, like those from the JPL program, lacked a sense of size or scale. The observation was made that the detail in the background of the images was just as crisp and sharp as the foreground, hence the entire area appeared small and close-up.

Human experience is that the far extents of very large objects become diffuse and obscured by atmospheric effects. This is one of the visual cues of size and depth. We consulted with several planetary scientists and a geochemist at the Sandia National Laboratory, Patrick Brady, in an effort to determine some realistic numbers for diffusion, index of refraction, and so forth.

This turned out to be pretty hard to do. Some of the numbers produced some very bizarre results from our simple-minded rendering software. A sophisticated atmospheric model, which is really what we needed, was clearly beyond the range of our landscape modelling capabilities. So we went to a simpler model of an evenly distributed atmospheric haze. This is probably not an unreasonable guess, and produces the desired sense of depth and scale without unduly distorting the topography. The artificially generated cloud cover serves the same purpose, by projecting a chaotic pattern that is larger in the foreground than at the horizon. I think the editors cropped that off in the National Geographic article.
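The ‘evenly distributed atmospheric haze’ described here corresponds to simple exponential (Beer-Lambert) attenuation, the kind of fog model most rendering software of the period could handle. As a rough illustration only — this is not the Magellan team’s actual code, and the colour and density values are invented — the blend of a surface colour toward a uniform haze colour with distance might be sketched as:

```python
import math

def apply_haze(surface_rgb, haze_rgb, distance, density):
    """Blend a surface colour toward a uniform haze colour with distance.

    An evenly distributed medium attenuates light exponentially
    (Beer-Lambert): transmittance t = exp(-density * distance), so
    nearby terrain keeps its own colour while the horizon fades into
    haze -- the depth cue the interview describes.
    """
    t = math.exp(-density * distance)
    return tuple(t * s + (1.0 - t) * h
                 for s, h in zip(surface_rgb, haze_rgb))

# Hypothetical values: a rocky surface colour seen at 1 km and at
# 200 km through a haze of density 0.02 per km.
rock = (0.80, 0.55, 0.35)
haze = (0.70, 0.75, 0.80)
near = apply_haze(rock, haze, 1.0, 0.02)    # barely changed
far = apply_haze(rock, haze, 200.0, 0.02)   # almost pure haze colour
```

Because the haze is applied per distance rather than per object, background detail is automatically softened relative to the foreground, which restores the sense of scale the early images lacked.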

How significant do you believe cultural ideas about landscape are when visualising alien vistas? Do conceptions of ‘dramatic landscape’ get in the way of the science or does our experience with making sense of landscapes and landscape art on Earth help us to make sense of Venus?

Oh gosh, I’m not sure. We’re sort of stuck with it, aren’t we? We have to guess at what we don’t know, and base those guesses on what we do know. As is said, data constrains creativity. I’m not so willing to separate out our understanding of ourselves and the universe into science and art, or philosophy and religion, or dry calculation and high drama. They seem to all flow together, and the boundaries are very fluid and ill-defined. Most of the interesting stuff appears to happen on the boundaries. Computerised scientific visualisation strikes me as one of those boundary phenomena, requiring a cross-breeding of raw science with computer imaging and software technology (itself a poorly understood art form), and with an artistic and aesthetic sensibility.


Bibliography

Adam, Barbara 2000, ‘The Media Timescapes of BSE News’, Allan, Stuart, Adam, Barbara, and Carter, Cynthia (Eds.) Environmental Risks and the Media (Chap. 7) pp. 117-129, (London and New York: Routledge) ISBN: 0-415-21447-5

Adcock, Craig 1984 (Fall), ‘Conventionalism in Henri Poincaré and Marcel Duchamp’, Art Journal, Vol. 44 (3) pp. 249-258

Adorno, Theodor Wiesengrund and Horkheimer, Max 1986, Dialectic of Enlightenment, 2nd Edn., (London: Verso) ISBN: 0860917134

Adrion, W. Richards 1988 12 February, ‘Information Technology and the Conduct of Science’, Science, Vol. 239 p. G67-G48

Alaska Synthetic Aperture Radar Facility 1999 ASF’s SAR FAQ, http://www.asf.alaska.edu/user_serv/sar_faq.html (March 1999)

Alberti, Leon Battista 1966 1436, On Painting, (New Haven and London: Yale University Press) ISBN: 0-300-00001-4

Allaby, Michael 1996, Facing the Future: The Case for Science, (London: Bloomsbury) ISBN: 0-7475-2485-8

Allan, Stuart 1995, ‘News, Truth and Postmodernity: Unravelling the Will to Facticity’, Adam, Barbara and Allan, Stuart (Eds.) Theorizing Culture: An Interdisciplinary Critique After Postmodernism (Chap. 8) pp. 129-144, (London: UCL Press) ISBN: 1-85728-329-5

Amann, K. and Knorr Cetina, Karin 1990, ‘The Fixation of (Visual) Evidence’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 85-122, (Cambridge, Mass.: MIT Press) ISBN: 0-262-62076-6

Amato, Ivan 1993, ‘Scanning Probe Microscopes Look Into New Territories’, Science, Vol. 262 (8 October) pp. 178-178

Anderson, Alun M 1993, The Fragmenting World of Science Communication, Ackrill, Kate (Ed.) pp. 97-112, (London: Ciba Foundation)

Anderson, Patricia 1991, The Printed Image and the Transformation of Popular Culture 1790-1860, (Oxford: Oxford University Press)

Anderson, Perry 1976, ‘The Antinomies of Antonio Gramsci’, New Left Review, (No. 100)

Anninos, Peter, Bajuk, Mark, Bernstein, David, Seidel, Edward, Smarr, Larry, and Hobill, David 1993 January, ‘Visualizing Black Hole Space-times’, IEEE Computer Graphics and Applications, pp. 12-14

(Anonymous) 1997 (13 December) ‘You Can’t Follow the Science Wars Without a Battle Map’ The Economist, pp. 109-111

Ashmore, Malcolm, Myers, Greg, and Potter, Jonathan 1994, ‘Discourse, Rhetoric, Reflexivity: Seven Days in the Library’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 15) pp. 321-343, (Thousand Oaks (CA), London, New Delhi: Sage Publications) ISBN: 0-8039-4021-1

Atiyah, Michael 1996 5 January, ‘Science in a Farewell to Arms’, The Times Higher Education Supplement, p. 13,

Atkins, P. W. 1989 (Winter), ‘The Rose, The Lion, and The Ultimate Oyster’, Modern Painters, Vol. 2 (4) pp. 50-55

Bai, Chunli 1995, Scanning Tunneling Microscopy and its Application, (Berlin and London: Springer)

Baigrie, Brian S. (Ed.) 1996, Picturing Knowledge: Historical and Philosophical Problems Concerning the Use of Art in Science, (Toronto, Buffalo, London: University of Toronto Press) ISBN: 0-8020-2985-1

Bakhtin, M. M. 1990, Art and Answerability: Early Philosophical Essays, Holquist, Michael and Liapunov, Vadim (Eds.) (Austin: University of Texas Press) ISBN: 0-292-70411-9


Barker, Jane and Downing, Hazel 1985, ‘Word Processing and the Transformation of Patriarchal Relations of Control in the Office’, MacKenzie, Donald and Wajcman, Judy (Eds.) The Social Shaping of Technology: How the Refrigerator got its Hum (Chap. 12) pp. 147-164, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Barker, Martin 1989, Comics: Ideology, Power and the Critics, (Manchester & New York: Manchester University Press)

Barker, Martin 1992, ‘Stuart Hall, Policing the Crisis’, Barker, Martin and Beezer, Anne (Eds.) Readings into Cultural Studies (London & New York: Routledge)

Barker, Martin 1995, ‘Drawing Attention to the Image: Computers and Comics’, Lister, Martin (Ed.) The Photographic Image in Digital Culture (Chap. 9) pp. 188-216, (London and New York: Routledge) ISBN: 0-415-12157-4

Barker, Martin and Beezer, Anne (Eds.) 1992, Reading into Cultural Studies, (London & New York: Routledge)

Barker, Martin and Petley, Julian (Eds.) 1997, Ill Effects: The Media/Violence Debate, (London and New York: Routledge) ISBN: 0-415-14672-0

Barlex, David and Carre, Clive 1985, Visual Communication in Science: Learning Through Sharing Images, 1st Edn, (Cambridge: Cambridge University Press) ISBN: 0 521 27860

Barnes, Barry 1974, Scientific Knowledge and Sociological Theory, (London and Boston: Routledge and Kegan Paul) ISBN: 0-7100-7962-1

Barnes, Barry 1982, T. S. Kuhn and Social Science, (New York: Columbia University Press) ISBN: 0-231-05437-8

Barnes, Barry, Bloor, David, and Henry, John 1996, Scientific Knowledge: A Sociological Analysis, (London: The Athlone Press) ISBN: 0-485-12078-X

Barthes, Roland 1972, Elements of Semiology, (London: Cape)

Barthes, Roland 1973, Mythologies, (London: Paladin) ISBN: 0-586-08164-X

Barthes, Roland 1977, Image, Music, Text, (London: Fontana Press) ISBN: 0-00-686135-0

Barthes, Roland 1984, Camera Lucida, (London: Fontana Paperbacks)

Barthes, Roland 1996, ‘Denotation and Connotation’, Cobley, Paul (Ed.) The Communication Theory Reader pp. 129-133, (London and New York: Routledge)

Bastide, Françoise 1990, ‘The Iconography of Scientific Texts: Principles of Analysis’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 187-229, (Cambridge, Mass.: MIT Press)

Batkin, Norton 1990, Photography and Philosophy, (New York and London: Garland) ISBN: 0-8240-3389-2

Batsleer, Janet, Burkitt, Rob, Carby, Hazel, Davies, Tony, Denning, Michael, Green, Michael, O’Rourke, Rebecca, O’Shaughnessey, Michael, Shannon, Roger, Shortus, Stephen, and Skovmand, Michael 1980, ‘Recent Developments in English Studies at the Centre [for Contemporary Cultural Studies]’, Hall, Stuart et al (Eds.) Culture, Media, Language: Working Papers in Cultural Studies, 1972-79 (Chap. 19) pp. 235-268, (London and New York: Unwin Hyman and Routledge) ISBN: 0-41507906

Beatty, J. Kelly 1993 August, ‘Working Magellan’s Magic’, Sky and Telescope, pp. 16-20

Beer, Gillian 1983, Darwin’s Plots: Evolutionary Narrative in Darwin, George Eliot and Nineteenth-Century Fiction, (London: )

Beer, Gillian 1986, ‘‘The Face of Nature’: Anthropomorphic Elements in the Language of The Origin of Species’, Jordanova, Ludmilla (Ed.) Languages of Nature: Critical Essays in Science and Literature pp. 207-243, (London: Free Association Books) ISBN: 0-946960-35-6

Beer, Gillian 1989, ‘Science and Literature’, Olby, R. C. et al (Eds.) Companion to the History of Modern Science (Chap. 51) pp. 783-798, (London: Routledge)

Benjamin, Walter 1982, ‘The Author as Producer’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 1) pp. 15-31, (London: Macmillan) ISBN: 0-333-27194-7


Berger, Arthur Asa 1989, Seeing Is Believing: An Introduction to Visual Communication, (Mountain View, California: Mayfield) ISBN: 0-87484-873-3

Berkeley, George 1975 1734, ‘A Treatise Concerning The Principles of Human Knowledge’, Ayers, M. R. (Ed.) Berkeley: Philosophical Works (Including the Works on Vision) pp. 61-128, (London and Melbourne: Dent) ISBN: 0-460-11483-2

Best, Steven 1992 ?, ‘Chaos and Entropy: Metaphors in Postmodern Science and Social Theory’, Science as culture, pp. 188-226

Binnig, Gerd and Rohrer, Heinrich 1985 August, ‘The Scanning Tunneling Microscope’, Scientific American, Vol. 253 (2) pp. 40-46

Binnig, Gerd, Rohrer, Heinrich, Gerber, Ch., and Weibel, E. 1982 15 January, ‘Tunneling Through a Controllable Vacuum Gap’, Applied Physics Letters, Vol. 40 (2) pp. 178-180

Blackmore, J. S. 1985, Solid State Physics, 2nd Edn., (Cambridge, New York: Cambridge University Press) ISBN: 0-521-31391-0

Blakemore, Colin 1973, ‘The Baffled Brain’, Gregory, R. L. and Gombrich, E. H. (Eds.) Illusion in Nature and Art (Chap. 1) pp. 9-47, (London: Duckworth) ISBN: 0-7156-0759-6

Blaker, Alfred A. 1977, Handbook for Scientific Photography, (W H Freeman and Co.) ISBN: 0-7167-0285-1

Block Jnr, Ed 1986, ‘T.H. Huxley’s Rhetoric and the Popularisation of Victorian Scientific Ideas: 1854-1874’, Victorian Studies, Vol. 29 (3) pp. 363-386

Bloor, David 1991, Knowledge and Social Imagery, 2nd Edn., (Chicago and London: The University of Chicago Press) ISBN: 0-226-06097-7

Bocock, Robert 1986, Hegemony, (Chichester, London, New York: Ellis Horwood, Tavistock) ISBN: 0-7458-0107-2

Bodmer, Walter 1986, The Public Understanding of Science, The Seventeenth J.D. Bernal Lecture, (London: Birkbeck College)

Bodmer, Walter and Wilkins, Janice 1992, ‘Research to Improve Public Understanding Programmes’, Public Understanding of Science, Vol. 1 pp. 7-10

Bond, Richard 1995 November, ‘What About the Counter-Culture, Pet? Reflections on the Science Communicators’ Forum in Newcastle’, Wavelength, (12)

Born, Max 1935, The Restless Universe, 1st Edn., (London: Blackie & Son Ltd)

Born, Max 1962, Physics and Politics, (Edinburgh and London: Oliver and Boyd)

Bostrom, Robert and Donahew, Lewis 1992, ‘The Case for Empiricism: Clarifying Fundamental Issues in Communication Theory’, Communication Monographs, Vol. 59 pp. 126-127

Bowie, Jack E. (Ed.) 1995, Data Visualization in Molecular Science: Tools for Insight and Innovation, (Reading MA, Wokingham England, etc.: Addison-Wesley) ISBN: 0-201-58157-4

Boyne, Roy 1995, ‘Fractured Subjectivity’, Jenks, Chris (Ed.) Visual Culture (Chap. 4) pp. 58-77, (New York and London: Routledge) ISBN: 0-415-10623-0

Bricmont, Jean and Sokal, Alan D. 1997 17 October, ‘What is All the Fuss About? How French Intellectuals have Responded to Accusations of Science-Abuse’, Times Literary Supplement, (4933) p. 17,

Briscoe, Mary Helen 1996, Preparing Scientific Illustrations: A Guide to Better Posters, Presentations and Publications, 2nd Edn., (New York, London, etc.: Springer) ISBN: 0-387-94581-4

Broad, William J 1982 15 Jan., ‘Science Magazines: The Second Wave Rolls In’, Science, Vol. 215 pp. 272-273

Brockman, John 1995, The Third Culture: Beyond the Scientific Revolution, (New York and London: Simon and Schuster) ISBN: 0684817047


Brodie, Ken W. et al (Eds.) 1992, Scientific Visualization, (Heidelberg, New York, Etc.: Springer Verlag) ISBN: 3-540-54565-4

Broks, Peter 1988, Science and the Popular Press: A Cultural Analysis of British Family Magazines 1890-1914, University of Lancaster, PhD

Broks, Peter 1993, ‘Science, Media and Culture: British Magazines, 1890-1914’, Public Understanding of Science, Vol. 2 pp. 123-139

Broks, Peter 1996, Media Science Before the Great War, (London: Macmillan Press) ISBN: 0333656385

Bronner, Stephen Eric and Kellner, Douglas MacKay (Eds.) 1989, Critical Theory and Society: A Reader, (London and New York: Routledge) ISBN: 0-415-90041-7

Brush, Stephen G. 1995, ‘Scientists as Historians’, Osiris, Vol. 10 pp. 215-231

Bube, Richard H. 1981, Electrons in Solids: An Introductory Survey, (London and New York: Academic Press) ISBN: 0-12-138650-3

Bud, Robert 1995, ‘Science, Meaning and Myth in the Museum’, Public Understanding of Science, Vol. 4 pp. 1-16

Bull, Malcolm 1994 Sept/Oct, ‘Vision and Totality’, New Left Review, Vol. 207

Bullock, Mark A. and Grinspoon, David H. 1999 March, ‘Global Climate Change on Venus’, Scientific American, Vol. 280 (3) pp. 34-41

Burgess, Jeremy 1996 21 September, ‘The Fine Art of Funding’, New Scientist, p. 56

Burgin, Victor 1982, ‘Looking at Photographs’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 6) pp. 142-153, (London: Macmillan) ISBN: 0-333-27194-7

Burgin, Victor 1982, ‘Photographic Practice and Art Theory’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 3) pp. 39-83, (London: Macmillan) ISBN: 0-333-27194-7

Burnett, Christopher 1991 February, ‘To Mobilize the Mind’s Eye: Selling Scientific Visualization’, After Image

Bush, Vannevar 1996 [1945], ‘As We May Think’, Druckrey, Timothy (Ed.) Electronic Culture: Technology and Visual Representation pp. 29-45, (New York: Aperture) ISBN: 0-89381-678-7

Buxton, Bill and Fitzmaurice, George W. 1998 Nov., ‘HMDs, Caves and Chameleon: A Human-Centric Analysis of Interaction in Virtual Space’, ACM SIGGRAPH Computer Graphics, pp. 69-74, 104

Buxton, William A. S. 1994, ‘Human Skills and Interface Design’, MacDonald, Lindsay W. and Vince, John (Eds.) Interacting with Virtual Environments (Chap. 1) pp. 1-12, (Chichester: John Wiley & Sons) ISBN: 0-471-93941-2

Caglioti, Giuseppe 1992, The Dynamics of Ambiguity, (Berlin, Heidelberg and New York: Springer-Verlag) ISBN: 3-540-52020-1

Callon, Michael 1994, ‘Reinventing the Wheel’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 2) pp. 29-64, (Thousand Oaks (CA), London, New Delhi: Sage Publications) ISBN: 0-8039-4021-1

Callon, Michel 1986, ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay’, Law, John (Ed.) Power Action and Belief: A New Sociology of Knowledge? pp. 196-233, (London: Routledge and Kegan Paul)

Cameron, Iain and Edge, David 1979, Scientific Images and their Social Uses: An Introduction to the Concept of Scientism, (London: Butterworth & Co) ISBN: 0-408-71309-7

Cantor, Geoffrey 1985, ‘Light and Enlightenment: An Exploration of Mid-Eighteenth-Century Modes of Discourse’, Lindberg, David C and Cantor, Geoffrey (Eds.) The Discourse of Light from the Middle Ages to the Enlightenment pp. 67-106, (Los Angeles: )

Capra, Fritjof 1989, Uncommon Wisdom: Conversations with Remarkable People, (London: Flamingo) ISBN: 0-00-654341-3

Carson, C. 1995, ‘Who Wants a Postmodern Physics?’, Science in Context, Vol. 8 (4) pp. 635-655

Cassirer, Ernst 1955, The Philosophy of Symbolic Forms, (New Haven: )


Chandler, Daniel 1994, Transmission Model of Communication, (http://www.aber.ac.uk/media/Documents/short/trans.html)

Chapman, S. K. 1997, Working With a Scanning Electron Microscope, (Chislehurst: Lodgemark Press) ISBN: 0-850770-93-9

Charlesworth, Kate and Gribbin, John 1990, The Cartoon History of Time, 1st Edn. (London: Macdonald & Co (Publishers) Ltd) ISBN: 07474 06804

Christie, J. R. R. 1989, ‘The Development of the Historiography of Science’, Olby, R. C. et al (Eds.) Companion to the History of Modern Science (Chap. 1) pp. 5-22, (London: Routledge)

Clancy, Tom 1993, The Sum of All Fears, (London: Harper Collins) ISBN: 0-00-647116-1

Clarke, Bruce 1996 (Winter), ‘Allegory and Science’, Configurations, Vol. 4 (1) pp. 33-37

Clarke, Bruce 1996, ‘Allegories of Victorian Thermodynamics’, Configurations, Vol. 4 (1) pp. 67-90

Cobley, Paul (Ed.) 1996, The Communication Theory Reader, (London and New York: Routledge)

Cohan, Steven and Shires, Linda M. 1996, ‘Theorising Language’, Cobley, Paul (Ed.) The Communication Theory Reader pp. 115-125, (London and New York: Routledge)

Coley, Noel G and Hall, Vance M D (Eds.) 1980, Darwin to Einstein: Primary Sources on Science and Belief, (Harlow and New York: Longman and Open University Press) ISBN: 0-582-49159-2

Collier, James H. and Toomey, David M. (Eds.) 1997, Scientific and Technical Communication: Theory, Practice and Policy, (Thousand Oaks, London, New Delhi: Sage) ISBN: 0-7619-0321-6

Collins, Graham P. 1993 November, ‘STM Rounds Up Electron Waves at the QM Corral’, Physics Today, pp. 17-19

Collins, H. M. 1996, ‘In Praise of Futile Gestures. How Scientific is the Sociology of Scientific Knowledge?’, Social Studies of Science, Vol. 26 pp. 229-244

Collins, Harry M. 1995 May, ‘Cooperation and the Two Cultures: Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 306-309

Colonna, Jean-Francois 1994, ‘Scientific Display: A Means of Reconciling Artists and Scientists’, Pickover, Clifford A. and Tewksbury, Stuart K. (Eds.) Frontiers of Scientific Visualization (Chap. 7) pp. 181-212, (New York, Chichester: Wiley) ISBN: 0-471-30972-9

Cooter, Roger 1984, The Cultural Meaning of Popular Science: Phrenology and the Organisation of Consent in Nineteenth-Century Britain, (Cambridge: Cambridge University Press)

Cooter, Roger and Pumfrey, Stephen 1994, ‘Separate spheres and public places: reflections on the history of science popularisation and science in popular culture’, History of Science, Vol. 32 pp. 237-267

Cox, D. 1988, ‘Using the Supercomputer to Visualize Higher Dimensions: An Artist’s Contribution to Scientific Visualization’, Leonardo, Vol. 21 pp. 233-242

Cox, D. 1990 March, ‘The Art of Scientific Visualization’, Academic Computing, pp. 20-40

Crampton Smith, Gillian 1994, ‘The Art of Interaction’, MacDonald, Lindsay W. and Vince, John (Eds.) Interacting with Virtual Environments (Chap. 6) pp. 79-94, (Chichester: John Wiley & Sons) ISBN: 0-471-93941-2

Crary, Jonathan 1990, Techniques of the Observer: On Vision and Modernity in the Nineteenth-Century, (Cambridge, Mass. & London: MIT Press)

Crommie, M. F., Lutz, C. P., and Eigler, D. M. 1993 10 June, ‘Imaging Standing Waves in a Two-Dimensional Electron Gas’, Nature, Vol. 363 pp. 524-527

Crommie, M. F., Lutz, C. P., and Eigler, D. M. 1993, ‘Confinement of Electrons to Quantum Corrals on a Metal Surface’, Science, Vol. 262 (8 October) pp. 218-220

Crommie, M. F., Lutz, C. P., Eigler, D. M., and Heller, E. J. 1995, ‘Waves on a Metal Surface and Quantum Corrals’, Surface Review and Letters, Vol. 2 (1) pp. 127-137


Crommie, M. F., Lutz, C. P., Eigler, D. M., and Heller, E. J. 1996, ‘Quantum Interference in 2D Atomic-Scale Structures’, Surface Science, Vol. 361/362 pp. 864-869

Culture, Society and the Media 1982, Gurevitch, Michael et al (Eds.) (London and New York: Methuen) ISBN: 0-416-73510-X

Curran, James 1996, ‘Rethinking Mass Communications’, Curran, James, Morley, David, and Walkerdine, Valerie (Eds.) Cultural Studies and Communications (London, etc.: Arnold) ISBN: 0 340 61417 X

Curran, James, Gurevitch, Michael, and Woollacott, Janet 1982, ‘The Study of the Media: Theoretical Approaches’, Gurevitch, Michael et al (Eds.) Culture, Society and the Media (Chap. 1) pp. 11-29 (London and New York: Methuen) ISBN: 0-416-73510-X

Curran, James, Morley, David, and Walkerdine, Valerie (Eds.) 1996, Cultural Studies and Communications, (London, etc.: Arnold) ISBN: 0 340 61417 X

Curtis, Ron 1994, ‘Narrative Form and Normative Force: Baconian Story-Telling in Popular Science’, Social Studies of Science, Vol. 24 pp. 419-461

Cushing, James T. 1994, Quantum Mechanics: Historical Contingency and the Copenhagen Interpretation, (Chicago and London: University of Chicago Press) ISBN: 0-226-13204-8

Cushing, James T. and McMullin, Ernan (Eds.) 1989, Philosophical Consequences of Quantum Theory: Reflections on Bell’s Theorem, (Notre Dame, Indiana: University of Notre Dame Press) ISBN: 0-268-01579-1

d’Espagnat, Bernard 1979, ‘The Quantum Theory and Reality’, Scientific American, Vol. 241 (5) pp. 128-141

Darius, Jon 1984, Beyond Vision, (Oxford and New York: Oxford University Press) ISBN: 0-19-853245-8

Darrigol, Olivier 1992, From C-Numbers to Q-Numbers: The Classical Analogy in the History of Quantum Theory., (Vol. 10) (Berkeley/Los Angeles: University of California Press)

Daston, Lorraine and Galison, Peter 1992 (Fall), ‘The Image of Objectivity’, Representations, Vol. 40 pp. 81-128

Davies, Duncan, Bathurst, Diana, and Bathurst, Robin 1990, The Telling Image: The Changing Balance Between Pictures and Words in a Technological Age, (Oxford: Oxford University Press)

Davies, Paul 1984, God and the New Physics, (Harmondsworth, etc.: Pelican) ISBN: 0-14-022550-1

Davies, Paul 1993, The Mind of God: Science and the Search for Ultimate Meaning, (London, etc: Penguin) ISBN: 0-14-015815-4

Davies, Paul and Brown, Julian R. 1986, The Ghost in the Atom: A Discussion of the Mysteries of Quantum Physics, (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-31316-3

Davies, Paul and Gribbin, John 1992, The Matter Myth: Beyond Chaos and Complexity, (London, etc.: Penguin) ISBN: 0-14-013426-3

Dear, Peter 1985, ‘Toitus in Verba: Rhetoric and Authority in the Early Royal Society’, Isis, Vol. 76 pp. 145-161

Dear, Peter 1995 (Spring), ‘Cultural History of Science: An Overview with Reflections’, Science, Technology & Human Values, Vol. 20 (2) pp. 150-169

Deregowski, Jan B. 1973, ‘Illusion and Culture’, Gregory, R. L. and Gombrich, E. H. (Eds.) Illusion in Nature and Art (Chap. 4) pp. 161-191, (London: Duckworth) ISBN: 0-7156-0759-6

Dervin, Brenda et al (Eds.) 1989, Rethinking Communication, (Vol. 1) (Newbury Park, Calif. and London: Sage)

Devlin, Keith 1997 25 September, ‘What You See is What the Computer Doesn’t Get: Keith Devlin on the Uphill Struggle to Make Computers Understand Pictures’, The Guardian pp. 4-5


Dingle, Herbert 1980, ‘Science and the Unobservable’, Coley, Noel G. and Hall, Vance M. D. (Eds.) Darwin to Einstein: Primary Sources on Science and Belief (Chap. 4d) pp. 158-169, (Harlow, Essex: Longman & Open University Press) ISBN: 0-582-49159-2

Dixon, Bernard 1980, ‘Telling the People: Science in the Public Press Since the Second World War’, Meadows, Arthur Jack (Ed.) Development of science publishing in Europe (Elsevier)

Dixon, Bernard 1986, ‘Books and Films: Powerful Media for Science Popularization’, Impact of Science on Society (Unesco), Vol. 144 pp. 341-346 (Web of Science source: Vol. 36 (4) pp. 379-385 – Probably more reliable)

Dixon, Bernard 1989, The Science of Science: Changing the Way We Think, (London: Cassell) ISBN: 0 304 31786 1

Doorman, S J (Ed.) 1989, Images of Science: Scientific Practice and the Public, (Aldershot: )

Dornan, Christopher 1988, ‘The “Problem” of Science and the Media: a Few Seminal Texts in their Context, 1956-65’, Journal of Communication Inquiry, Vol. 12 (2)

Dornan, Christopher 1989, ‘Science and Scientism in the Media’, Science as Culture, Vol. 1 (7) pp. 101-121

Dornan, Christopher 1999, ‘Some Problems in Conceptualising the Issue of ‘Science in the Media’’, Scanlon, Eileen, Whitelegg, Elizabeth, and Yates, Simeon (Eds.) Communicating Science: Contexts and Channels (Reader 2) (Chap. 12) pp. 179-205, (London, New York: Routledge) ISBN: 0-415-19753-8

Driver, Rosalind, Leach, John, Millar, Robin, and Scott, Phil 1996, Young People’s Images of Science, (Buckingham and Philadelphia: Open University Press) ISBN: 0-335-19381-1

Druckrey, Timothy (Ed.) 1996, Electronic Culture: Technology and Visual Representation, (New York: Aperture) ISBN: 0-89381-678-7

Dubeck, Leroy W, Moshier, Suzanne E, and Boss, Judith E 1994, Fantastic Voyages: Learning Science Through Science Fiction Films, (New York: American Institute of Physics Press) ISBN: 1-56396-195-4

Dunbar, Robin 1995 Sunday 2 April, ‘Science Now (Letter)’, The Sunday Times p. 2

Dunbar, Robin 1995, The Trouble With Science, (London: Faber)

Dunn, Robert G 1979, ‘Science, Technology and Bureaucratic Domination: Television and the Ideology of Scientism’, Media, Culture and Society, Vol. 1 (4)

Durant, John R. 1992, ‘Editorial’, Public Understanding of Science, Vol. 1 pp. 1-5

Durant, John R. 1993, ‘What is Scientific Literacy?’, Durant, John R. and Gregory, Jane (Eds.) Science and Culture in Europe pp. 129-138, (London: Science Museum) ISBN: 0-901805-66-1

Durant, John R. 1996 Monday 1 April, ‘Once the Men in White Coats Held the Promise of a Better Future. Why Have we Lost our Trust in Them?’, The Independent p. 17

Durant, John R. 1997 Oct., ‘Editorial’, Public Understanding of Science, Vol. 6 (4)

Durant, John R. and Gregory, Jane (Eds.) 1993, Science and Culture in Europe, (London: Science Museum) ISBN: 0-901805-66-1

Durant, John R. and Thomas, Geoffrey P. 1987, ‘Why Should we Promote the Public Understanding of Science?’, Shortland, M. (Ed.) Scientific Literacy Papers (Oxford: Rawley House)

Durant, John R. and van den Brul, Caroline 1992 20 Nov., ‘An Atom of Comprehension’, New Statesman and Society, pp. 22-23

Durant, John R., Evans, Geoffrey A., and Thomas, Geoffrey P. 1989, ‘The Public Understanding of Science’, Nature, Vol. 340 (July 6) pp. 11-14

Durant, John R., Evans, Geoffrey, and Thomas, Geoffrey 1992, ‘Public Understanding of Science in Britain: The Role of Medicine in the Popular Representation of Science’, Public Understanding of Science, Vol. 1 pp. 161-182


Dyson, Freeman J. 1989, Preface to Oppenheimer, J. Robert Atom and Void: Essays on Science and Community (Princeton, New Jersey: Princeton University Press) ISBN: 0-691-08547-1

Eagleton, Terry (Ed.) 1994, Ideology, (London & New York: Longman)

Earnshaw, Rae A. 1997 March/April, ‘3D and Multimedia on the Information Superhighway’, IEEE Computer Graphics and Applications, pp. 30-31

Eastaway, Rob and Wyndham, Jeremy 1999, Why do Buses Come in Threes? The Hidden Mathematics of Everyday Life, (London: Robson Books) ISBN: 1-86105-247-2

Eco, Umberto 1982, ‘Critique of the Image’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 2) pp. 32-38, (London: Macmillan) ISBN: 0-333-27194-7

Edge, David 1994, ‘Reinventing the Wheel’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 1) pp. 3-25, (London: Sage Publications) ISBN: 0-8039-4021-1

Edgerton Jr, Samuel Y. 1984 (Fall), ‘Galileo, Florentine “Disegno,” and the “Strange Spottednesse” of the Moon’, Art Journal, Vol. 44 (3) pp. 225-232

Eigler, D. M. and Schweizer, E. K. 1990 5 April, ‘Positioning Single Atoms With a Scanning Tunnelling Microscope’, Nature, Vol. 344 pp. 524-526

Einstein, Albert 1920, Relativity: The Special and General Theory – A Popular Exposition, 2nd Edn., (London: Methuen & Co Ltd)

Einstein, Albert 1952, ‘On the Electrodynamics of Moving Bodies’, Sommerfield, A, Perrett, W, and Jeffery, G B (Eds.) The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity pp. 35-65, (London: Dover)

Einstein, Albert 1980, ‘Comments on a Collection of Essays, Albert Einstein: Philosopher-Scientist’, Coley, Noel G. and Hall, Vance M. D. (Eds.) Darwin to Einstein: Primary sources on science and belief (Chap. 4f) pp. 183-200, ( Harlow, Essex: Longman & Open University Press) ISBN: 0-582-49159-2

Einstein, Albert 1980, ‘On the Method of Theoretical Physics’, Coley, Noel G. and Hall, Vance M. D. (Eds.) Darwin to Einstein: Primary Sources on Science and Belief (Chap. 4b) pp. 143-148, (Harlow, Essex: Longman & Open University Press) ISBN: 0-582-49159-2

Encarnaçao, José, Foley, Jim , Bryson, Steve, Feiner, Steven K., and Gershon, Nahum 1994 March, ‘Research Issues in Perception and User Interfaces’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 67-69

Engelhardt, H. Tristram Jr and Caplan, Arthur L. (Eds.) 1987, Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology , (Cambridge, New York and Melbourne: Cambridge University Press) ISBN: 0-521-25565-1

Environmental Risks and the Media 2000, Allan, Stuart, Adam, Barbara, and Carter, Cynthia (Eds.) (London and New York: Routledge) ISBN: 0-415-21447-5

Epstein, Lewis Carroll 1986, Thinking Physics: Practical Lessons in Critical Thinking, 2nd Edn., (San Francisco: Insight Press) ISBN: 0-935218-06-8

Eriksson, M. A., Beck, R. G., Topinka, M. A., Katine, J. A., Westervelt, R. M., Chapman, K. L., and Gossard, A. C. 1996, ‘Effect of a Charged Scanned Probe Microscope Tip on a Subsurface Electron Gas’, Superlattices and Microstructures, Vol. 20 (4) pp. 435-440

Evans, Geoffrey and Durant, John R. 1995, ‘The Relationship Between Knowledge and Attitudes in the Public Understanding of Science in Britain.’, Public Understanding of Science, Vol. 4 pp. 57-74

Evans, Harold 1997, Pictures on a Page, (London: Pimlico) ISBN: 0712673881

Evans, Peter 1993, Radio Science: Form and Function, Ackrill, Kate (Ed.) pp. 123-140, (London: Ciba Foundation)

Ewing, William A. 1996, Inside Information: Imaging the Human Body, (London: Thames and Hudson) ISBN: 0-500-27881-4


Felt, Ulrike and Nowotny, Helga 1993, ‘Science Meets the Public: A New Look at an Old Problem’, Public Understanding of Science, Vol. 2 pp. 285-426

Feyerabend, Paul 1981, ‘How To Defend Society Against Science’, Hacking, Ian (Ed.) Scientific Revolutions (Chap. 8) pp. 156-167, (Oxford and New York: Oxford University Press) ISBN: 0-19-875051-X

Feyerabend, Paul 1988, Against Method, (London and New York: Verso) ISBN: 0-86091-934-X

Feynman, Richard P 1990, QED: The Strange Theory of Light and Matter, (London: Penguin) ISBN: 0140125051

Feynman, Richard P 1992, The Character of Physical Law, (London: Penguin) ISBN: 0-14-017505-9

Figlio, Karl M 1976, ‘The Metaphor of Organisation: An Historiographical Perspective on the Biomedical Sciences of the Early Nineteenth-Century’, History of Science, Vol. 14 pp. 17-53

Fine, Arthur 1996, The Shaky Game: Einstein, Realism and the Quantum Theory, 2nd Edn., (Chicago and London: University of Chicago Press) ISBN: 0-226-24949-2

Fish, Stanley 1980, ‘Is There a Text in this Class?’, Is There a Text in this Class? The Authority of Interpretive Communities (Cambridge, Massachusetts and London: Harvard University Press) ISBN: 0-674-46726-4

Fisher, Len 1992 (Summer), ‘A Scientist Looks at Philosophy: A Discussion of the Different Roles of Models in Scientific and Philosophical Thinking’, Cogito, pp. 96-99

Fiske, John 1990, Introduction to Communication Studies, 2nd Edn., (London & New York: Routledge)

Fodor, Jerry 1998 29 October, ‘Look!’, London Review of Books, pp. 3-4

Ford, Brian J 1992, Images of Science: A History of Scientific Illustration, (London: The British Library) ISBN: 0-7123-0267-0

Foucault, Michel 1970, The Order of Things: An Archaeology of the Human Sciences, (London: )

Foucault, Michel 1976, The Archaeology of Knowledge and the Discourse on Language, (New York: )

Fournier, Alain and Buchanan, John 1995, ‘An Overview of Computer Graphics for Visualization’, Gallagher, Richard S (Ed.) Computer Visualization: Graphics Techniques for Scientific and Engineering Analysis (Chap. 2) pp. 17-60, (London and Tokyo: CRC Press) ISBN: 0-8493-9050-8

Fox, Cecil H. 1996 25 January, ‘Letter to the Editor’, Nature, Vol. 379 p. 292

Fox, D and Lawrence, Christopher 1988, Photographing Medicine: Images and Power in Britain and America since 1840, (New York: Greenwood)

Frank, Philipp 1980, ‘Einstein’s Philosophy of Science’, Coley, Noel G. and Hall, Vance M. D. (Eds.) Darwin to Einstein: Primary Sources on Science and Belief (Chap. 4c) pp. 148-158, (Harlow, Essex: Longman & Open University Press) ISBN: 0-582-49159-2

Franklin, Allan 1994, ‘Review of Oliver Darrigol: From c-Numbers to q-Numbers: The Classical Analogy in the History of Quantum Theory’, Isis, Vol. 85 (3) pp. 546-547

Friedhoff, Richard Mark and Benzon, William 1991, Visualization: The Second Computer Revolution, (New York: W. H. Freeman) ISBN: 0-7167-2231-3

Friedman, Alan J. 1987, ‘The Influence of Pseudoscience, Parascience and Science Fiction’, Evered, David and O’Connor, Maeve (Eds.) Communicating Science to the Public pp. 190-204, (Chichester: John Wiley and Sons) ISBN: 0-471-91511-4

Fuller, Steve 1993, Philosophy of Science and Its Discontents, 2nd Edn., (New York, London: The Guilford Press) ISBN: 0-89862-020-1

Fuller, Steve 1994, ‘Can Science Studies be Spoken in a Civil Tongue?’, Social Studies of Science, Vol. 24 pp. 143-168

Fuller, Steve 1995 May, ‘From Pox to Pax?: Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 309-314


Fyfe, Gordon and Law, John (Eds.) 1988, Picturing Power: Visual Depiction and Social Relations, (London: Routledge & Kegan Paul)

Fyfe, Gordon and Law, John 1988, ‘On the Invisibility of the Visual: Editors’ Introduction’, Fyfe, Gordon and Law, John (Eds.) Picturing Power: Visual Depiction and Social Relations pp. 1-14, (London and New York: Routledge) ISBN: 0-415-03144-3

Gadamer, Hans-Georg 1976, Philosophical Hermeneutics, (Berkeley: )

Gal, Ofer 1994, ‘Tropes and Topics in Scientific Discourse: Galileo’s De Motu’, Science in Context, Vol. 7 (1) pp. 25-52

Galison, Peter 1997, Image and Logic: A Material Culture of Microphysics, (Chicago and London: University of Chicago Press) ISBN: 0-226-27917-0

Galison, Peter 1998, ‘Judgment Against Objectivity’, Jones, Caroline A. and Galison, Peter (Eds.) Picturing Science, Producing Art pp. 327-359, (New York and London: Routledge) ISBN: 0-415-91912-6

Gallagher, Richard S (Ed.) 1995, Computer Visualization: Graphics Techniques for Scientific and Engineering Analysis, (London: CRC Press) ISBN: 0-8493-9050-8

Gallagher, Richard S 1995, ‘Future Trends in Scientific Visualization’, Gallagher, Richard S (Ed.) Computer Visualization: Graphics Techniques for Scientific and Engineering Analysis (Chap. 10) pp. 291-304, (London and Tokyo: CRC Press) ISBN: 0-8493-9050-8

Gardner, Carl and Young, Robert 1981, ‘Science on TV: A Critique’, Bennett, Tony et al (Eds.) Popular Television and Film (London: British Film Institute)

Garner, Jane 1998 January, ‘Weather Watcher (Profile of Piers Corbyn)’, Saga Magazine

Geertz, Clifford 1975, The Interpretation of Cultures, ( )

Gershon, Nahum 1996 March, ‘Moving Happily Through the World Wide Web’, IEEE Computer Graphics and Applications, Vol. 16 (2) pp. 72-75

Gershon, Nahum and Brown, Judith R. 1996 March, ‘Computer Graphics and Visualization in the Global Information Infrastructure’, IEEE Computer Graphics and Applications, Vol. 16 (2) pp. 60-61

Gershon, Nahum and Brown, Judith R. 1996 March, ‘The Role of Computer Graphics and Visualization in the GII’, IEEE Computer Graphics and Applications, Vol. 16 (2) pp. 61-63

Gershon, Nahum and Eick, Stephen G. 1998 July/August, ‘Scaling to New Heights’, IEEE Computer Graphics and Applications, Vol. 18 (4) pp. 16-17

Gieryn, Thomas F 1983 Dec., ‘Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists’, American Sociological Review, Vol. 48 pp. 781-795

Gieryn, Thomas F 1994, ‘Boundaries of Science’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 18) pp. 393-443, (Thousand Oaks (CA), London, New Delhi: Sage) ISBN: 0803940211

Gieryn, Thomas F 1996 (Winter), ‘Policing STS: A Boundary-Work Souvenir from the Smithsonian Exhibition on “Science in American Life”’, Science, Technology & Human Values, Vol. 21 (1) pp. 100-115

Gieryn, Thomas F 1999, Cultural Boundaries of Science: Credibility on the Line, (Chicago and London: University of Chicago Press) ISBN: 0-226-29262

Gilbert, G. Nigel and Mulkay, Michael 1984, Opening Pandora’s Box, (Cambridge: Cambridge University Press)

Gilbert, Scott F. 1995, ‘Postmodernism and Science’, Science in Context, Vol. 8 (4) pp. 559-561

Gillott, John and Kumar, Manjit 1995, Science and the Retreat from Reason, (London: Merlin Press) ISBN: 085036-433-7

Gimon, Charles A. ‘Heroes of Cyberspace: Claude Shannon’ for Info Nation http://www.skypoint.com/~gimonca/shannon.html (August 2000)


Gimzewski, James 1998 (June), ‘Molecules, Nanophysics and Nanoelectronics’, Physics World, pp. 29-33

Gisolf, Aart 1993, Science on Television, Ackrill, Kate (Ed.) pp. 113-122, (London: Ciba Foundation)

Glance, Vivienne 1995, ‘Supernova: A Quest for Hearts and Minds’, Physics World, Vol. 8 (3) pp. 19-20

Glasner, Peter 2000, ‘Reporting Risks: Problematising Public Participation and the Human Genome Project’, Allan, Stuart, Adam, Barbara, and Carter, Cynthia (Eds.) Environmental Risks and the Media (Chap. 8) pp. 130-141, (London and New York: Routledge) ISBN: 0-415-21447-5

Gleick, James 1988, Chaos: Making a New Science, (London, etc.: Penguin)

Göbel, Martin, Müller, Heinrich, and Urban, Bodo (Eds.) 1995, Visualization in Scientific Computing, (Vienna, New York, etc.: Springer-Verlag) ISBN: 3-211-82633-5

Goldberg, Vicki 1991, The Power of Photography: How Photographs Changed our Lives, (New York, London, Paris: Abbeville Press) ISBN: 1-55859-039-0

Goldsmith, Maurice 1986, The Science Critic: A Critical Analysis of the Popular Presentation of Science, (London and New York: Routledge and Kegan Paul) ISBN: 0-7102-0467-1

Golinski, J. V. 1989, ‘Language, Discourse and Science’, Olby, R. C. et al (Eds.) Companion to the History of Modern Science (Chap. 9) pp. 110-123, (London: Routledge)

Golinski, J. V. 1992, Science as Public Culture: Chemistry and Enlightenment in Britain, 1760-1820, (Cambridge)

Gombrich, E. H. 1977, Art and Illusion: A Study in the Psychology of Seeing, 5th Edn., (London: Phaidon) ISBN: 0-7148-1756-2

Gooch, Richard 1995, Visualisation: From Data to Understanding

Goro, Fritz 1993, On the Nature of Things: The Scientific Photography of Fritz Goro, (New York: Aperture) ISBN: 0-89381-542-X

Gould, Stephen Jay 1993, Myths and How to Dissipate Them, Ackrill, Kate (Ed.) pp. 7-22, (London: Ciba Foundation)

Gower, Barry 1997, Scientific Method: An Historical and Philosophical Introduction, (London and New York: Routledge) ISBN: 0-415-12282-1

Gramsci, Antonio 1971, Selections from the Prison Notebooks, (London: Lawrence and Wishart)

Grave, Michael, Hewitt, W. Terry, and Le Lous, Yvon (Eds.) 1994, Visualization in Scientific Computing, (Berlin, London, etc.: Springer Verlag) ISBN: 3-540-56147-1

Greenberg, Henry 1997, ‘Medicine Took an Earlier Flight’, Gross, Paul R, Levitt, Norman, and Lewis, Martin W (Eds.) The Flight from Science and Reason pp. ix-xi, (New York: New York Academy of Sciences & Johns Hopkins University Press) ISBN: 0-8018-5676-0

Greenwade, L. Eric 1991 12 September, ‘Scientific Visualization: Practices and Promises’, Nature, Vol. 353 pp. 191-192

Gregory, Bruce 1988, Inventing Reality: Physics as Language, (John Wiley & Sons) ISBN: 0-471-52482-4

Gregory, Jane 1998, Fred Hoyle and the Popularisation of Cosmology, University of London, PhD Thesis

Gregory, Jane and Miller, Steve 1998, Science in Public: Communication, Culture and Credibility, (New York and London: Plenum Trade) ISBN: 0-306-45860-8

Gregory, R. L. and Gombrich, E. H. (Eds.) 1973, Illusion in Nature and Art, (London: Duckworth) ISBN: 0-7156-0759-6

Gregory, Richard L. 1977, Eye and Brain: The Psychology of Seeing, 3rd Edn., (London: Weidenfeld and Nicolson) ISBN: 297 77298 8

Gribbin, John 1984, In Search of Schrödinger’s Cat, (London: Black Swan)

Gribbin, John 1995, Schrödinger’s Kittens, (London: Weidenfeld & Nicolson) ISBN: 0-297-81519-9


Griffin, Em 1994, A First Look at Communication Theory, 2nd Edn., (London, New York, etc.: McGraw-Hill) ISBN: 0-07-022796-9

Gross, Alan G. 1990, The Rhetoric of Science, (London: Harvard University Press) ISBN: 0-674-76873-6

Gross, Alan G. 1994, ‘The Roles of Rhetoric in the Public Understanding of Science’, Public Understanding of Science, Vol. 3 pp. 3-23

Gross, Paul R 1996 (Winter), ‘Reply to Tom Gieryn’, Science, Technology & Human Values, Vol. 21 (1) pp. 116-120

Gross, Paul R 1997, ‘Introduction’, Gross, Paul R, Levitt, Norman, and Lewis, Martin W (Eds.) The Flight from Science and Reason pp. 1-7, (New York: New York Academy of Sciences & Johns Hopkins University Press) ISBN: 0-8018-5676-0

Gross, Paul R and Levitt, Norman 1994, Higher Superstition – The Academic Left and Its Quarrels with Science, 1st Edn., (Baltimore: Johns Hopkins University Press) ISBN: 0-8018-4766-4

Gross, Paul R, Levitt, Norman, and Lewis, Martin W (Eds.) 1997, The Flight from Science and Reason, (New York: New York Academy of Sciences & Johns Hopkins University Press) ISBN: 0-8018-5676-0

Groß, Markus 1994, Visual Computing: Systems and Applications, (Berlin, London: Springer-Verlag) ISBN: 3-540-57222-8

Grünewald, Helmut 1996 25 April, ‘Public Scepticism: Letter to the Editor’, Nature, Vol. 380 p. 664

Grünewald, Helmut 1996 25 January, ‘Letter to the Editor’, Nature, Vol. 379 p. 292

Guiraud, Pierre 1975, Semiology, (London, Henley and Boston: Routledge and Kegan Paul) ISBN: 0-7100-8011-5

Gusfield, Joseph 1976, ‘The Literary Rhetoric of Science: Comedy and Pathos in Drinking Driver Research’, American Sociological Review, Vol. 41 pp. 16-34

Habermas, Jurgen 1970, Toward a Rational Society: Student Protest, Science and Politics, (Boston: Beacon Press)

Hacking, Ian 1975, Why Does Language Matter to Philosophy?, (Cambridge: Cambridge University Press)

Hacking, Ian 1983, Representing and Intervening, (Cambridge, Eng.: Cambridge University Press) ISBN: 0 521 28246 2

Hacking, Ian 1985, ‘Do We See Through a Microscope?’, Churchland, P. M. and Hooker, C. A. (Eds.) Images of Science ()

Hakken, David 1995 May, ‘The Cultural Reconstruction of Science: A Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 317-320

Halestrap, Andrew P. 1996 25 January, ‘Letter to the Editor’, Nature, Vol. 379 p. 292

Hall, Nina (Ed.) 1991, The New Scientist Guide to Chaos, (London, etc.: Penguin) ISBN: 0-14-014571-0

Hall, Stuart 1977, ‘The Hinterland of Science: Ideology and the “Sociology of Knowledge”’, in Centre for Contemporary Cultural Studies (Ed.) On Ideology (London: Hutchinson) ISBN: 0-09-134151-5

Hall, Stuart 1980, ‘Encoding/Decoding’, Hall, Stuart et al (Eds.) Culture, Media, Language: Working Papers in Cultural Studies, 1972-79 (Chap. 10) pp. 128-138, (London: Unwin Hyman & Routledge)

Hall, Stuart 1989, ‘Ideology and Communication Theory’, Dervin, Brenda et al (Eds.) Rethinking Communication (Newbury Park, Calif.: Sage)

Hall, Stuart et al (Eds.) 1980, Culture, Media, Language: Working Papers in Cultural Studies 1972-79, (London: Unwin Hyman & Routledge)


Hansen, Anders 1993, Reporting Science: Problematic Assumptions in the Debate About Media and Public Understanding of Science, Ackrill, Kate (Ed.) pp. 175-188, (London: Ciba Foundation)

Harris, Randy A 1991, ‘Rhetoric of Science’, College English, Vol. 53 pp. 282-307

Hasegawa, Y. and Avouris, Ph. 1993 16 August, ‘Direct Observation of Standing Wave Formation at Surface Steps Using Scanning Tunneling Spectroscopy’, Physical Review Letters, Vol. 71 (7) pp. 1071-1074

Hawkes, Terrence 1988, Structuralism and Semiotics, (London and New York: Routledge) ISBN: 0415025257

Hawking, Stephen 1993, Black Holes and Baby Universes (and Other Essays), (London, etc.: Bantam Press) ISBN: 0593 034007

Hawking, Stephen W. 1988, A Brief History of Time: From The Big Bang to Black Holes, (London, etc.: Bantam Press) ISBN: 0-593-01518-5

Hays, Nancy 1993 November, ‘Art and Science Converge’, IEEE Computer Graphics and Applications, pp. 9-12

Head III, James W. and Solomon, Sean C. 1991 12 April, ‘Fundamental Issues in the Geology and Geophysics of Venus’, Science, Vol. 252 pp. 252-260

Heck, Marina Camargo 1980, ‘The Ideological Dimension of Media Messages’, Hall, Stuart et al (Eds.) Culture, Media, Language: Working Papers in Cultural Studies, 1972-79 (Chap. 9) pp. 122-127, (London: Unwin Hyman & Routledge)

Heimann, P. M. 1972, ‘The Unseen Universe: Physics and the Philosophy of Nature in Victorian Britain’, British Journal for the History of Science, Vol. 6 pp. 73-79

Heller, E. J., Crommie, M. F., Lutz, C. P., and Eigler, D. M. 1994 9 June, ‘Scattering and Absorption of Surface Electron Waves in Quantum Corrals’, Nature, Vol. 369 pp. 464-466

Heller, Eric J. and Tomsovic, Steven 1993 July, ‘Postmodern Quantum Mechanics’, Physics Today, pp. 38-46

Hellman, Mary Fallenstein and James, W. R. 1995, The Multimedia Casebook, (New York, London, etc.: Van Nostrand Reinhold) ISBN: 0-442-01819-3

Hemmings, Ray and Tahta, Dick 1984, Images of Infinity, (Leapfrogs) ISBN: 0 905531 32 9

Henbest, Nigel and Marten, Michael 1996, The New Astronomy, 2nd Edn., (Cambridge: Cambridge University Press) ISBN: 0-521-40871-7

Henning, Michelle 1995, ‘Digital Encounters: Mythical Pasts and Electronic Presence’, Lister, Martin (Ed.) The Photographic Image in Digital Culture (Chap. 10) pp. 217-235, (London and New York: Routledge) ISBN: 0-415-12157-4

Herman, Ros 1996 April, ‘Book Review: Facing the Future: The Case for Science by Michael Allaby’, Public Understanding of Science, Vol. 5 (2) pp. 171-181

Hess, David J. 1997, Science Studies: An Advanced Introduction, (New York and London: New York University Press) ISBN: 0-8147-3564-9

Hesse, Mary 1953, ‘Models in Physics’, British Journal for the Philosophy of Science, Vol. 4 pp. 198-214

Hesse, Mary 1958, ‘Theories, Dictionaries and Observation’, British Journal for the Philosophy of Science, Vol. 9 pp. 12-28

Hesse, Mary 1966, ‘The Explanatory Function of Metaphor’, Models and Analogies in Science pp. 157-177, (University of Notre Dame Press)

Hesse, Mary 1968 Aug, ‘Review of Israel Scheffler: Science and Subjectivity (Indianapolis: The Bobbs-Merril Company, Inc., 1967)’, British Journal for the Philosophy of Science, Vol. 19 (2) pp. 176-177

Hesse, Mary 1970, ‘Is There an Independent Observation Language?’, Colodney, Robert G (Ed.) The Nature and Function of Scientific Theories (Pittsburgh: University of Pittsburgh Press)


Hesse, Mary 1988, ‘The Cognitive Claims of Metaphor’, Journal of Speculative Philosophy, Vol. 2 (1) pp. 1-16

Hesselink, Lambertus, Post, Frits H., and van Wijk, Jarke J. 1994 March, ‘Research Issues in Vector and Tensor Field Visualization’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 76-79

Hilgartner, Stephen 1990, ‘The Dominant View of Popularization: Conceptual Problems, Political Uses’, Social Studies of Science, Vol. 20 pp. 519-539

Hobbold, R. J. 1994, ‘Interactive Scientific Visualization: A Position Paper’, Grave, Michael, Hewitt, W. Terry, and Le Lous, Yvon (Eds.) Visualization in Scientific Computing (Chap. 8) pp. 75-83, (Berlin, London, etc.: Springer Verlag) ISBN: 3-540-56147-1

Hoggart, Richard 1992, The Uses of Literacy, (London: Penguin) ISBN: 0140170693

Holland, Patricia, Spence, Jo, and Watney, Simon (Eds.) 1986, Photography/Politics: Two, (London: Comedia Publishing Group and Photography Workshop) ISBN: 0-906890-89-6

Holland, Patricia, Spence, Jo, and Watney, Simon 1986, ‘Introduction: The Politics and Sexual Politics of Photography’, Holland, Patricia, Spence, Jo, and Watney, Simon (Eds.) Photography/Politics: Two (Chap. 1) pp. 1-7, (London: Comedia Publishing Group and Photography Workshop) ISBN: 0-906890-89-6

Holquist, Michael and Shulman, Robert 1996 3 October, ‘Letter to the Editor’, New York Review of Books, p. 54

Holton, Gerald 1993, ‘Can Science be at the Centre of Modern Culture?’, Public Understanding of Science, Vol. 2 pp. 291-305

Holton, Gerald 1993, Science and Anti-Science, (Cambridge Mass. and London: Harvard University Press) ISBN: 0-674-79298-X

Holub, Renate 1992, Antonio Gramsci: Beyond Marxism and Postmodernism, (London & New York: Routledge) ISBN: 0-415-07510-6

Hooke, Robert 1665, Micrographia: Or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses with Observations and Inquiries Thereupon, (London: Royal Society)

Horkheimer, Max 1989, ‘Notes on Science and the Crisis’, Bronner, Stephen Eric and Kellner, Douglas MacKay (Eds.) Critical Theory and Society: A Reader (Chap. 4) pp. 52-57, (London and New York: Routledge) ISBN: 0-415-90041-7

Hospers, John 1973, An Introduction to Philosophical Analysis, 2nd Edn., (London and Henley: Routledge and Kegan Paul) ISBN: 0-7100-7724-6

Hughes, Patrick 1993, The Paradox Box, (London: Redstone Press) ISBN: 1-870003-61-6

Hughes, Thomas P. 1985, ‘Edison and Electric Light’, MacKenzie, Donald and Wajcman, Judy (Eds.) The Social Shaping of Technology: How the Refrigerator Got its Hum (Chap. 2) pp. 39-52, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Hultberg, John 1997 March, ‘The Two Cultures Revisited’, Science Communication, Vol. 18 (3) pp. 194-215

Humphreys, Joe 1998 20 May, ‘Scientist Warns Against Placing Curbs on Research’, The Irish Times

Huxley, T. H. 1967, The Essence of T. H. Huxley, (London: Macmillan)

Inglesfield, J. E. 1984, ‘Surface Electronic Structure’, Prutton, M. (Ed.) Electronic Properties of Surfaces (Chap. 1) pp. 2-70, (Bristol: Adam Hilger) ISBN: 0-85274-773

Jack, Andrew 1998 9 January, ‘How the Physicist Confounded the New Philosophes’, New Statesman, pp. 22-23

Jacobi, Daniel and Schiele, Bernard 1989, ‘Scientific Imagery and Popularized Imagery: Differences and Similarities in the Photographic Portraits of Scientists’, Social Studies of Science, Vol. 19 pp. 731-753

Jaeger, Carlo C and Zehnder, Alexander J B 1993, Informing the Public about Environmental Issues, Ackrill, Kate (Ed.) pp. 159-174, (London: Ciba Foundation)


Jasanoff, Sheila 1995 May, ‘Cooperation for What?: A View from the Sociological/Cultural Study of Science Policy’, Social Studies of Science, Vol. 25 (2) pp. 314-317

Jasanoff, Sheila 1996, ‘Beyond Epistemology: Relativism and Engagement in the Politics of Science’, Social Studies of Science, Vol. 26 pp. 393-418

Jasanoff, Sheila 1997, ‘Civilisation and Madness: The Great BSE Scare of 1996’, Public Understanding of Science, Vol. 6 pp. 221-232

Jasanoff, Sheila et al (Eds.) 1994, Handbook of Science and Technology Studies, (Thousand Oaks (CA), London, New Delhi: Sage Publications) ISBN: 0-8039-4021-1

Jauch, J M 1989, Are Quanta Real? (Bloomington and Indianapolis: Indiana University Press) ISBN: 0-253-20545-X

Jeans, James 1934, The New Background of Science, 2nd Edn., (Cambridge: Cambridge University Press)

Jenks, Chris (Ed.) 1995, Visual Culture, (London and New York: Routledge) ISBN: 0-415-10623-0

Jern, Mikael and Earnshaw, Rae A. 1995, ‘Interactive Real-Time Visualization Systems using a Virtual Reality Paradigm’, Göbel, Martin, Müller, Heinrich, and Urban, Bodo (Eds.) Visualization in Scientific Computing pp. 174-189, (Vienna, New York, etc.: Springer-Verlag) ISBN: 3-211-82633-5

Johnson, Christopher B. 1996 (7 March), ‘Distrust of Science: Letter to the Editor’, Nature, Vol. 380 p. 18

Jones, Caroline A. and Galison, Peter (Eds.) 1998, Picturing Science, Producing Art, (New York and London: Routledge) ISBN: 0-415-91912-6

Jones, Gerald E. 1995, How to Lie With Charts, (San Francisco, etc.: Sybex) ISBN: 0-7821-1723-6

Jones, Steve 1996 (Friday 7 June), ‘Don’t Blame the Genes: Steve Jones argues that a combination of idle reporters and arrogant scientists has fuelled an unnecessary public fear of genetic manipulation’, The Guardian p. 19

Jurdant, Baudouin 1993, ‘Popularisation of Science as the Autobiography of Science’, Public Understanding of Science, Vol. 2 pp. 365-373

Kallick-Wakker, Ingrid 1994, ‘Science Icons: The Visualization of Scientific Truths’, Leonardo, Vol. 27 (4) pp. 309-315

Kamada, Tomihisa and Kawai, Satoru 1991 January, ‘A General Framework for Visualizing Abstract Objects and Relations’, ACM Transactions on Graphics, Vol. 10 (1) pp. 1-39

Kaufman, Arie E. 1995 March, ‘Editorial’, IEEE Transactions on Visualization and Computer Graphics, Vol. 1 (1) pp. 1-2

Kaufman, Arie E., Höhne, Karl Heinz, Krüger, Wolfgang, Rosenblum, Lawrence J., and Schröder, Peter 1994 March, ‘Research Issues in Volume Visualization’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 63-67

Kaufman, Arie E., Nielson, Gregory M., and Rosenblum, Lawrence J. 1993 July, ‘The Visualization Revolution’, IEEE Computer Graphics and Applications, Vol. 13 (4) pp. 16-17

Kaufman, Lloyd 1974, Sight and Mind: An Introduction to Visual Perception, (New York, London, Toronto: Oxford University Press) ISBN: 0-19-501763-3

Keast, Stephen 1996 25 January, ‘Letter to the Editor’, Nature, Vol. 379 p. 292

Keith, William 1995 May, ‘Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 321-324

Keith, William 1997, ‘Science and Communication: Beyond Form and Content’, Collier, James H. and Toomey, David M. (Eds.) Scientific and Technical Communication: Theory, Practice, and Policy (Chap. 9) pp. 299-323, (Thousand Oaks, London, New Delhi: Sage) ISBN: 0-7619-0321-6

Keller, Peter and Keller, Mary 1993, Visual Cues, (Los Alamitos CA & Piscataway NJ: IEEE Computer Society Press) ISBN: 0-8186-3102-3


Kember, Sarah 1995, ‘Medicine’s New Vision’, Lister, Martin (Ed.) The Photographic Image in Digital Culture (Chap. 4) pp. 95-114, (London and New York: Routledge) ISBN: 0-415-12157-4

Kemp, Martin 1990, The Science of Art: Optical Themes in Western Art from Brunelleschi to Seurat, (New Haven and London: Yale University Press) ISBN: 0-300-05241-3

Kemp, Martin 1998 25 June, ‘Hooke’s Housefly’, Nature, Vol. 393 p. 745

Kerr, Anne, Cunningham-Burley, Sarah, and Amos, Amanda 1997, ‘The New Genetics: Professionals’ Discursive Boundaries’, The Sociological Review, Vol. 45 (2) pp. 279-303

Kerr, Richard A. 1991 1 March, ‘Magellan Paints a Portrait of Venus’, Science, Vol. 251 pp. 1026-1027

Kittel, Charles 1986, Introduction to Solid State Physics, 6th Edn., (New York, Chichester: Wiley) ISBN: 0-471-87474

Knorr Cetina, Karin 1994, ‘Laboratory Studies – The Cultural Approach to the Study of Science’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 7) pp. 140-166, (Thousand Oaks (CA), London, New Delhi: Sage) ISBN: 0803940211

Koradi, Reto, Billeter, Martin, and Wüthrich, Kurt 1996, ‘MOLMOL: A Program for Display and Analysis of Macromolecular Structures’, Journal of Molecular Graphics, Vol. 14 pp. 51-55

Kress, Gunther 1985, ‘Ideological Structures in Discourse’, van Dijk, T A (Ed.) Handbook of Discourse Analysis (Chap. 3) pp. 27-42, (London: Academic Press) ISBN: 0 12 712004 1

Kress, Gunther and van Leeuwen, Theo 1996, Reading Images: The Grammar of Visual Design, (London and New York: Routledge)

Krieger, Martin H 1992, Doing Physics: How Physicists Take Hold of the World, (Bloomington and Indianapolis: Indiana University Press) ISBN: 0-253-20701-0

Krogh, Michael and Grimsrud, Anders 1998, ‘Interfacing Commercial Applications to Virtual Reality Environments’, ACM SIGGRAPH Computer Graphics, Vol. 32 (4)

Kubovy, Michael 1986, The Psychology of Perspective and Renaissance Art, (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-36849-9

Kuhn, Thomas S 1970, The Structure of Scientific Revolutions, 2nd Edn., (Chicago and London: University of Chicago Press) ISBN: 0-226-45804-0

Kuhn, Thomas S 1977, ‘Mathematical Versus Experimental Traditions in the Development of Modern Science’, The Essential Tension: Selected Studies in Scientific Tradition and Change pp. 311-365, (Chicago: )

Kunii, Tosiyasu L. and Shinagawa, Yoshihisa 1994 March, ‘Research Issues in Modeling Complex Object Shapes’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 80-83

Labinger, Jay A. 1995 May, ‘Science as Culture: A View from the Petri Dish’, Social Studies of Science, Vol. 25 (2) pp. 285-306

Labinger, Jay A. 1995, ‘Out of the Petri Dish Endlessly Rocking: Reply to My Responders’, Social Studies of Science, Vol. 25 (2) pp. 341-348

LaFollette, Marcel C 1990, Making Science Our Own: Public Images of Science 1910-1955, (Chicago and London: University of Chicago Press)

Laing, Mark 1986, ‘Can the Mass Media Help Increase Developing Countries Science Literacy?’, Impact of Science on Society (Unesco), Vol. 144 pp. 341-346

Lakoff, G. and Johnson, M. 1996, Metaphors We Live By, ()

Larrain, Jorge 1979, The Concept of Ideology, (London, etc.: Hutchinson & Co.)

Latham, Roy 1995, The Dictionary of Computer Graphics and Virtual Reality, 2nd Edn., (New York, London, etc.: Springer-Verlag) ISBN: 0-387-94405-2

Latour, Bruno 1990, ‘Drawing Things Together’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 19-68, (Cambridge Mass.: MIT Press) ISBN: 0-262-62076-6


Latour, Bruno 1993, We Have Never Been Modern, (Cambridge, Mass.: Harvard University Press)

Laudan, Larry 1983, ‘The Demise of the Demarcation Problem’, Laudan, R. (Ed.) Working Papers on the Demarcation of Science and Pseudo-Science pp. 7-36, (Blacksburg: Virginia Tech. Centre for the Study of Science in Society)

Laudan, Larry 1990, Science and Relativism: Some Key Controversies in the Philosophy of Science, (Chicago and London: University of Chicago Press) ISBN: 0-226-46949-2

Lawrence, Christopher 1979, ‘The Nervous System and Society in the Scottish Enlightenment’, Barnes, Barry and Shapin, Steven (Eds.) Natural order: historical studies of scientific culture pp. 19-40, (Beverly Hills: )

Lechte, John 1994, Fifty Key Contemporary Thinkers: From Structuralism to Postmodernity, (London and New York: Routledge) ISBN: 0-415-07408-8

Lehndorff, Vera and Trülzsch, Holger 1986, ‘Veruschka’ Trans-figurations, (London: Thames and Hudson) ISBN: 0-500-27424-X

Leigh Star, Susan and Griesemer, James R. 1999, ‘Institutional Ecology, “Translations,” and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39’, Biagioli, Mario (Ed.) The Science Studies Reader (Chap. 33) pp. 505-524, (London: Routledge)

Lenoir, Timothy 1994, ‘Was the Last Turn the Right Turn? The Semiotic Turn and A.J. Greimas’, Configurations, Vol. 2 pp. 119-136

Levine, George 1996, ‘Letter to the Editor’, New York Review of Books, p. 54

Levkowitz, Haim, Holub, Richard A., Meyer, Gary W., and Robertson, Philip K. 1992 July, ‘Color versus Black and White in Visualization’, IEEE Computer Graphics and Applications, Vol. 12 (4) pp. 20-22

Lewenstein, Bruce V 1987, ‘Public Understanding of Science’ in America, 1945-1965, Ph.D. Thesis

Lewenstein, Bruce V 1995, ‘From Fax to Facts: Communication in the Cold Fusion Saga’, Social Studies of Science, Vol. 25 pp. 403-436

Lewenstein, Bruce V. (Ed.) 1992, When Science Meets the Public, (Washington: American Association for the Advancement of Science) ISBN: 087168-440-3

Lewenstein, Bruce V. 1994, ‘Science and the Media’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 16) pp. 343-361, (Thousand Oaks (CA), London, New Delhi: Sage Publications) ISBN: 0-8039-4021-1

Lewin, Roger 1993, Complexity: Life at the Edge of Chaos, (London: Phoenix) ISBN: 1-85799-028-5

Leydesdorff, Loet 1993, ‘Why the Statement: “Plasma-Membrane Transport is Rate-Limiting for its Metabolism in Rat-Liver Parenchymal Cells” Cannot Meet the Public’, Public Understanding of Science, Vol. 2 pp. 285-426

Limoges, Camille 1993, ‘Expert Knowledge and Decision-Making in Controversy Contexts’, Public Understanding of Science, Vol. 2 pp. 285-426

Lister, Martin (Ed.) 1995, The Photographic Image in Digital Culture, (London and New York: Routledge) ISBN: 0-415-12157-4

Lister, Martin 1997, ‘Photography in the Age of Electronic Imaging’, Wells, Liz (Ed.) Photography: A Critical Introduction (Chap. 6) pp. 249-291, (London and New York: Routledge) ISBN: 0-415-12559-6

Lloyd, Geoffrey E. R. 1992, ‘Greek Antiquity: The Invention of Nature’, Torrance, John (Ed.) The Concept of Nature: The Herbert Spencer Lectures (Chap. 1) pp. 1-24, (Oxford: Oxford University Press) ISBN: 0-19-852276-2

Lock, Christine 1994 December, ‘Art, Sex and Science’, Physics World, pp. 15-16

Locke, John 1979 [1689], An Essay Concerning Human Understanding, (Oxford: Oxford University Press) ISBN: 0-19-824595-5

Locke, Simon 1994, ‘The Use of Scientific Discourse by Creation Scientists: Some Preliminary Findings’, Public Understanding of Science, Vol. 3 pp. 403-424


Locke, Simon 1997 Oct., ‘Book Review: Charles A Taylor, Defining Science: A Rhetoric of Demarcation, Madison, WI: University of Wisconsin Press, 1996’, Public Understanding of Science, Vol. 6 (4)

Ludwig, Jan Keith 1994, ‘Review of Steve Fuller: Philosophy of Science and Its Discontents’, Isis, Vol. 85 (3) p. 554

Lynch, Michael 1990, ‘The Externalised Retina: Selection and Mathematization in the Visual Documentation of Objects in the Life Sciences’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 153-187, (Cambridge, Mass.: MIT Press) ISBN: 0-262-62076-6

Lynch, Michael 1994, ‘Representation is Overrated: Some Critical Remarks About the Use of the Concept of Representation in Science Studies’, Configurations, Vol. 2 pp. 137-149

Lynch, Michael 1995, ‘Collaboration and Scandal: A Comment on Labinger’, Social Studies of Science, Vol. 25 (2) pp. 324-329

Lynch, Michael and Edgerton Jr, Samuel Y. 1988, ‘Aesthetics and Digital Imaging Processing: Representational Craft in Contemporary Astronomy’, Fyfe, Gordon and Law, John (Eds.) Picturing Power: Visual Depiction and Social Relations pp. 184-220, (London and New York: Routledge) ISBN: 0-415-03144-3

Lynch, Michael and Woolgar, Steve (Eds.) 1990, Representation in Scientific Practice, (Cambridge, Mass.: MIT Press) ISBN: 0-262-62076-6

Lynch, Michael and Woolgar, Steve 1990, ‘Introduction: Sociological Orientations to Representational Practice in Science’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 85-122, (Cambridge, Mass.: MIT Press) ISBN: 0-262-62076-6

Lyotard, Jean-François 1984, The Postmodern Condition: A Report on Knowledge, (Manchester: Manchester University Press) ISBN: 0-7190-1450-6

Maasen, Sabine and Weingart, Peter 1995 Sept., ‘Metaphors – Messengers of Meaning: A Contribution to an Evolutionary Sociology of Science’, Science Communication, Vol. 17 (No. 1) pp. 9-31

MacDonald, Lindsay W. 1994, ‘Interacting with Graphic Arts Images’, MacDonald, Lindsay W. and Vince, John (Eds.) Interacting with Virtual Environments (Chap. 10) pp. 149-165, (Chichester: John Wiley & Sons) ISBN: 0-471-93941-2

MacDonald, Lindsay W. 1999 July/August, ‘Using Color Effectively in Computer Graphics’, IEEE Computer Graphics and Applications, Vol. 19 (4) pp. 20-35

MacDonald, Lindsay W. and Vince, John (Eds.) 1994, Interacting with Virtual Environments, (Chichester: John Wiley & Sons) ISBN: 0-471-93941-2

Macdonald-Ross, Michel 1987, ‘The Role of Science Books for the Public’, Evered, David and O’Connor, Maeve (Eds.) Communicating Science to the Public: A Ciba Foundation Conference pp. 175-189, (Chichester: John Wiley and Sons) ISBN: 0-471-91511-4

MacEachren, Alan M. and Taylor, D. R. Fraser (Eds.) 1994, Visualisation in Modern Cartography, (Oxford, New York, Tokyo: Pergamon) ISBN: 0-08-042415-5

Macilwain, Colin 1998 30 April, ‘Physicists Seek Definition of Science’, Nature, Vol. 392 p. 849

MacKenzie, Donald and Wajcman, Judy (Eds.) 1985, The Social Shaping of Technology: How the Refrigerator Got its Hum, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Maddox, John 1995 30 November, ‘The Prevalent Distrust of Science’, Nature, Vol. 378 pp. 435-437

Marcuse, Herbert 1972, One Dimensional Man, (London: Abacus)

Marks, Harry M. 1995, ‘Other Voices: A Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 329-334

Matthews, Robert 1993 16 October, ‘When Seeing is Not Believing’, New Scientist, pp. 13-15

May, Robert M. 1997 7 February, ‘The Scientific Wealth of Nations’, Science, Vol. 275 pp. 793-796

Mazur, A 1977, ‘Public Confidence in Science’, Social Studies of Science, Vol. 7 pp. 123-125

McCormick, Bruce H., DeFanti, Thomas A., and Brown, Maxine D. 1987 Nov., ‘Visualization in Scientific Computing’, ACM SIGGRAPH Computer Graphics, Vol. 21 (6)

McEntee, Joe 1994 March, ‘Fine Tips Engineer Atomic and Molecular Opportunities’, Physics World, Vol. 7 (3) pp. 67-69

McGowan, Alan H 1993, The Significance of the Media Resource Service – Now Services, Ackrill, Kate (Ed.) pp. 51-63, (London: Ciba Foundation)

McLuhan, Marshall 1973, Understanding Media, (London: Abacus) ISBN: 0-349-12296-2

McQuail, Denis and Windahl, Sven 1981, Communication Models: For the Study of Mass Communications, (Harlow & New York: Longman) ISBN: 0-582-29572-6

McRae, Murdo William (Ed.) 1993, The Literature of Science: Perspectives on Popular Scientific Writing, (Athens and London: University of Georgia Press) ISBN: 0-8203-1506-0

Meadows, Jack 1986, ‘The Growth of Science Popularisation: A Historical Sketch’, Impact of Science on Society (Unesco), Vol. 144 pp. 341-346

Mellor, Felicity 1999, ‘Scientists’ Rhetoric in the Science Wars’, Public Understanding of Science, Vol. 8 pp. 51-56

Mellor, Felicity 2000, ‘Between “Fact” and “Fiction”: Demarcating Science From Non-Science in Popular Physics Books’, Paper prepared for Demarcation Socialised Conference, Cardiff University, August 2000

Mermin, N. David 1990, Boojums All The Way Through: Communicating Science in a Prosaic Age, (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-38880-5

Merton, Robert K 1973, The Sociology of Science: Theoretical and Empirical Investigations, (Chicago and London: University of Chicago Press) ISBN: 0226520919

Metzger, Gustav 1998, ‘Hubble Telescope: The Artist in the Eye of the Storm’, Wood, John (Ed.) The Virtual Embodied: Presence/Practice/Technology (London and New York: Routledge) ISBN: 0-415-16026-X

Midgley, Mary 1992, Science as Salvation: A Modern Myth and Its Meaning, (London: Routledge) ISBN: 0-415-10773-3

Miller, Arthur I. 1996, ‘Visualization Lost and Regained: The Genesis of the Quantum Theory in the Period 1913-1927’, Druckrey, Timothy (Ed.) Electronic Culture: Technology and Visual Representation pp. 86-107, (New York: Aperture) ISBN: 0-89381-678-7

Miller, Arthur I. 1996, Insights of Genius: Imagery and Creativity in Science and Art, ()

Mirzoeff, Nicholas (Ed.) 1998, The Visual Culture Reader, (London and New York: Routledge) ISBN: 0-415-14134-6

Mirzoeff, Nicholas 1999, An Introduction to Visual Culture, (London and New York: Routledge) ISBN: 0-415-15876-1

Mitchell, William J. 1992, The Reconfigured Eye: Visual Truth in the Post-Photographic Era, (Cambridge, Mass. & London: The MIT Press) ISBN: 0-262-13286-9

Modern Humanities Research Association 1991, The MHRA Style Book: Notes for Authors, Editors, and Writers of Theses, (London: Modern Humanities Research Association) ISBN: 0 947623 39 6

Morley, David and Chen, Kuan-Hsing (Eds.) 1996, Stuart Hall: Critical Dialogues in Cultural Studies, (London and New York: Routledge) ISBN: 0-415-08804-6

Morrison, Philip 1992, ‘The Need: Why Should We Understand Public Understanding of Science’, Lewenstein, Bruce V. (Ed.) When Science Meets the Public (Washington: American Association for the Advancement of Science) ISBN: 087168-440-3

Moser, Stephanie 1996, ‘Visual Representation in Archaeology: Depicting the Missing-Link in Human Origins’, Baigrie, Brian S. (Ed.) Picturing Knowledge: Historical and Philosophical Problems Concerning the Use of Art in Science pp. 184-215, (Toronto, Buffalo, London: University of Toronto Press) ISBN: 0-8020-2985-1

Moving Atoms: Welcome to the STM Gallery 1996 8 March, (http://www.almaden.ibm.com/vis/stm/gallery.html: IBM Almaden Research Centre Visualisation Lab)

Mulkay, Michael 1985, The Word and the World: Explorations in the Form of Sociological Analysis, (London, Boston and Sydney: George Allen and Unwin) ISBN: 0-04-301197-7

Mulkay, Michael, Potter, Jonathan, and Yearley, Steven 1983, ‘Why an Analysis of Scientific Discourse is Needed’, Knorr Cetina, Karin and Mulkay, Michael (Eds.) Science Observed: Perspectives on the Social Study of Science pp. 171-203, (London: Sage)

Mullin, Tom (Ed.) 1993, The Nature of Chaos, (Oxford, New York: Oxford University Press) ISBN: 0-19-853990-8

Murray, Alexander 1992, ‘Nature and Man in the Middle Ages’, Torrance, John (Ed.) The Concept of Nature: The Herbert Spencer Lectures (Chap. 2) pp. 25-62, (Oxford: Oxford University Press) ISBN: 0-19-852276-2

Myers, Greg 1985, ‘Nineteenth-Century Popularisers of Thermodynamics and the Rhetoric of Social Prophecy’, Victorian Studies, Vol. 29 pp. 35-66

Myers, Greg 1990, ‘The Double Helix as an Icon’, Science as Culture, Vol. 9 pp. 42-72

Myers, Greg 1990, Writing Biology: Texts in the Social Construction of Scientific Knowledge, (Madison, Wisconsin: University of Wisconsin Press) ISBN: 0-299-12234-4

Myers, Greg 1996, ‘Biology and the Queen’s English’, Public Understanding of Science, Vol. 5 pp. 59-64

Neidhardt, Friedhelm 1993, ‘The Public as a Communication System’, Public Understanding of Science, Vol. 2 pp. 285-426

Nelkin, Dorothy 1987, ‘Controversies and the Authority of Science’, Engelhardt, H. Tristram Jr and Caplan, Arthur L. (Eds.) Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology (Chap. 10) pp. 283-293, (Cambridge, New York and Melbourne: Cambridge University Press) ISBN: 0-521-25565-1

Nelkin, Dorothy 1987, ‘Selling Science: How the Press Covers Science and Technology’, Evered, David and O’Connor, Maeve (Eds.) Communicating Science to the Public: A Ciba Foundation Conference pp. 100-113, (Chichester: John Wiley and Sons) ISBN: 0-471-91511-4

Nelkin, Dorothy 1994, ‘Promotional Metaphors and their Popular Appeal’, Public Understanding of Science, Vol. 3 pp. 25-31

Nelkin, Dorothy and Lindee, M. Susan 1995, The DNA Mystique: The Gene as a Cultural Icon, (New York: W. H. Freeman and Company) ISBN: 0-7167-2709-9

Nelson, Alan 1994, ‘How Could Scientific Facts be Socially Constructed?’, Studies in the History and Philosophy of Science, Vol. 25 (4) pp. 535-547

Newcott, William 1993 February, ‘Venus Revealed’, National Geographic, Vol. 183 (2) pp. 36-59

Nielson, Gregory M. and Kaufman, Arie E. 1994 September, ‘Visualization Graduates’, IEEE Computer Graphics and Applications, Vol. 12 (4) pp. 17-18

Nielson, Gregory M., Brunet, Pere, Groß, Markus, Hagen, Hans, and Klimenko, S. V. 1994 March, ‘Research Issues in Data Modeling for Scientific Visualization’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 70-73

Nielson, Gregory M., Voegele, Keith, and Collins, Brian 1992 July, ‘Introduction to “An Annotated and Keyworded Bibliography for Scientific Visualization”’, IEEE Computer Graphics and Applications, Vol. 12 (3) pp. 23-24

Nieman, Adam 1996 February, ‘A Lot of Fuss About PUS’, Wavelength, (13)

Nieman, Adam 1999 February, ‘Media Scientists Bad for Science’, Wavelength, (22)

Norris, Christopher 1997 July, ‘Ontology According to Van Fraassen: Some Problems with Constructive Empiricism’, Metaphilosophy, Vol. 28 (3) pp. 196-218

Norris, Ray P. 1994, The Challenge of Visualising Astronomical Data, Crabtree, D. R., Hanisch, R. J., and Barnes, J. (Eds.) Vol. 61 (ASP)

Nowotny, Helga 1993, ‘Socially Distributed Knowledge: Five Spaces for Science to Meet the Public’, Public Understanding of Science, Vol. 2 pp. 307-319

Olby, R. C. et al (Eds.) 1989, Companion to the History of Modern Science, (London: Routledge)

Olson, Roberta J. M. 1984 (Fall), ‘... And They Saw Stars: Renaissance Representations of Comets and Pretelescopic Astronomy’, Art Journal, Vol. 44 (3) pp. 216-224

Olson, Steve and Stine, Deborah D. 1995, On Being a Scientist: Responsible Conduct in Research, 2nd Edn., (Washington: National Academy Press) ISBN: 0-309-05196-7

Oosterloo, Tom 1996 ?, ‘Visualisation of Radio Data’, Publ. Astron. Soc. Australia, Vol. 12 p. 215

Oppenheimer, J Robert 1989, Atom and Void: Essays on Science and Community, (Princeton, New Jersey: Princeton University Press) ISBN: 0-691-08547-1

Outram, Dorinda 1978, ‘The Language of Natural Power: The Éloges of George Cuvier and the Public Language of Nineteenth-Century Science’, History of Science, Vol. 16 pp. 153-178

Overton, William R. 1982 19th February, ‘Creationism in Schools: The Decision in McLean versus the Arkansas Board of Education.’, Science, Vol. 215 pp. 934-943

Palmer, Richard E 1969, Hermeneutics: Interpretation Theory in Schleiermacher, Dilthey, Heidegger and Gadamer, (Evanston: Northwestern University Press)

Pang, Alex 1999, Aesthetics and Early Astrophotography, (http://www.eb.com/editors/apang/Stars/Article/Introduction.html: )

Pang, Alex and Pagendarm, Hans-Georg 1998 July/August, ‘Visualization for Everyone’, IEEE Computer Graphics and Applications, Vol. 18 (4) pp. 47-48

Pang, Alex and Wittenbrink, Craig 1997 March/April, ‘Collaborative 3D Visualization with CSpray’, IEEE Computer Graphics and Applications, pp. 32-41

Pauschenwein, Jutta and Thaller, Bernd 1996 Nov/Dec, ‘Visualising Quantum-Mechanical Wavefunctions in Three Dimensions with AVS’, Computers in Physics, Vol. 10 (6) pp. 558-566

Pellegrini, Claudio 1996, ‘Letter to the Editor’, New York Review of Books, p. 55

Penrose, Roger 1992, ‘The Modern Physicist’s View of Nature’, Torrance, John (Ed.) The Concept of Nature: The Herbert Spencer Lectures (Chap. 5) pp. 117-166, (Oxford: Oxford University Press) ISBN: 0-19-852276-2

Perutz, Max 1991, Is Science Necessary? Essays on Science and Scientists, (Oxford, New York: Oxford University Press) ISBN: 0-19-286118-2

Petkova, Kristina and Boyadjieva, Pepka 1994, ‘The Image of the Scientist and its Function ‘, Public Understanding of Science, Vol. 3 pp. 215-224

Pettengill, Gordon H., Ford, Peter G., Johnson, William T. K., Raney, R. Keith, and Soderblom, Laurence A. 1991 12 April, ‘Magellan: Radar Performance and Data Products’, Science, Vol. 252 pp. 260-265

Phillips Mahoney, Diana 1996, ‘The Art and Science of Medical Visualization’, Computer Graphics World, (July)

Pickover, Clifford A. and Tewksbury, Stuart K. (Eds.) 1994, Frontiers of Scientific Visualization, (New York, Chichester: Wiley) ISBN: 0-471-30972-9

Pinch, Trevor J. 1995, ‘In and Out of the Petri Dish: Science and S&TS’, Social Studies of Science, Vol. 25 (2) pp. 334-337

Plato 1992, The Trial and Death of Socrates: Four Dialogues, (New York: Dover) ISBN: 0-486-27066-1

Plaut, Jeffrey J. 1993 August, ‘Venus In 3-D’, Sky and Telescope, pp. 32-37

Plotnitsky, Arkady 1997, ‘“But It Is Above All Not True”: Derrida, Relativity, and the “Science Wars”’, Postmodern Culture, Vol. 7 (2)

Pollock, John and Steven, David 1997 Sept., Now for the Science Bit – Concentrate! A Report into the Public Understanding of Science, (River Path Associates)

Popper, Frank 1993, Art of the Electronic Age, (London: Thames and Hudson) ISBN: 0-500-27918-7

Popper, Frank 1994, ‘Visualization, Cultural Mediation and Dual Creativity’, Leonardo

Popper, Karl R. 1983, A Pocket Popper, (London: Fontana Press) ISBN: 0-00-686147-4

Porter, Henry 1996 1 February, ‘Trivial Pursuit: How Many of Us Know More About Chaucer than Tarantino? Henry Porter Asks if our Knowledge of Western Culture is Dying’, The Guardian p. 2

Potmesil, Michael and Hoffert, Eric M. 1994, ‘Architecture and Applications of the Pixel Machine’, Pickover, Clifford A. and Tewksbury, Stuart K. (Eds.) Frontiers of Scientific Visualization (Chap. 8) pp. 213-244, ( New York, Chichester: Wiley) ISBN: 0-471-30972-9

Powell, Corey S. 1993 May, ‘Three Faces of Venus’, Scientific American, pp. 12-13

Powers, Jonathan 1985, Philosophy and the New Physics, (London & New York: Methuen & Co Ltd) ISBN: 0-416-73480-4

Poyser, David 1995 26 August, Does Science Matter? Why Care About the Public Image of Science

Price, Derrick and Wells, Liz 1997, ‘Thinking About Photography: Debates, Historically and Now’, Wells, Liz (Ed.) Photography: A Critical Introduction (Chap. 1) pp. 11-54, (London and New York: Routledge) ISBN: 0-415-12559-6

Punt, Michael 1995, ‘The Elephant, the Spaceship and the White Cockatoo: An Archaeology of Digital Photography’, Lister, Martin (Ed.) The Photographic Image in Digital Culture (Chap. 2) pp. 51-76, (London and New York: Routledge) ISBN: 0-415-12157-4

Purcell, Carroll 1994, White Heat: People and Technology, (London: BBC Books) ISBN: 0-563-36979-5

Quantum Mechanics 1974, Encyclopaedia Britannica, 5th Edn. p. 426, ()

Quantum Theory 1988, The Hutchinson Encyclopaedia, 8th Edn., p. 972, (Hutchinson)

Quine, Willard V. O. 1953, ‘Two Dogmas of Empiricism’, From a Logical Point of View (Cambridge, Mass.: Harvard University Press)

Rae, Alastair I M 1986, Quantum Mechanics, 2nd Edn., (Bristol: Adam Hilger) ISBN: 0-85274-498-6

Ramamurthy, Anandi 1997, ‘Constructions of Illusion: Photography and Commodity Culture’, Wells, Liz (Ed.) Photography: A Critical Introduction (Chap. 4) pp. 151-198, (London and New York: Routledge) ISBN: 0-415-12559-6

Rassam, Clive Cavendish 1993, The Second Culture: British Science in Crisis: The Scientists Speak Out, (London: Aurum Press)

Reed, B. Cameron 1994, Quantum Mechanics: A First Course, 2nd Edn., (Winnipeg, Canada: Wuerz) ISBN: 0-920063-64-0

Reed, Mark 1993, ‘Quantum Constructions’, Science, Vol. 262 (8 October) p. 195

Reingold, Nathan 1985, ‘Metro-Goldwyn-Mayer Meets the Atom Bomb’, Shinn, Terry and Whitley, Richard (Eds.) Expository Science: Forms and Functions of Popularisation (Dordrecht & Boston: )

Renshaw, H. 1997 15th July, ‘Could this pendant protect you from mobile phone radiation?’ Daily Mail p. 40

Resnik, David B. 2000, ‘A Pragmatic Approach to the Demarcation Problem’, Studies in the History and Philosophy of Science, Vol. 31 (2) pp. 249-267

Rhodes, Michael L. 1997 Jan.-Feb., ‘Computer Graphics and Medicine: A Complex Partnership’, IEEE Computer Graphics and Applications, pp. 22-28

Richards, Stewart 1983, Philosophy and Sociology of Science: An Introduction, (Oxford: Basil Blackwell Ltd) ISBN: 0-631-13414-X

Ricketts, M. William 1992, ‘Winsom Solid Modeller and its Application to Data Visualization’, Vvedensky, Dimitri D. and Holloway, Stephen (Eds.) Graphics and Animation in Surface Science (Chap. 4) pp. 48-63, (Bristol, Philadelphia & New York: Adam Hilger (IOP)) ISBN: 0-7503-0118-X

Ricoeur, Paul 1978, The Rule of Metaphor: Multi-Disciplinary Studies of the Creation of Meaning in Language, (London & Henley: Routledge & Kegan Paul) ISBN: 0-7100-8964-3

Ricoeur, Paul 1981, Hermeneutics and the Human Sciences, (Cambridge: Cambridge University Press)

Ring, Katy 1988, The Popularisation of Elementary Science Through Popular Science Books c. 1870 – c. 1939, University of Kent at Canterbury, Ph.D.

Robertson, Philip K. 1991 May, ‘A Methodology For Choosing Data Representations’, IEEE Computer Graphics and Applications, pp. 56-67

Robertson, Philip K. and Silver, Deborah 1995 July, ‘Visualization Case Studies: Completing the Loop’, IEEE Computer Graphics and Applications, Vol. 15 (4) pp. 18-19

Robertson, Philip K., Earnshaw, Rae A., Thalmann, Daniel, Grave, Michel, Gallop, Julian, and De Jong, Eric M. 1994 March, ‘Research Issues in the Foundations of Visualization’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 73-76

Robin, Harry 1993, The Scientific Image: From Cave to Computer, (New York and Oxford: W H Freeman and Company) ISBN: 0-7167-2504-5

Robins, Kevin 1995, ‘Will Image Still Move Us?’, Lister, Martin (Ed.) The Photographic Image in Digital Culture (Chap. 1 ) pp. 29-50, (London and New York: Routledge) ISBN: 0-415-12157-4

Rodgers, Michael 1992, ‘The Hawking Phenomenon’, Public Understanding of Science, Vol. 1 pp. 231-234

Rogowitz, Bernice E. and Treinish, Lloyd A. 1996 May/June, ‘How Not to Lie with Visualization’, Computers in Physics, Vol. 10 (3) pp. 268-273

Rorty, Richard 1967, ‘Introduction: The Linguistic Turn’, Rorty, Richard (Ed.) The Linguistic Turn: Recent Essays in Philosophical Method pp. 1-39, (Chicago: )

Rorty, Richard 1979, Philosophy and the Mirror of Nature, (Princeton: )

Rorty, Richard and Hesse, Mary 1987 ?, ‘Unfamiliar Noises’, Aristotelian Society Supplement, Vol. 61 pp. 283-311

Rosen, Charles and Zerner, Henri 1984, Romanticism and Realism: The Mythology of Nineteenth-Century Art, (London and Boston: Faber and Faber)

Rosenblum, L. et al (Eds.) 1994, Scientific Visualization: Advances and Challenges, (London and San Diego: Academic Press) ISBN: 0-12-227742-2

Rosenblum, Lawrence J. 1994 March, ‘Research Issues in Scientific Visualization’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 61-63

Rosenblum, Lawrence J. and Brown, Bruce E. 1992 July, ‘Guest Editor’s Introduction: Visualization’, IEEE Computer Graphics and Applications, Vol. 12 (4) pp. 18-22

Rosenblum, Lawrence J. and Nielson, Gregory M. 1991 May, ‘Guest Editors’ Introduction: Visualization Comes of Age’, IEEE Computer Graphics and Applications, pp. 15-17

Rosenblum, Lawrence J., Burdea, Grigore, and Tachi, Susumu 1998 Nov./Dec., ‘VR Reborn’, IEEE Computer Graphics and Applications, pp. 21-23

Ross, Andrew 1991, Strange Weather: Culture, Science and Technology in the Age of Limits, (London and New York: Verso) ISBN: 0-86091-567-0

Rossignac, Jarek R. and Novak, Miroslav 1994 March, ‘Research Issues in Model-based Visualization of Complex Data Sets’, IEEE Computer Graphics and Applications, Vol. 14 (2) pp. 83-85

Rothman, Harry 1994, ‘Between Science and Industry: The Human Genome Project and Instrumentalities.’, The Genetic Engineer and Biotechnologist, Vol. 14 (2) pp. 81-91

Rowland, Willard D. 1997, ‘Television Violence Redux: The Continuing Mythology of Effects’, Barker, Martin and Petley, Julian (Eds.) Ill Effects: The Media/Violence Debate (Chap. 7) pp. 102-124, (London and New York: Routledge) ISBN: 0-415-14673-9

Royal Society Council 1985, The Public Understanding of Science, (London: Royal Society)

Rudwick, M. J. S. 1985, The Great Devonian Controversy: The Shaping of Scientific Knowledge among Gentlemanly Specialists, (Chicago and London: University of Chicago Press)

Russ, John C. 1990, Computer-Assisted Microscopy: The Measurement and Analysis of Images, (London & New York: Plenum Press) ISBN: 0-306-43410-5

Russell, Bertrand 1952, The Impact of Science on Society, (London: George Allen and Unwin)

Ryan, Marie-Laure 1994, ‘Immersion versus Interactivity: Virtual Reality and Literary Theory’, Postmodern Culture, Vol. 5 (1)

Ryder, Neil 1982, Science, Television and the Adolescent: A Case Study and Theoretical Model, (London: IBA)

Ryder, Neil 1992, ‘Selling Science – What it Takes to be a Successful Media Scientist’, The Times Higher Education Supplement, (November 13) p. 19

Sagan, Carl 1997, The Demon-Haunted World: Science as a Candle in the Dark, (London: Headline) ISBN: 0-7472-5156-8

Sarup, Madan 1993, An Introductory Guide to Post-Structuralism and Postmodernism, 2nd Edn., (Hemel Hempstead: Harvester Wheatsheaf)

Sardar, Ziauddin 2000, Thomas Kuhn and the Science Wars, (Cambridge: Icon Books) ISBN: 1-84046-136-5

Satterfield, Theresa 1997, ‘“Voodoo Science” and Common Sense: Ways of Knowing Old-Growth Forests’, Journal of Anthropological Research, Vol. 53 pp. 443-459

Saunders, R. Stephen and Pettengill, Gordon H. 1991 12 April, ‘Magellan: Mission Summary’, Science, Vol. 252 pp. 247-249

Saunders, R. Stephen, Arvidson, Raymond E., Head III, James W., Schaber, Gerald G., Stofan, Ellen R., and Solomon, Sean C. 1991 12 April, ‘An Overview of Venus Geology’, Science, Vol. 252 pp. 249-252

Saussure, Ferdinand de 1983, A Course in General Linguistics, (London: Duckworth) ISBN: 0 7156 1670 6

Scanga, F. (Ed.) 1964, Atlas of Electron Microscopy: Biological Applications, (Amsterdam, London and New York: Elsevier)

Scanlon, Eileen, Whitelegg, Elizabeth, and Yates, Simeon (Eds.) 1999, Communicating Science: Contexts and Channels (Reader 2), (London and New York: Routledge) ISBN: 0-415-19753-8

Schwartz Cowan, Ruth 1985, ‘Gender and Technological Change’, MacKenzie, Donald and Wajcman, Judy (Eds.) The Social Shaping of Technology: How the Refrigerator got its Hum (Chap. 3) pp. 53-54, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Schwartz Cowan, Ruth 1985, ‘How the Refrigerator Got its Hum’, MacKenzie, Donald and Wajcman, Judy (Eds.) The Social Shaping of Technology: How the Refrigerator got its Hum (Chap. 15) pp. 202-218, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Schwartz, Lillian F. 1998, ‘Computer Aided Illusions: Ambiguity, Perspective and Motion’, The Visual Computer, Vol. 14 pp. 52-68

Scully, Marlan O., Englert, Berthold-Georg, and Walther, Herbert 1991 9 May, ‘Quantum Optical Tests of Complementarity’, Nature, Vol. 351 pp. 111-116

Seager, William 1995, ‘Ground Truth and Virtual Reality: Hacking vs. Van Fraassen’, Philosophy of Science, Vol. 62 pp. 459-478

Secord, Anne 1994 Sept., ‘Science in the Pub’, History of Science, Vol. 32 (3) pp. 269-315

Sekula, Allan 1982, ‘On the Invention of Photographic Meaning’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 4) pp. 84-109, (London: Macmillan) ISBN: 0-333-27194-7

Sekula, Allan 1986, ‘Building and Archive: Photography Between Labour and Capital’, Holland, Patricia, Spence, Jo, and Watney, Simon (Eds.) Photography/Politics: Two pp. 153-161, (London: Comedia Publishing Group and Photography Workshop) ISBN: 0-906890-89-6

Serres, Michel 1982, Hermes: Literature, Science, Philosophy, (Baltimore/London: The Johns Hopkins University Press) ISBN: 0-8018-2455-9

Shannon, Claude E. and Weaver, Warren 1998 [1949], The Mathematical Theory of Communication, (Chicago and Urbana: University of Illinois Press) ISBN: 0-252-72548-4

Shapin, Steven 1974, ‘The Audience for Science in Eighteenth-Century Edinburgh’, History of Science, Vol. 12 pp. 95-121

Shapin, Steven 1984, ‘Pump and Circumstance: Robert Boyle’s Literary Technology’, Social Studies of Science, Vol. 14 pp. 481-520

Shapin, Steven 1984, ‘Talking History: Reflections on Discourse Analysis’, Isis, Vol. 75 pp. 125-130

Shapin, Steven 1989, ‘Science and the Public’, Olby, R. C. et al (Eds.) Companion to the History of Modern Science (Chap. 65) pp. 990-1007, (London: Routledge)

Shapin, Steven 1996, The Scientific Revolution, (Chicago and London: University of Chicago Press)

Shapin, Steven and Schaffer, Simon 1985, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life, (Princeton and Oxford: Princeton University Press) ISBN: 0-691-02432-4

Sharafuddin, A. M. 1986, ‘Science Popularisation: A View from the Third World’, Impact of Science on Society (Unesco), Vol. 144 pp. 341-346

Shiach, Morag 1989, Discourse on Popular Culture: Class, Gender and History in Cultural Analysis, 1730 to the Present, (Cambridge & Oxford: )

Shinn, Terry and Whitley, Richard (Eds.) 1985, Expository Science: Forms and Functions of Popularisation, (Dordrecht & Boston: )

Shortland, Michael and Gregory, Jane 1991, Communicating Science, A Handbook, 1st Edn., (Harlow: Longman Scientific & Technical) ISBN: 0-582-05709-4

Silverstone, Roger 1984, Narrative Strategies in Television Science: A Case for Study, (British Film Institute)

Silverstone, Roger 1985, Framing Science: The Making of a BBC Documentary, (London: British Film Institute)

Silverstone, Roger 1989, ‘Science and the Media: The Case of Television’, Doorman, S J (Ed.) Images of Science: Scientific Practice and the Public pp. 187-211, (Aldershot: )

Sismondo, Sergio 1997 July, ‘Deflationary Metaphysics and the Construction of Laboratory Mice’, Metaphilosophy, Vol. 28 (3) pp. 219-232

Snow, C. P. 1969, The Two Cultures: And A Second Look, 2nd Edn., (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-09576-X

Sokal, Alan D. 1996 May / June, ‘A Physicist Experiments with Cultural Studies’, Lingua Franca, pp. 62-64

Sokal, Alan D. 1996 (Spring/Summer), ‘Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity’, Social Text, Vol. 46-47 (Science Wars) pp. 217-252

Sokal, Alan D. and Bricmont, Jean 1998, Intellectual Impostures: Postmodern Philosophers’ Abuse of Science, (London: Profile Books) ISBN: 1-86197-074-9

Sontag, Susan 1979, On Photography, (London: Penguin) ISBN: 0-14-005397-2

Sorell, Tom 1991, Scientism: Philosophy and the Infatuation with Science, (London and New York: Routledge) ISBN: 0-415-03399-3

Squires, Carol (Ed.) 1991, The Critical Image: Essays on Contemporary Photography, (London: Lawrence & Wishart)

Squires, Euan 1994, The Mystery of the Quantum World, 2nd Edn., (Bristol and Philadelphia: Institute of Physics Publishing ) ISBN: 0-7503-0178-3

Stafford, Barbara Maria 1994, Artful Science: Enlightenment Entertainment and the Eclipse of Visual Education, (Cambridge, Mass. and London: MIT Press) ISBN: 0-262-19342-6

Stahl, William A. 1995 (Spring), ‘Venerating the Black Box: Magic in Media Discourse on Technology’, Science, Technology & Human Values, Vol. 20 (2) pp. 234-258

Stebbing, L Susan 1937, Philosophy and the Physicists, 1st Edn., (London: Penguin)

Stepansky, Barbara K. 1997, ‘Ambiguity: Aspects of the Wave-Particle Duality’, British Journal for the History of Science, Vol. 30 pp. 375-385

Steven, Stewart 1999 21 February, ‘If They Didn’t Lie So Much, We Might Listen’, The Mail on Sunday p. 35

Stewart, Balfour 1873, The Conservation of Energy: An Elementary Treatise on Energy and Its Laws, (London: Henry S. King)

Stewart, Balfour and Lockyer, J. Norman 1868 August, ‘The Place of Life in a Universe of Energy’, Macmillan’s, Vol. 20 p. 319

Stewart, Balfour and Tait, Peter Guthrie 1886, The Unseen Universe; or, Physical Speculations on a Future State, 8th Edn., (London: Macmillan)

Stewart, Larry 1992, The Rise of Public Science: Rhetoric, Technology and Natural Philosophy in Newtonian Britain 1660-1750, (Cambridge: )

Stockdale, Alan 1995, ‘“Stop Talking about Science!”: A Response to Labinger’, Social Studies of Science, Vol. 25 (2) pp. 337-341

Stofan, Ellen R. 1993 August, ‘The New Face of Venus’, Sky and Telescope, pp. 22-31

Stoppard, Tom 1988, Hapgood, (London and Boston: Faber and Faber) ISBN: 0-571-15160-4

Sutton, Clive 1992, Words, Science and Learning, (Buckingham and Philadelphia: Open University Press) ISBN: 0-335-09956-4

Sutton, Clive 1994, ‘“Nullius in Verba” and “nihil in Verbis”: Public Understanding of the Role of Language in Science’, British Journal for the History of Science, Vol. 27 pp. 55-64

Tagg, John 1988, The Burden of Representation: Essays on Photographies and Histories, (Basingstoke: Macmillan Education) ISBN: 0-333-41823-9

Taylor, Charles Alan 1996, Defining Science: A Rhetoric of Demarcation, (London and Wisconsin: The University of Wisconsin Press) ISBN: 0-299-15034-8

Teller, Edward 1998 22 May, ‘Science and Morality’, Science, Vol. 280 (5367) pp. 1200-1201

The Editors of Nature 1991 12 September, ‘Nature’s Manifesto for British Science’, Nature, Vol. 353 pp. 105-112

The Editors of Nature 1999 11 March, ‘Cultural Divides, Forty Years On’, Nature, Vol. 398 (6723) p. 91

The Editors of Time-Life Books 1982, Photography as a Tool, (Amsterdam: Time-Life Books) ISBN: 7054 0414 5

Theocharis, T and Psimipoulis, M 1987, ‘Where has Science Gone Wrong?’, Nature, Vol. 329 pp. 595-598

Theorizing Culture: An Interdisciplinary Critique After Postmodernism 1995, Adam, Barbara and Allan, Stuart (Eds.) (London: UCL Press) ISBN: 1-85728-329-5

Thomson, John B 1984, Studies in the Theory of Ideology, (Cambridge: Polity Press)

Thomson, John B 1990, Ideology and Modern Culture: Critical Social Theory in the Era of Mass Communication, (Cambridge: Polity Press) ISBN: 0 7456 0082 4

Toben, Bob and Wolf, Fred Alan 1982, Space-Time and Beyond – The New Edition, 2nd Edn., (New York: E P Dutton Inc) ISBN: 0-525-47710-1

Tonomura, A., Endo, J., Matsuda, T., Kawasaki, T., and Ezawa, H. 1989, ‘Demonstration of Single-Electron Buildup of an Interference Pattern’, American Journal of Physics, Vol. 57 (2) pp. 117-120

Tonomura, Akira 1994 March, ‘Electron Holography Shows Its Resolve’, Physics World, Vol. 7 (3) pp. 39-43

Toren, Nina 1988, Science and Cultural Context: Soviet Scientists in Comparative Perspective, (New York: Peter Lang) ISBN: 0-8204-0668-6

Trachtman, Leon E 1981, ‘The Public Understanding of Science Effort: A Critique’, Science, Technology & Human Values, Vol. 36

Traub, Charles H. and Lipkin, Jonathan 1998 May, ‘If We Are Digital: Crossing the Boundaries’, Leonardo, Vol. 31 (5) pp. 363-366

Trumbo, Jean 1999 June, ‘Visual Literacy and Science Communication’, Science Communication, Vol. 20 (4) pp. 409-425

Trumbo, Jean 2000 June, ‘Seeing Science: Research Opportunities in the Visual Communication of Science’, Science Communication, Vol. 21 (4) pp. 379-391

Tudge, Colin 1999 13 March, ‘Let Us into the Tower of Knowledge’, The Guardian p. 3

Tufte, Edward R. 1983, The Visual Display of Quantitative Information, (Cheshire, Connecticut: Graphics Press)

Tufte, Edward R. 1997, Visual Explanations, (Cheshire, Connecticut: Graphics Press)

Turner, Frank M 1978, ‘The Victorian Conflict Between Science and Religion: A Professional Dimension’, Isis, Vol. 69 pp. 356-376

Upson, Craig, Faulhaber Jr, Thomas, Kamins, David, Laidlaw, David, Schlegel, David, Vroom, Jeffrey, Gurwitz, Robert, and van Dam, Andries 1989 July, ‘The Application Visualization System: A Computational Environment for Scientific Visualization’, IEEE Computer Graphics and Applications, Vol. 9 (4) pp. 30-42

Van Helden, Albert 2000, Thomas Harriot’s Moon Drawings, (http://es.rice.edu/ES/humsoc/Galileo/Things/moons_harriot.html: Rice University)

Vandoni, Carlo E. 1994, ‘Visualization of Scientific Data for High Energy Physics: Basic Architecture and a Case Study’, Grave, Michael, Hewitt, W. Terry, and Le Lous, Yvon (Eds.) Visualization in Scientific Computing (Chap. 5) pp. 43-53, (Berlin, London, etc.: Springer Verlag) ISBN: 3-540-56147-1

Veltman, Kim H. 1996, ‘Electronic Media: The Rebirth of Perspective and the Fragmentation of Illusion’, Druckrey, Timothy (Ed.) Electronic Culture: Technology and Visual Representation pp. 209-227, (New York: Aperture) ISBN: 0-89381-678-7

Vine, Ian 1997, ‘The Dangerous Psycho-Logic of Media ‘Effects’’, Barker, Martin and Petley, Julian (Eds.) Ill Effects: The Media/Violence Debate (Chap. 8) pp. 125-146, (London and New York: Routledge) ISBN: 0-415-14673-9

Vitebsky, Piers 1995, The Shaman (London, New York, Boston and Toronto: Little Brown and Company) ISBN: 0-316-90304-3

Vollhardt, H., Henn, C., Moekel, G., Teschner, M., and Brickmann, J. 1995, ‘Virtual Reality Modeling Language in Chemistry’, Journal of Molecular Graphics, Vol. 13 pp. 368-372

Vološinov, Valentin Nikolaevic 1973, Marxism and the Philosophy of Language, (London and New York: Seminar Press)

von Baeyer, Hans Christian 1993, Taming the Atom: The Emergence of the Visible Microworld, (London, etc.: Viking) ISBN: 0-670-84017-3

Vvedensky, Dimitri D. and Holloway, Stephen (Eds.) 1992, Graphics and Animation in Surface Science, (Bristol, Philadelphia & New York: Adam Hilger (IOP)) ISBN: 0-7503-0118-X

Waldner, Rosmarie 1993, The Role of Newspapers, Ackrill, Kate (Ed.) pp. 89-96, (London: Ciba Foundation)

Wallis, Roy (Ed.) 1979, On the Margins of Science: The Social Construction of Rejected Knowledge, (Vol. 27) (Sociological Review) ISBN: 0-904425-06-1

Walrand, Jean 1998, Communication Networks: A First Course, 2nd Edn., (Boston, etc.: McGraw-Hill) ISBN: 0-256-17404-0

Walsh, J 1982 15 Jan., ‘Public Attitude to Science is Yes, But-’, Science, Vol. 215 pp. 270-272

Watney, Simon 1982, ‘Making Strange: The Shattered Mirror’, Burgin, Victor (Ed.) Thinking Photography, 1st Edn., (Chap. 7) pp. 39-154, (London: Macmillan) ISBN: 0-333-27194-7

Watney, Simon 1986, ‘On the Institutions of Photography’, Holland, Patricia, Spence, Jo, and Watney, Simon (Eds.) Photography/Politics: Two pp. 187-197, (London: Comedia Publishing Group and Photography Workshop) ISBN: 0-906890-89-6

Watt, Alan 1993, 3D Computer Graphics, 2nd Edn., (Wokingham, etc.: Addison Wesley) ISBN: 0-201-63186-5

Watt, Ian M. 1985, The Principles and Practice of Electron Microscopy, (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-38927-5

Watterson, Andrew 1997 March, ‘Diverging Perceptions of Hazards and Risks, Review of A. Irwin and B. Wynne’s Misunderstanding Science.’, EASST Review, Vol. 16 (1)

Weimer, David M. 1994, ‘Brave New Virtual Worlds’, Pickover, Clifford A. and Tewksbury, Stuart K. (Eds.) Frontiers of Scientific Visualization (Chap. 9) pp. 245-278, (New York, Chichester: Wiley) ISBN: 0-471-30972-9

Weinberg, Steven 1993, Dreams of a Final Theory – The Search for the Fundamental Laws of Nature, 1st Edn., (London: Vintage) ISBN: 0-09-922391-0

Weinberg, Steven 1994, ‘Response to Steve Fuller’, Social Studies of Science, Vol. 24 pp. 748-751

Weinberg, Steven 1996 8 August, ‘Sokal’s Hoax’, New York Review of Books, pp. 11-15

Weinberg, Steven 1996, ‘Letter to the Editor: Reply to Critics’, New York Review of Books, p. 55

Weisendanger, Roland 1994, Scanning Probe Microscopy and Spectroscopy: Methods and Applications, (Cambridge, etc.: Cambridge University Press) ISBN: 0-521-41810-0

Welland, Mark 1994 March, ‘New Tunnels to the Surface’, Physics World, Vol. 7 (3) pp. 32-36

Wells, Liz (Ed.) 1997, Photography: A Critical Introduction, (London and New York: Routledge) ISBN: 0-415-12559-6

Wells, Liz 1997, ‘On and Beyond the White Walls: Photography as Art’, Wells, Liz (Ed.) Photography: A Critical Introduction (Chap. 5) pp. 199-248, (London and New York: Routledge) ISBN: 0-415-12559-6

Wesseling, Lies 1997 March, ‘Technological Culture and Literary Representation (Review of: Joseph Tabbi, Postmodern Sublime: Technology and American Writing from Mailer to Cyberpunk (Ithaca and London: Cornell University Press, 1995. 243 pp.))’, EASST Review, Vol. 16 (1)

Westrum, Ron 1978, ‘Science and Social Intelligence about Anomalies: The Case of Meteorites’, Social Studies of Science, Vol. 8 pp. 461-493

Wheeler, John Archibald 1990, A Journey into Gravity and Spacetime, (New York: HPHLP) ISBN: 0-7167-5016-3

Whitley, Richard 1985, ‘Knowledge Producers and Knowledge Acquirers: Popularisation as a Relation Between Scientific Fields and their Publics’, Shinn, Terry and Whitley, Richard (Eds.) Expository Science: Forms and Functions of Popularisation (Chap. 1) pp. 3-28, (Dordrecht and Boston: )

Williams, Raymond 1958, Culture and Society 1780-1950, (London: Chatto and Windus)

Williams, Raymond 1976, Keywords: A Vocabulary of Culture and Society, (Glasgow: Fontana)

Williams, Robyn 1993, Old Blokes in White Coats, Ackrill, Kate (Ed.) pp. 189-200, (London: Ciba Foundation)

Wilson, Edward O. 1998 27 March, ‘Integrated Science and the Coming Age of the Environment’, Science, Vol. 279 pp. 2048-2049

Wilson, Edward O. 1998, Consilience: The Unity of Knowledge, (New York: Knopf)

Winner, Langdon 1985, ‘Do Artifacts have Politics?’, MacKenzie, Donald and Wajcman, Judy (Eds.) The Social Shaping of Technology: How the Refrigerator got its Hum (Chap. 1) pp. 26-38, (Milton Keynes and Philadelphia: Open University Press) ISBN: 0-335-15026-8

Winterson, Jeanette 1997, Gut Symmetries, (London: Granta Books) ISBN: 1-87207-000-8

Wise, M. Norton 1996, ‘Letter to the Editor’, New York Review of Books, pp. 54-55

Wolfson, Richard and Pasachoff, Jay M. 1998, Physics with Modern Physics for Scientists and Engineers, 3rd Edn., (Reading, Mass. etc.: Addison Wesley)

Wolfe, Rosalee 1998 Nov., ‘OpenGL: Agent of Change or Sign of the Times?’, ACM SIGGRAPH Computer Graphics, pp. 29-30

Wolfendale, Arnold 1996, Report of the Committee to Review the Contribution of Scientists and Engineers to the Public Understanding of Science, Engineering and Technology, (London: Her Majesty’s Stationery Office)

Wolff, Robert S. and Yaeger, Larry 1993, Visualization of Natural Phenomena, (Berlin, Heidelberg, New York: Springer-Verlag) ISBN: 3-540-97809-7

Wolpert, Lewis 1992, The Unnatural Nature of Science, (London and Boston: Faber and Faber) ISBN: 0-571-16490-0

Wolpert, Lewis 1994, ‘Response to Steve Fuller’, Social Studies of Science, Vol. 24 pp. 745-747

Wolpert, Lewis 1995 ?, Is Science Dangerous?, (W H Smith)

Wolpert, Lewis 1995 Sunday 19 March, ‘Mind and Matter (Review of Robin Dunbar, The Trouble With Science)’, The Sunday Times p. 6

Wolpert, Lewis and Richards, Alison 1988, A Passion for Science: Renowned Scientists Offer Vivid Personal Portraits of Their Lives, (Oxford: Oxford University Press) ISBN: 0-19-854212-7

Wood, John (Ed.) 1998, The Virtual Embodied: Presence/Practice/Technology, (London and New York: Routledge) ISBN: 0-415-16026-X

Woolgar, Steve 1981, ‘Discovery: Logic and Sequence in a Scientific Text’, Knorr, Karin D, Krohn, Roger, and Whitley, Richard (Eds.) The Social Process of Scientific Investigation pp. 239-268, (Dordrecht: )

Woolgar, Steve 1988, Science: The Very Idea, (Chichester and London: Ellis Horwood and Tavistock Publications)

Woollacott, Janet 1982, ‘Messages and Meanings’, Gurevitch, Michael et al (Eds.) Culture, Society and the Media (Chap. 4) pp. 91-111, (London and New York: Methuen) ISBN: 0-416-73510-X

Wynne, Brian 1992, ‘Misunderstood Misunderstandings: Social Identities and Public Uptake of Science’, Public Understanding of Science, Vol. 1 pp. 281-304

Wynne, Brian 1993, ‘Public Uptake of Science: A Case for Institutional Reflexivity’, Public Understanding of Science, Vol. 2 pp. 285-426

Wynne, Brian 1994, ‘Reinventing the Wheel’, Jasanoff, Sheila et al (Eds.) Handbook of Science and Technology Studies (Chap. 17) pp. 361-389, (Thousand Oaks (CA), London, New Delhi: Sage Publications) ISBN: 0-8039-4021-1

Yam, Philip 1995 July, ‘In the Atomic Corral’, Scientific American, p. 13

Yearley, Steven 1988, Science, Technology and Social Change, 1st Edn., (London: Unwin Hyman) ISBN: 0-04-301259-0

Yearley, Steven 1990, ‘The Dictates of Method and Policy: Interpretational Structures in the Representation of Scientific Work’, Lynch, Michael and Woolgar, Steve (Eds.) Representation in Scientific Practice pp. 337-356, (Cambridge, Mass.: MIT Press) ISBN: 0-262-62076-6

Yearley, Steven 1992 4 Dec, ‘Discriminating Consumers’, The Times Higher Education Supplement, pp. 28-28

Young, Andrew T. 1984 May, ‘No Sulfur Flows on Io’, Icarus, Vol. 58 (2) pp. 197-226

Young, Robert 1985, Darwin’s Metaphor: Nature’s Place in Victorian Culture, (Cambridge: Cambridge University Press)

Young, Robert M. 1992, ‘Science, Ideology and Donna Haraway’, Science as Culture, Vol. 3 (15) pp. 165-207

Young, Robert M. 1993, What Scientists Have to Learn

Zajonc, Arthur 1993, Catching the Light: The Entwined History of Light and Mind, (Oxford, etc.: Oxford University Press) ISBN: 0-19-509575-8

Zangwill, A. 1988, Physics at Surfaces, (Cambridge: Cambridge University Press)

Zettl, A. 1993 10 June, ‘Making Waves With Electrons’, Nature, Vol. 363 pp. 496-497