

NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER

2003 ANNUAL REPORT



This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.

LBNL-54267, January 2004


TABLE OF CONTENTS

THE YEAR IN PERSPECTIVE

ADVANCES IN COMPUTATIONAL SCIENCE

Astrophysics: Computing the Big Picture
Universe: The Movie
Simulation Matches Historic Gamma-Ray Burst
Nearby Supernova Factory Churns Out Discoveries
Fusion Energy: Improved Algorithms Shift Plasma Simulations into High Gear
Full Waves, Faster Runs for AORSA
SuperLU Solver Supercharges NIMROD
AMR Methods Accelerate MHD Simulations
Biology: Deciphering Nature’s Assembly Manuals
ProteinShop: Solving Protein Structures from Scratch
Mapping the Universe of Protein Structures
DOE Facility Plan Sees Major Role for Computing
Nanoscience: Modeling a Realm Where Everything Is Different
Making Fuel Cells More Efficient and Affordable
Climate Modeling: Predicting the Results of a Global Experiment
Fingerprints in the Sky
Applied Mathematics: The Bridge between Physical Reality and Simulations
Algorithms in the Service of Science

RESEARCH NEWS

Accelerator Physics
Applied Mathematics
Astrophysics
Chemical Sciences
Climate Change Research
Computer Science
Fusion Energy Sciences
High Energy Physics
Materials Sciences
Medical and Life Sciences
Nuclear Physics

THE NERSC CENTER

Clients, Sponsors, and Advisors
Capability and Scalability
Comprehensive Scientific Support
Connectivity
Advanced Development

Appendix A: NERSC Policy Board
Appendix B: NERSC Computational Review Panel
Appendix C: NERSC Users Group Executive Committee
Appendix D: Supercomputing Allocations Committee
Appendix E: Office of Advanced Scientific Computing Research
Appendix F: Advanced Scientific Computing Advisory Committee
Appendix G: Organizational Acronyms



THE YEAR IN PERSPECTIVE

It is with great satisfaction that we write this introduction to a report that describes some truly astounding computational results that were obtained with the help of NERSC in 2003.

In July 2003 Dr. Raymond Orbach, Director of the DOE Office of Science, while testifying about high-end computing before the U.S. Congress, explicitly mentioned NERSC’s role in the discovery that the expansion of the Universe is accelerating, as well as the recent U.S. decision to rejoin the international collaboration to build the ITER fusion reactor. “What changed,” he said, “were simulations that showed that the new ITER design will in fact be capable of achieving and sustaining burning plasma.”

Supercomputing continues to evolve at a rapid rate, and events at NERSC in 2003 demonstrate this quite clearly. For a few months we could claim to run the largest unclassified system in the U.S. But as we all know, these superlatives don’t last. What does last are the scientific accomplishments obtained using large systems.

Horst D. Simon



From accomplishments such as these, it is clear that we really have entered the new era where computational science is an equal partner with theory and experiment in making scientific progress.

At NERSC the biggest change in 2003 was the upgrade of our current Seaborg platform from 5 to 10 Tflop/s peak performance. This created one of the largest systems in the U.S. dedicated to basic computational science. It now gives NERSC users around-the-clock access to an unprecedented 6,556 processors, coupled with one of the largest memory systems anywhere. DOE-supported computational scientists continue to have access to one of the best possible resources to further the DOE mission in the basic sciences.

The upgrade to 10 Tflop/s went very smoothly, and the new system came online one month earlier than planned. The first applications test results on the upgraded Seaborg were very impressive: not only could users quickly scale several applications to 4,096 processors, but they were also able to achieve very high sustained performance rates, more than 60% of peak in a few cases. The speed of the upgrade process and the new applications results are a testimony to the expert staff at NERSC. We have again demonstrated our capability to quickly field and use a new system productively.

The expanded computational power of Seaborg made it possible to initiate a number of new projects. In the first half of 2003, NERSC started the scalability program, which gave users the opportunity to evaluate the scaling of their applications. For many users, Seaborg became the first platform on which they could explore calculations using several thousand processors.

New opportunities for computational science breakthroughs will hopefully arise in 2004 through the new Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program at NERSC. The goal of the program is to support a small number of computationally intensive, large-scale research projects that can make high-impact scientific advances through the use of a substantial allocation of computer time and data storage at NERSC. We just completed the selection of the first three INCITE projects, and we hope to announce breakthrough results from these projects in next year’s annual report.

William T. C. Kramer



One of the signposts of change in 2002 was the launch of the Earth Simulator system in Japan, which reenergized the high-end computing community in the U.S., resulting in a number of new activities in 2003 that made a profound impact on computational science and high performance computing. Both the High End Computing Revitalization Task Force (HECRTF) report and the Science-Based Case for Large-Scale Simulation (SCaLeS) report demonstrated that the high-end computing community in the U.S. is well positioned for future developments and growth in supercomputing. The SCaLeS report gives us a glimpse of what simulations are possible at sustained speeds in the range of tens of teraflop/s, and the HECRTF report lays out the critical research issues that we need to address in order to reach petascale computing performance. This is exciting news, because we are thinking big again in supercomputing. We are thrilled that the DOE computational science community, in particular our NERSC users, are ready to take the lead in moving to the next level of computational science.

One small step towards this bright future will happen at NERSC in the next few months. We will have the opportunity to acquire a new computing system that will take over some of the existing user workload from Seaborg, making even more time available on Seaborg for large-scale capability computing. This will be just an intermediate step towards our next big acquisition with NERSC 5. We are looking forward to another year of both scientific and computing accomplishments at NERSC. As always, this progress would not be possible without the NERSC staff, who continue to tirelessly dedicate their time, skill, and effort to make NERSC the best scientific computing resource in the world. Our special thanks to all of you.

Horst D. Simon
NERSC Center Division Director

William T. C. Kramer
Division Deputy and NERSC Center General Manager



ADVANCES IN COMPUTATIONAL SCIENCE



From ancient times, people have tried to interpret what they see in the night sky as a way of understanding their place in the cosmos. Today astrophysicists rely on high-end computers, networks, and data storage to manage and make sense of the massive amounts of astronomical data being produced by advanced instruments. Computational simulations performed on NERSC computers are answering questions as general as how structure developed in the Universe, and as specific as how supernovae produce gamma-ray bursts. NERSC’s real-time integration of data gathering with data analysis is enabling collaborators like the Nearby Supernova Factory to step up the pace of discovery. The projects described in this section demonstrate how modern instruments and computers have ushered in a golden age of astrophysics.

ASTROPHYSICS
COMPUTING THE BIG PICTURE



“Turning the snapshots into movies is the job of theorists,” says Joel Primack, Professor of Physics at the University of California, Santa Cruz. “With computational simulations, we can run the history of stars, galaxies, or the whole Universe forward or backward, testing our theories and finding explanations for the wealth of new data we’re getting from instruments on the ground and especially in space.”

Primack is one of the originators and developers of the theory of cold dark matter, which has become the standard theory of structure formation in the Universe because its predictions generally match recent observations. His research group uses NERSC’s Seaborg computer to create simulations of galaxy formation. Their work here has two major thrusts: simulations of large-scale structure formation in the Universe, and simulations of galaxy interactions.

How did the Universe evolve from a generally smooth initial state after the Big Bang (as seen in the cosmic microwave background radiation) to the chunky texture that we see today, with clusters, filaments, and sheets of galaxies? That is the question of large-scale structure formation. In the cold dark matter theory, structure grows hierarchically by gravitational attraction, with small objects merging in a continuous hierarchy to form more and more massive objects.
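The gravitational N-body calculation at the heart of such structure formation simulations can be illustrated with a minimal sketch. This is not the production code used by Primack’s group, which relies on far more sophisticated tree and particle-mesh methods; it is a toy direct-summation leapfrog integrator in arbitrary units, with the softening length, time step, and particle count invented for illustration.

```python
import numpy as np

def gravitational_accel(pos, mass, softening=0.05):
    """Direct-summation gravitational acceleration (G = 1 units)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dx = pos - pos[i]                        # vectors toward all particles
        r2 = (dx**2).sum(axis=1) + softening**2  # softened squared distance
        inv_r3 = r2**-1.5
        inv_r3[i] = 0.0                          # no self-force
        acc[i] = (mass[:, None] * dx * inv_r3[:, None]).sum(axis=0)
    return acc

def leapfrog(pos, vel, mass, dt=0.01, steps=1000):
    """Kick-drift-kick leapfrog; overdense regions collapse and merge."""
    acc = gravitational_accel(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc        # half kick
        pos += dt * vel              # drift
        acc = gravitational_accel(pos, mass)
        vel += 0.5 * dt * acc        # half kick
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.normal(size=(200, 3))      # 200 particles in an arbitrary initial cloud
vel = np.zeros((200, 3))
mass = np.full(200, 1.0 / 200)
pos, vel = leapfrog(pos, vel, mass)
```

Production cosmology codes replace the O(N²) direct sum with tree or grid methods and follow billions of particles, but the time-stepping structure is the same in spirit.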

However, “objects” in this context refers not only to ordinary “baryonic” matter (composed of protons and neutrons), but also to cold dark matter—cold because it moves slowly, and dark because it cannot be observed by its electromagnetic radiation. In fact, the latest simulation results1 suggest that galaxy clustering is primarily a result of the formation, merging, and evolution of dark matter “halos.”


UNIVERSE: THE MOVIE

When we look up into a clear night sky, we’re looking into the past. Even with the naked eye, we can see the Andromeda galaxy as it was 2.3 million years ago. Earth- and space-based telescopes provide us with a plethora of snapshots of the history of the Universe. But none of these snapshots show how the Universe or any of the objects within it evolved.



The concept of dark matter halos came from the realization that the dark matter in a galaxy extends well beyond the visible stars and gas, surrounding the visible matter in a much larger halo; the outer parts of galaxies are essentially all dark matter. Scientists came to this conclusion because of the way stars and galaxies move, and because of a phenomenon called gravitational lensing: when light passes through the vicinity of a galaxy, the gravity of the dark matter halo bends the light by different amounts, acting like imperfections in the glass of a lens.

In the view of Primack and his colleagues, the large-scale structure of the Universe is determined primarily by the invisible evolution and interactions of massive halos and subhalos, with ordinary matter being swept along in the gravitational ebbs and flows. They have used Seaborg to create the highest-resolution simulations of dark matter halos to date, producing 10 terabytes of output and models with around 100,000 halos. These models can be used to predict the types and relative quantities of galaxies, as well as how they interact. Figure 1 shows the simulated distribution of dark matter particles and halos at two different stages of structure formation. Interestingly, there are more subhalos in this model than there are visible objects in corresponding observational data. Observations from the DEEP (Deep Extragalactic Evolutionary Probe) and ESO/VLT (European Southern Observatory/Very Large Telescope) sky surveys should soon confirm or refute the predictions of this model.

Elliptical galaxies, galactic bulges, and a significant fraction of all the stars in the Universe are thought to be shaped largely by galaxy interactions and collisions, with dark matter still playing the major role but with bright matter interactions adding some details.

FIGURE 1
Simulated distribution of dark matter particles (points) and halos (circles) at redshift 3 (left) and redshift 0 (right). (Higher redshift signifies greater distance and age.) Image: Kravtsov et al., astro-ph/0308519.

1 A. V. Kravtsov, A. A. Berlind, R. H. Wechsler, A. A. Klypin, S. Gottlöber, B. Allgood, and J. R. Primack, “The dark side of the halo occupation distribution,” Astrophysical Journal (submitted); astro-ph/0308519 (2003).



For the first half of the Universe’s history, stars were formed mostly in spectacular “starburst” events resulting from the compression of gases by the gravitational tides of interacting galaxies. A starburst is a relatively brief event on a cosmic scale, lasting perhaps 100 million years, while the merger of two galaxies might take 2 to 3 billion years. The Hubble space telescope in 1996 captured spectacular images of two colliding galaxies and the resulting starburst (Figure 2).

By creating galaxy interaction simulations (Figure 3) and comparing them with images and data from a variety of telescopes and spectral bands, Primack’s research team tries to understand the evolution of galaxies, including phenomena such as starbursts. They recently focused their attention on how much gas is converted to stars during the course of a major galaxy merger, and found that previous studies had overestimated the amount of starburst. Their results showed that galaxy mergers produce large outflows of gas that inhibit star formation for a while after the merger, an effect that they are adding to their models of galaxy formation.

Like many physicists and astronomers, Primack believes that we are now living in the golden age of cosmology. “Cosmology is undergoing a remarkable evolution, driven mainly by the wonderful wealth of new data,” he says. “The questions that were asked from the beginning of the 20th century are now all being answered. New questions arise, of course, but there’s a good chance that these answers are definitive.”

Research funding: HEP, NSF, NASA (Organizational acronyms are spelled out in Appendix G)

FIGURE 2
The Hubble telescope uncovered over 1,000 bright, young star clusters bursting to life in the heart of a pair of colliding galaxies. The picture in the upper left provides a ground-based telescopic view of the two galaxies, called the Antennae because a pair of long tails of luminous matter, formed by the gravitational tidal forces of their encounter, resembles an insect’s antennae. The galaxies are located 63 million light-years away in the southern constellation Corvus. Hubble’s close-up view provides a detailed look at the “fireworks” in the center of the collision. The respective cores of the twin galaxies are the orange blobs, crisscrossed by filaments of dark dust. A wide band of chaotic dust stretches between the cores of the two galaxies. The sweeping spiral-like patterns, traced by bright blue star clusters, are the result of a firestorm of star birth that was triggered by the collision. Image: Brad Whitmore (Space Telescope Science Institute) and NASA.



A GRB detected on March 29, 2003 has provided enough information to eliminate all but one of the theoretical explanations of its origin. Computational simulations based on that model were already being developed at the NERSC Center when the discovery was made.

Gamma radiation from outer space is blocked by the Earth’s atmosphere. But if it were not, and if we could see gamma rays with our eyes, about once a day, somewhere in the world, people would see a spectacular flash, a hundred million times brighter than a supernova. These gamma-ray bursts, first discovered in the 1960s by satellites looking for violations of the Nuclear Test Ban Treaty, are the most energetic events in the Universe, but they are also among the most elusive. GRBs appear randomly from every direction. They do not repeat themselves. And they last from only a few milliseconds to a few minutes—astronomers consider a GRB long if it lasts longer than 2 seconds.

With data so hard to pin down, scientists for a long time could only speculate about what causes GRBs. Speculations ranged from the intriguing (comet/anti-comet annihilation) to the far-fetched

SIMULATION MATCHES HISTORIC GAMMA-RAY BURST

FIGURE 3
Snapshots from a simulation of two galaxies merging. In this simulation, the galaxies do not crash head-on; their cores sideswipe each other and spin wildly apart before their gravity pulls them back together. Image: Thomas J. Cox, University of California, Santa Cruz.

After three decades of scientific head-scratching, the origins of at least some gamma-ray bursts (GRBs) are being revealed, thanks to a new generation of orbiting detectors, fast responses from ground-based robotic telescopes, and a new generation of computers and astrophysics software.



(interstellar warfare). By 1993, 135 different theories on the origin of GRBs had been published in scientific journals. But in recent years, three theoretical models emerged as front-runners—one involving colliding neutron stars, and the other two involving supernovae, the “supranova” and “collapsar” models.

In the collapsar model (introduced in 1993 by Stan Woosley, Professor and Chair of Astronomy and Astrophysics at the University of California, Santa Cruz), the iron core of an aging star runs out of fuel for nuclear fusion and collapses of its own weight, creating a black hole or a dense neutron star. Material trying to fall onto this object forms a hot swirling disk and a narrow jet, which shoots out of the star in less than ten seconds at nearly the speed of light. When this jet erupts into interstellar space, it creates a “fireball” of gamma rays—the highest energy, shortest wavelength form of electromagnetic radiation. The rest of the star explodes as a supernova, but that event is eclipsed by the brighter GRB.

In the late 1990s, GRB research took a great leap forward when a new generation of orbiting detectors were launched, including the Italian-Dutch satellite BeppoSAX and NASA’s High-Energy Transient Explorer (HETE). These satellites were designed to provide quick notification of GRB discoveries for immediate follow-up observations by ground-based instruments. In 1997 astronomers discovered that long GRBs leave an afterglow of lower-energy light, such as X rays or visible light, that may linger for months, allowing researchers to pinpoint where the GRB originated and providing important clues about the event that produced it. The presence of iron in the afterglow light strongly suggested star explosions. In 1998, the supernova theories got an additional boost when a GRB and a supernova appeared in the same vicinity at roughly the same time; but the data was inconclusive and some scientists remained skeptical about the connection. Several similar suggestive but inconclusive events in subsequent years divided astronomers into two camps on the issue of associating GRBs with supernovae.

The skepticism was dispelled on March 29, 2003, when HETE detected an unusually bright and close GRB—only 2.6 billion light years from Earth instead of the typical 10 billion. The discovery triggered a swarm of observations that found the unmistakable spectral signature of a supernova in the afterglow, as reported in the June 19, 2003 issue of Nature.2 This event, named GRB030329 after its detection date, was dubbed the “Rosetta stone” of GRBs because it conclusively established that at least some long GRBs come from supernovae, and it singled out the collapsar model as the one that best fits the data collected so far. The March 29 burst’s afterglow was so bright that it allowed astronomers to study the event in unprecedented detail; they even joked about its casting shadows. The Hubble Space Telescope will be able to study the afterglow for another year, and radio telescopes may be able to track it longer.

The first 3D computational simulations of jet formation and breakout in the collapsar model (Figures 4 and 5) were already being conducted on Seaborg by Woosley and Weiqun Zhang, a Ph.D. candidate at UC Santa Cruz, under the sponsorship of the DOE Office of Science’s SciDAC Supernova Science Center (http://www.super-sci.org/), one of two SciDAC (Scientific Discovery through Advanced Computing) programs focusing on developing new computational methods for understanding supernovae. The goal of the Supernova Science Center is to discover the explosion mechanism of supernovae through numerical simulation.

Zhang, Woosley, and colleagues had been modeling jet formation after core collapse in both two and three dimensions for a few years when observations from GRB030329 validated their work and made it newsworthy. “We weren’t utterly surprised,” Woosley said, “because the evidence associating GRBs with supernovae had been accumulating for several years. But we weren’t totally confident, either.

2 J. Hjorth, J. Sollerman, P. Møller, J. P. U. Fynbo, S. E. Woosley, et al., “A very energetic supernova associated with the γ-ray burst of 29 March 2003,” Nature 423, 847 (2003).



When the light spectrum from the March 29 burst confirmed that it came from a supernova, that felt good.”

Woosley emphasized, however, that “the data is still way ahead of the theory.” The computational simulations are still incomplete, and the resolution needs to be improved. Upcoming work will include higher-resolution 3D studies of jet stability—which will help scientists interpret the differences in GRB observations—as well as simulating the full star explosion that accompanies the GRB and the core collapse that precedes it. The project will be challenging, even on a computer as fast as Seaborg.

“This does not mean that the gamma-ray burst mystery is solved,” Woosley added. “We are confident that long bursts involve a core collapse,

FIGURE 4
This image from a computer simulation of the beginning of a gamma-ray burst shows the jet 9 seconds after its creation at the center of a Wolf-Rayet star by the newly formed, accreting black hole within. The jet is now just erupting through the surface of the star, which has a radius comparable to that of the sun. Blue represents regions of low mass concentration, red is denser, and yellow denser still. Note the blue and red striations behind the head of the jet. These are bounded by internal shocks. Image: Weiqun Zhang and Stan Woosley.

FIGURE 5
This image from a computer simulation shows the distribution of relativistic particles (moving near light speed) in the jet as it breaks out of the star. Yellow and orange are very high energy and will ultimately make a gamma-ray burst, but only for an observer looking along the jet (± about 5 degrees). Note also the presence of some small amounts of energy in mildly relativistic matter (blue) at larger angles off the jet. These will produce X-ray flashes that may be seen much more frequently. Image: Weiqun Zhang and Stan Woosley.



probably creating a black hole. We have convinced most skeptics. We cannot reach any conclusion yet, however, on what causes short gamma-ray bursts.”

Aside from explaining the mysterious long GRBs, the study of supernovae will fill an essential gap in our knowledge of the Universe, because supernovae are currently thought to be the source of all the elements heavier than iron. Because supernovae cannot be recreated in a laboratory, numerical simulation is the only tool available for interpreting observational data and for developing a detailed understanding of the physical processes involved. NERSC’s IBM supercomputer is currently being used by several research groups studying supernovae.

Research funding: NP, SciDAC

The team had figured out how to accelerate the discovery of Type Ia supernovae by systematically searching the same patches of sky at intervals, then subtracting the images from one another, following up with more detailed measurements of the remaining bright spots. By comparing the distance of these exploding stars (which all have the same intrinsic brightness) with the redshifts of their home galaxies, researchers could calculate how fast the Universe was expanding at different times in its history.

As so often happens in science and in life, things did not turn out as expected. The researchers got their high-quality measurements all right, but the data showed that the expansion of the Universe was speeding up, not slowing down as everyone had assumed. The data were analyzed on NERSC computers using different sets of cosmological assumptions, but the results were inescapable. And Australia’s High-z Supernova Search Team, working independently, had reached the same conclusion.

So in 1998, the two teams of researchers made the startling announcement that the expansion of the Universe is accelerating. Before long a new term entered the vocabulary of astrophysics: dark energy, the unknown force driving the expansion, which accounts for about two-thirds of the total energy density in the Universe. To find out what dark energy is, scientists need more detailed measurements of the Universe’s expansion history. Thus they need even more observations of Type Ia supernovae, and they need to reduce some remaining uncertainty about the uniform brightness of these exploding stars.

Reducing the uncertainty and improving the calibration of Type Ia supernovae are among the chief goals of the Nearby Supernova Factory (SNfactory), a spinoff of the Supernova Cosmology Project,

NEARBY SUPERNOVA FACTORY CHURNS OUT DISCOVERIES

When the Supernova Cosmology Project got up to speed in the mid-1990s, its team of astrophysicists, led by Saul Perlmutter of Lawrence Berkeley National Laboratory, were looking forward to finally getting decent measurements of how much the expansion of the Universe was slowing down.



led by Berkeley Lab physicist Greg Aldering. At present, astronomers can determine the distance to a well-studied Type Ia supernova with an accuracy of 5 percent, an accuracy the SNfactory expects to improve. “Our motive is to establish a much better sample of nearby Type Ia supernovae, against which the brightness of distant supernovae can be compared to obtain relative distances,” says Aldering.

What researchers want to measure, says Aldering, also includes “the intrinsic colors of a Type Ia at every stage, so we’ll know the effects of intervening dust. And we want to know what difference the ‘metallicity’ of the home galaxy makes—that is, the presence of elements heavier than helium.”

To accomplish these goals, the SNfactory needs to discover and make detailed observations of 300 to 600 low-redshift supernovae, many more than have been studied so far. Automation and tight coordination of the search and follow-up stages are absolutely essential to achieving these numbers. During its first year of operation, the SNfactory found 34 supernovae. “This is the best performance ever for a ‘rookie’ supernova search,” Aldering says. “In our second year, we have shown we can discover supernovae at the rate of nine a month, a rate other searches have reached only after years of trying.”

This remarkable discovery rate is made possible by a high-speed data link, custom data pipeline software, and NERSC’s data handling and storage capacity. So far the SNfactory has processed a quarter-million images and archived 6 terabytes (trillion bytes) of compressed data at NERSC—one of the few centers with an archive large enough to store this much data. “The SNfactory owes much of its success to NERSC’s ability to store and process the vast amounts of data that flow in virtually every night,” says Aldering.

The SNfactory’s custom data pipeline software, developed by UC Berkeley graduate student Michael Wood-Vasey, manages up to 50 gigabytes (billion bytes) of data each night from wide-field cameras built and operated by the Jet Propulsion Laboratory’s Near Earth Asteroid Tracking program (NEAT). NEAT uses remote telescopes at Mount Palomar Observatory in Southern California and at the U.S. Air Force’s Maui Space Surveillance System on Mount Haleakala.


FIGURE 6
A special link in the High Performance Wireless Research and Education Network (HPWREN) transmits images from the Near Earth Asteroid Tracking program (NEAT) at Mount Palomar to NERSC for storage and processing for the SNfactory.



With the help of the High Performance Wireless Research and Education Network (HPWREN) program at the San Diego Supercomputer Center (SDSC), the SNfactory was able to establish a custom-built, high-speed link with Mount Palomar (Figure 6). Existing links to Maui and to SDSC through the DOE’s Energy Sciences Network (ESnet) complete the connection to NERSC.

NEAT sends images of about 500 square degrees of the sky to NERSC each night. The data pipeline software automatically archives these in NERSC’s High Performance Storage System (HPSS). NEAT’s telescopes revisit the same regions of the sky roughly every six days during a typical 18-day observing period. When a supernova appears in one of those galaxies, the SNfactory can find it using image subtraction software that can sift through billions of objects. This analysis is done using NERSC’s PDSF Linux cluster.
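The core of such a search is image differencing: align a new exposure with a reference image of the same field, subtract, and flag residuals that stand well above the noise as candidates. The sketch below is a deliberately simplified stand-in for the SNfactory’s production pipeline, which also handles astrometric alignment, PSF matching, and artifact rejection; the arrays and detection threshold here are arbitrary.

```python
import numpy as np

def find_candidates(new_image, reference, n_sigma=5.0):
    """Subtract a reference image and flag pixels that brightened significantly.

    Assumes the two images are already registered to the same pixel grid and
    photometrically scaled; real pipelines must also match the PSF.
    """
    diff = new_image - reference
    background = np.median(diff)
    # Robust noise estimate from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(diff - background))
    mask = diff > background + n_sigma * sigma
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))    # candidate (x, y) pixel positions

# Toy example: a flat field with one new point source
rng = np.random.default_rng(1)
ref = rng.normal(1000.0, 5.0, size=(64, 64))
new = ref + rng.normal(0.0, 5.0, size=(64, 64))
new[40, 12] += 200.0                              # the "supernova"
print(find_candidates(new, ref))                  # ~[(12, 40)]
```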

Eventually the pipeline will automate the entire discovery and confirmation process. Once a supernova is discovered from the Palomar or Maui images, follow-up observations will be obtained via remote control of a custom-built dual-beam optical spectrograph (being completed by SNfactory collaborators in France) mounted on the University of Hawaii’s 88-inch telescope on Mauna Kea. The Hawaii observations will be shipped by Internet for image processing at a supercomputing center in France and then sent to NERSC for analysis.

A recent major discovery made possible by the SNfactory was the first detection of hydrogen in the form of circumstellar material around a supernova—in this case, SN 2002ic, discovered near maximum light.3 Researchers have been looking for hydrogen to discriminate between different types of possible progenitor systems to Type Ia supernovae. Large amounts of hydrogen around SN 2002ic suggest that the progenitor system contained a massive asymptotic giant branch star that lost several solar masses of gas in a “stellar wind” in the millennia leading up to the Type Ia explosion. Discoveries like this, which provide a more detailed understanding of the physics of supernovae, will improve the usefulness of supernovae as distance markers on the journey back through the history of the cosmos.

Research funding: HEP, LBNL, FBF, CNRS, IN2P3, INSU, PNC

3 M. Hamuy, M. M. Phillips, N. B. Suntzeff, J. Maza, L. E. González, M. Roth, K. Krisciunas, N. Morrell, E. M. Green, S. E. Persson, and P. J. McCarthy, “An asymptotic-giant-branch star in the progenitor system of a type Ia supernova,” Nature 424, 651 (2003).



“The computer literally is providing a new window through which we can observe the natural world in exquisite detail.”1

Nowhere is that statement more true than in the field of plasma science, where for nearly three decades, computational simulations have contributed as much as experiments to the advancement of knowledge. The simulation of a magnetic fusion confinement system involves modeling of the core and edge plasmas, as well as the plasma-wall interactions. In each region of the plasma, turbulence can cause anomalies in the transport of the ionic species in the plasma; there can also be abrupt changes in the form of the plasma caused by large-scale instabilities. Computational modeling of these key processes requires large-scale simulations covering a broad range of space and time scales.

DOE’s SciDAC program has brought together plasma physicists, computer scientists, and mathematicians to develop more comprehensive models and more powerful and efficient algorithms that enable plasma simulation codes to take full advantage of the most advanced terascale computers, such as NERSC’s Seaborg IBM system. Dramatic improvements in three of these codes are described in this section.

FUSION ENERGY
IMPROVED ALGORITHMS SHIFT PLASMA SIMULATIONS INTO HIGH GEAR

1 J. S. Langer, chair of the July 1998 National Workshop on Advanced Scientific Computing.



However, on Earth a fusion reaction requires that the fuel be heated to hundreds of millions of degrees, even hotter than in the sun. At these astronomical temperatures, the fuel atoms are torn apart into their constituent electrons and nuclei, forming a state of matter called plasma.

One of the ways to get the fuel this hot is to use intense electromagnetic waves, much as a microwave oven is used to heat food. Waves are also used for other important purposes such as to drive electric currents affecting the plasma magnetic configuration, to force mass flow of the plasma, affecting its stability, and for other plasma control tasks. Therefore, it is important to have a good theoretical understanding of wave behavior and to be able to calculate it accurately. The goal of the SciDAC project “Terascale Computation of Wave-Plasma Interactions in Multidimensional Fusion Plasmas” is to develop the computational capability to understand the physical behavior of waves in fusion plasmas and to model these wave phenomena accurately enough that their effect on other plasma processes, such as stability and transport of particles and energy, can be understood.

According to project leader Don Batchelor of Oak Ridge National Laboratory, “Because the particles are so hot, they move at speeds almost the speed of light and can travel a distance comparable to a wavelength in the time of a few oscillations of the wave. This motion makes it difficult to calculate how the plasma particles will respond to the waves and how much heating or electric current they will produce.”

“Another challenge,” Batchelor adds, “is that at a given frequency, several kinds of plasma waves can exist with very different wavelengths and polarizations. A wave launched into the plasma can in a short distance completely change its character to another type of wave, a process called mode conversion. To study these effects, the computer model must have very high resolution to see the small-scale structures that develop, which means that very large computers are needed to solve for the very large number of unknowns in the equations. Also, the computers must be extremely fast in order to obtain the solutions in a reasonable time.”

Until now researchers wishing to calculate the effects of waves in plasma have been forced by computational feasibility to make a difficult choice between restricting consideration to a single dimension or simplifying the physics model. The first choice, treating the plasma geometry as one-dimensional, is akin to tunnel vision.

FULL WAVES, FASTER RUNS FOR AORSA

DOE’s magnetic fusion energy research program aims to produce energy by the same process that takes place in the sun.



The wave is computed along a single line through the plasma, but one does not get a picture of what is occurring in the whole plasma cross-section. The second choice eliminates from consideration many of the most important wave processes for today’s fusion experiments, which require high frequencies and can have very short wavelengths in some regions of the device.

In a partnership between plasma physicists and computer scientists, this project has increased the resolution and speed of plasma wave solvers to the point that it is possible for the first time to study the mode conversion process in the complicated geometry and the full scale of real fusion devices. Collaborators have developed new wave solvers in 2D and 3D called All-Orders Spectral Algorithm (AORSA) solvers based on a more general formulation of the physics called an integral equation. It is now possible to compute plasma waves across an entire plasma cross-section with no restriction on wavelength or frequency (Figure 1). In this approach the limit on attainable resolution comes only from the size and speed of the available computer.

AORSA uses a fully spectral representation of the wave field in a Cartesian coordinate system. All modes in the spectral representation are coupled, and so computing the solution requires calculating and inverting a very large dense matrix. For example, with 272 × 272 Fourier modes in two dimensions or 34 × 34 × 64 modes in three dimensions, the system comprises about 220,000 coupled complex equations that require about 788 GB of memory. AORSA-3D took about 358 minutes to run on 1,936 processors of NERSC’s Seaborg computer. The efficient computation achieved over 60 percent of available peak performance (over 1.9 teraflop/s, or 1,900 billion operations per second).

A new formulation of AORSA-3D transforms the linear system from a Fourier basis to a representation in the real configuration space. This alternative representation presents new opportunities to reduce the overall number of equations used. In one example of 34 × 34 × 64 modes, the number of equations was reduced by more than a factor of 5—from 220,000 to about 40,000. The new reduced system required only 26 GB of memory and was completed in about 13 minutes, with a 100-fold speedup in the linear solver and a 27-fold speedup in overall runtime. The efficiency gained from the real space formulation of AORSA-3D allows higher resolution and more design analysis for future experiments such as the Quasi-Poloidal Stellarator.
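Those memory figures are consistent with simple dense-matrix arithmetic: a system of N coupled complex equations stores an N × N matrix of double-precision complex entries at 16 bytes each. The check below is only that back-of-the-envelope estimate, not part of AORSA itself.

```python
BYTES_PER_COMPLEX = 16          # one double-precision complex matrix entry

def dense_matrix_gb(n_equations):
    """Memory for an n x n dense complex matrix, in GB (10**9 bytes)."""
    return n_equations**2 * BYTES_PER_COMPLEX / 1e9

# 3 * 272 * 272 = 221,952: consistent with ~220,000 unknowns if three
# field components are kept per 2D Fourier mode (an assumption here).
print(dense_matrix_gb(220_000))   # ~774 GB, close to the reported 788 GB
print(dense_matrix_gb(40_000))    # ~26 GB, matching the reduced real-space system
```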

“Funding from the SciDAC project has enabled us to assemble a team from multiple institutions to work on a unified set of goals in a way that was not possible before SciDAC,” Batchelor says. “Working with computer scientists funded to work directly with our project has made it feasible to study new regimes of wave physics

FIGURE 1
Mode conversion in a plasma cross section computed with AORSA. Panels show the real parts of E⊥ and E|| for the C-Mod D(3He, H) case, plotted over the plasma cross section in R (m) and Z (m). From E. F. Jaeger et al., “Sheared poloidal flow driven by mode conversion in tokamak plasmas,” Phys. Rev. Lett. 90, 195001 (2003).



which previously could only be treated in one dimension or with approximate techniques, and to carry out multiple code runs at high resolution for scientific case studies where before it was only possible to perform a single run at minimal resolution.”

“Working with applied mathematicians,” he added, “we are developing new mathematical representations for the wave fields that are more data efficient than the traditional spectral techniques presently employed and offer promise to substantially reduce the size of the computational load to solve plasma wave problems. And we are making scientific progress on problems of importance to the fusion program by applying the codes we have developed and improved.”

Research funding: FES, SciDAC, ORNL

As a result, NIMROD runs four to five times faster for cutting-edge simulations of nonlinear macroscopic electromagnetic dynamics—with a corresponding increase in scientific productivity.

The NIMROD project, funded by the DOE Office of Fusion Energy Sciences and the SciDAC Center for Extended Magnetohydrodynamic Modeling (CEMM), is developing a modern computer code suitable for the study of long-wavelength, low-frequency, nonlinear phenomena in fusion reactor plasmas. These phenomena involve large-scale changes in the shape and motion of the plasma and severely constrain the operation of fusion experiments. CEMM also supports a complementary plasma simulation code, AMRMHD, which uses different mathematical formulations (see below).

By applying modern computational techniques to the solution of extended magnetohydrodynamics (MHD) equations, the NIMROD team is providing the fusion community with a flexible, sophisticated tool which can lead to improved understanding of these phenomena, ultimately leading to a better approach to harnessing fusion energy. Since the beginning of the project, the NIMROD code has been developed for massively parallel computation, enabling it to take full advantage of the most powerful computers to solve some of the largest problems in fusion. The team’s primary high-end computing resource is Seaborg at NERSC.

NIMROD’s algorithm requires solution of several large sparse matrices in parallel at every time step in a simulation. The stiffness inherent in the physical system leads to matrices that are ill-conditioned, since rapid wave-like responses provide global communication within a single time-step. The preconditioned conjugate gradient (CG) solver that had been used was the most computationally demanding part of the algorithm, so Carl Sovinec of the NIMROD team consulted with the SciDAC Terascale Optimal PDE Simulations (TOPS) Center to find a replacement.

Conversations with David Keyes of Old Dominion University, Dinesh Kaushik of Argonne National Laboratory, and Sherry Li and Esmond Ng of Lawrence Berkeley National Laboratory pointed to SuperLU as a possibly more efficient matrix solver for NIMROD.

SUPERLU SOLVER SUPERCHARGES NIMROD

Developers of the NIMROD code, which is used to simulate fusion reactor plasmas, collaborated with members of the SciDAC Terascale Optimal PDE Simulations Center to implement the SuperLU linear solver software within NIMROD.



SuperLU is a library of software for solving nonsymmetric sparse linear systems in parallel using direct methods.

It took less than a month to implement, test, and release SuperLU into the full NIMROD production code, and the performance improvements were dramatic. For two-dimensional linear calculations of MHD instabilities, NIMROD runs 100 times faster with SuperLU than it does with the CG solver. While linear calculations do not require supercomputer resources, they are extremely useful for preliminary explorations that help determine which cases require in-depth nonlinear simulations. For linear problems, this performance improvement makes NIMROD competitive with special-purpose linear codes.

For cutting-edge three-dimensional, nonlinear tokamak simulations (Figure 2), which require supercomputing resources, NIMROD with SuperLU runs four to five times faster than before. This improved efficiency yields four to five times more physics results for the same amount of time on the computer—a major improvement in scientific productivity.

The nonlinear simulations accumulate relatively small changes in the matrix elements at each time step, but there are no changes to the sparsity pattern. The NIMROD implementation allows SuperLU to reuse its factors repeatedly until the matrix elements accumulate a significant change and refactoring becomes necessary. Refactoring makes the performance improvement less dramatic in nonlinear simulations than in linear calculations, but it is still very significant. The net result is that computationally demanding simulations of macroscopic MHD instabilities in tokamak plasmas, which until now were considered too difficult, have become routine.
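A minimal sketch of that reuse pattern is shown below using SciPy’s sparse direct solver, which wraps the sequential SuperLU library rather than the parallel SuperLU_DIST used with NIMROD on Seaborg. The matrix, right-hand side, and refactoring tolerance are invented; the point is simply that the cached factorization is reused until the operator has drifted enough to justify refactoring.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

class ReusableDirectSolver:
    """Cache a sparse LU factorization and refactor only on significant change."""

    def __init__(self, A, rtol=0.05):
        self.rtol = rtol
        self._set_matrix(A)

    def _set_matrix(self, A):
        self.A_factored = A.copy()
        self.lu = spla.splu(A.tocsc())        # SuperLU factorization via SciPy

    def solve(self, A, b):
        # Refactor only if the matrix entries have drifted beyond the tolerance.
        drift = spla.norm(A - self.A_factored) / spla.norm(self.A_factored)
        if drift > self.rtol:
            self._set_matrix(A)
        return self.lu.solve(b)

# Toy time loop: a slowly varying 1D Laplacian-like operator
n = 200
base = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
solver = ReusableDirectSolver(base)
for step in range(50):
    A = base + sp.eye(n, format="csc") * (1e-4 * step)   # small drift each step
    b = np.ones(n)
    x = solver.solve(A, b)
```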

Research funding: FES, SciDAC

FIGURE 2
Full 3D numerical simulation of plasma particle drift orbits in a tokamak. From Charlson C. Kim, Scott E. Parker, and the NIMROD team, “Kinetic particles in the NIMROD fluid code,” 2003 Sherwood Fusion Theory Conference, Corpus Christi, Texas.


At SC2000 in Dallas, Phil Colella, head of the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory, and Steve Jardin, co-leader of the Computational Plasma Physics Group at Princeton Plasma Physics Laboratory, were both scheduled to give talks in the Berkeley Lab booth. Colella discussed “Adaptive mesh refinement research and software at NERSC,” where Jardin has conducted his scientific computing for years. Jardin gave a presentation on “A Parallel Resistive MHD Program with Application to Magnetic Reconnection.”

The scientist and the mathematician got to talking between their presentations and one thing led to another. They began an informal collaboration which was soon formalized under the auspices of SciDAC—Jardin is principal investigator for the Center for Extended Magnetohydrodynamic Modeling (CEMM), while Colella is PI for the Applied Partial Differential Equations Center (APDEC). Jardin’s group was able to incorporate the CHOMBO adaptive mesh refinement (AMR) code developed by Colella’s group into a new fusion simulation code, which is now called the Princeton AMRMHD code. “Using the AMR code resulted in a 30 times improvement over what we would have had with a uniform mesh code of the highest resolution,” Jardin says.
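The idea behind AMR—spend grid cells only where the solution has fine-scale structure—can be sketched in one dimension. This is a conceptual illustration rather than the CHOMBO or AMRMHD machinery: coarse cells whose local gradient exceeds a threshold are flagged and covered with finer cells, so a sharp front gets high resolution while smooth regions stay coarse. The refinement ratio, threshold, and test profile are arbitrary.

```python
import numpy as np

def flag_cells(u, dx, threshold):
    """Flag coarse cells where the local gradient exceeds a threshold."""
    grad = np.abs(np.diff(u)) / dx
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= grad > threshold
    flags[1:] |= grad > threshold          # flag both cells sharing a steep face
    return flags

def refine(x_coarse, flags, ratio=4):
    """Return fine cell centers: 'ratio' fine cells per flagged coarse cell."""
    dx = x_coarse[1] - x_coarse[0]
    fine = []
    for xc, flagged in zip(x_coarse, flags):
        if flagged:
            offsets = (np.arange(ratio) + 0.5) / ratio - 0.5
            fine.extend(xc + offsets * dx)
    return np.array(fine)

# Toy problem: a steep front (like a shock/contact region) at x = 0.5
x = np.linspace(0.0, 1.0, 65)
u = np.tanh((x - 0.5) / 0.02)
flags = flag_cells(u, x[1] - x[0], threshold=5.0)
x_fine = refine(x, flags)
print(f"{flags.sum()} of {len(x)} coarse cells refined into {len(x_fine)} fine cells")
```

Production AMR codes repeat this flag-and-refine step recursively in multiple dimensions and keep coarse and fine levels synchronized in time, which is where the reported factor-of-30 savings over a uniformly fine mesh comes from.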

The AMRMHD code, developed in conjunction with Princeton researcher Ravi Samtaney and Berkeley Lab researcher Terry Ligocki, is already producing new physics results as well. It powered the first simulation demonstrating that the presence of a magnetic field will suppress the growth of the Richtmyer-Meshkov instability when a shock wave interacts with a contact discontinuity separating ionized gases of different densities. The upper and lower images in Figure 3 contrast the interface without (upper) and with (lower) the magnetic field. In the presence of the field, the vorticity generated at the interface is transported away by the fast and slow MHD shocks, removing the driver of the instability.


AMR METHODS ACCELERATE MHD SIMULATIONS

The annual Supercomputing conference held every November is well known as a sort of international watering hole, where many of the world’s leading experts in high-performance computing gather for a week to take stock of the competition, exchange ideas, and make new connections.



Results are shown for an effective mesh of 16,384 × 2,048 points, which took approximately 150 hours to run on 64 processors of Seaborg—25 times faster than a non-AMR code.

Another new physical effect discovered by the AMRMHD code is current bunching and ejection during magnetic reconnection (Figure 4). Magnetic reconnection refers to the breaking and reconnecting of oppositely directed magnetic field lines in a plasma. In the process, magnetic field energy is converted to plasma kinetic and thermal energy.

The CEMM project has been collaborating with other SciDAC software centers in addition to APDEC. For example, in a collaboration that predated SciDAC, the group developing the M3D code was using PETSc, a portable toolkit of sparse solvers distributed as part of the ACTS Collection of DOE-developed software tools. Also in the ACTS Collection is Hypre, a library of preconditioners that can be used in conjunction with PETSc. Under SciDAC, the Terascale Optimal PDE Solvers (TOPS) Center worked with CEMM to add Hypre underneath the same code interface that M3D was already using to call the PETSc solvers. The combined PETSc-Hypre solver library allows M3D to solve its linear systems two to three times faster than before.
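That pattern—swapping a preconditioner underneath an unchanged solver interface—can be sketched with petsc4py, the Python bindings to PETSc (M3D itself calls PETSc from compiled code, and this example does not reproduce its equations). Assuming a PETSc build with Hypre support, selecting the Hypre preconditioner is a one-line change on the KSP object; a simple 1D Laplacian stands in for M3D’s linear systems.

```python
from petsc4py import PETSc

n = 100
# Assemble a simple 1D Laplacian as a stand-in for a real linear system.
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
x = A.createVecRight()
b.set(1.0)

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.GMRES)

pc = ksp.getPC()
pc.setType(PETSc.PC.Type.HYPRE)      # requires a PETSc build with Hypre enabled
ksp.setFromOptions()                 # still overridable from the command line

ksp.solve(b, x)
print(ksp.getIterationNumber(), ksp.getConvergedReason())
```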

According to Jardin, fusion plasma models are continually being improved both by more complete descriptions of the physical processes, and by more efficient algorithms, such as those provided by PETSc and CHOMBO. Advances such as these have complemented increases in computer hardware speeds

FIGURE 3
Stabilization of the Richtmyer-Meshkov instability by a magnetic field. Panels show the interface at three successive times (t = t1, t2, t3) without a magnetic field (B = 0) and with a magnetic field (B ≠ 0). From R. Samtaney, “Suppression of the Richtmyer-Meshkov instability in the presence of a magnetic field,” Princeton Plasma Physics Laboratory report PPPL-3794 (submitted to Phys. Fluids).


FIGURE 4
Current bunching and ejection during magnetic reconnection. From R. Samtaney, S. Jardin, P. Colella, and T. Ligocki, “Numerical simulations of magnetic reconnection using AMR,” Sherwood Fusion Theory Conference, Rochester, NY, April 2002.



to provide a capability today that is vastly improved over what was possible 30 years ago (Figure 5). This rate of increase of effective capability is essential to meet the anticipated modeling demands of fusion energy research, Jardin says.

“Presently, we can apply our most complete computational models to realistically simulate both nonlinear macroscopic stability and microscopic turbulent transport in the smaller fusion experiments that exist today, at least for short times,” Jardin says. “Anticipated increases in both hardware and algorithms during the next five to ten years will enable application of even more advanced models to the largest present-day experiments and to the proposed burning plasma experiments such as ITER [the International Thermonuclear Experimental Reactor].”

Research funding: FES, SciDAC

For further discussion of AMR in computational science, see “Algorithms in the Service of Science” in the Applied Mathematics section.

FIGURE 5
“Effective speed” increases in fusion codes came from both faster hardware and improved algorithms. The chart plots effective sustained speed in equivalent gigaflops against calendar year (1970–2010) for global MHD and micro-turbulence codes; algorithmic advances (partially implicit and semi-implicit methods, higher-order elements, improved linear solvers, improved electron models, delta-f and magnetic-coordinate formulations, gyrokinetics) compound the gains from hardware milestones (Cray YMP, 16-processor Cray C90, 1,000 NERSC SP3 processors, the full Earth Simulator) well beyond the effective speed from hardware improvements alone. Image: Steve Jardin.


BIOLOGY: DECIPHERING NATURE’S ASSEMBLY MANUALS

Now that a working draft of the human genome has been created, scientists are busy identifying the genes within the sequences of those 3 billion DNA base pairs of nucleotides. The complete genomes of many other organisms—from microbes to plants to animals—are also available, with new genomes being completed every week. Scientists are now facing the opportunity to understand not just the human body but the fundamental processes of life itself.

Once a gene’s boundaries have been established, the next steps are to determine the structures of the proteins encoded in the gene, then to determine the functions of those proteins. NERSC users and collaborators are developing innovative computational tools to help determine protein structures and functions, as described in this section. These tools bring us closer to the goal of utilizing genomic discoveries to improve human health and to protect and restore the environment.


PROTEINSHOP: SOLVING PROTEIN STRUCTURES FROM SCRATCH

ProteinShop, a computer visualization tool for manipulating protein structures, is helping biologists close in on one of their cherished goals: completely determining an unknown protein’s shape from its gene sequence.

Silvia Crivelli of the Visualization Group in Berkeley Lab’s Computational Research Division says a major step forward came when “we copied concepts from robotics. When you move a robot’s arm, you move all the joints, like your real arm.” After a year of work on ProteinShop, Crivelli says, “we were able to apply the same mathematical techniques to protein structures.”

It’s just one of the components the ProteinShop vehicle uses to help competitors in the hottest race in biological computation—the “Grand Prix of bioinformatics” known as CASP, the Critical Assessment of Structure Prediction. In 1994, the challenge of predicting finished protein structures from gene sequences gave rise to the unique CASP scientific competition. Teams of biologists and computer scientists from around the world try to beat one another to the fastest and most accurate predictions of the structures of recently found but as-yet unpublished proteins. The number of competitors has more than quintupled, from fewer than three dozen groups in 1994 to 187 in 2002’s CASP5.

At CASP5, a team led by Teresa Head-Gordon of Berkeley Lab’s Physical Biosciences Division, who is also Assistant Professor of Bioengineering at the University of California at Berkeley, employed the Global Optimization Method developed by Head-Gordon and her colleagues, including Crivelli from Berkeley Lab and Bobby Schnabel, Richard Byrd, and Betty Eskow from the University of Colorado.

Instead of depending heavily on knowledge of protein folds (3D sets of structures) already stored in the Protein Data Bank (PDB), the Global Optimization Method uses initial protein configurations provided by ProteinShop, which was created by Wes Bethel and Crivelli of Berkeley Lab along with Nelson Max of Lawrence Livermore National Laboratory and the University of California at Davis, and UC Davis’s Bernd Hamann and Oliver Kreylos.

With a ProteinShop jumpstart, Head-Gordon’s CASP5 team was able to predict the configurations of 20 new or difficult protein folds ranging from 53 to 417 amino acids in length, which, says Crivelli, “is great for a method that doesn’t use much knowledge from the PDB.”



A protein’s shape is what determines its function. Yet a one-dimensional string of amino acid residues, as specified by a gene’s coding sequences, does not on the face of it reveal much about 3D shapes. Swimming in a watery environment, the amino-acid string—the protein’s primary structure—quickly twists into familiar shapes known as alpha helices and beta sheets, striving to reduce attractive and repulsive forces among the linked amino acids. The result is local regions where the energy required to maintain the structure is minimized.

The secondary structures swiftly fold up into a 3D tertiary structure; for most proteins this represents the “global” minimum energy for the structure as a whole. (Some complicated proteins like hemoglobin, assembled from distinct components, have a quaternary structure as well.)

By studying thousands of known proteins, biologists have learned to recognize sequences of amino acids that are likely to form alpha helices, plus those that form the flat strands that arrange themselves side by side in beta sheets. This secondary-structure information can be stored in data banks and applied to unknown protein folds.

But the so-called coil regions that link secondary structures in an unknown protein are not so easily predicted. The more amino acids in the protein, the more local energy minima there are, and the harder it becomes to calculate the global energy minimum.

The Global Optimization Method tackles the problem in two distinct phases.

FIGURE 1. Setup phase of the Global Optimization Method.
a. ProteinShop reads an unknown protein’s string of amino acids and assembles the predicted secondary structures: in this case, “uncoiled” coils alternate with beta strands and a central alpha helix. Even for large proteins, this step is virtually instantaneous.
b. The user tells ProteinShop to align two beta strands into a partial beta sheet, forming hydrogen bonds between selected amino acid residues from each strand. In the yellow coil region, inverse kinematics calculates the correct dihedral angles as the strands are bent together.
c. To assemble a second beta sheet, ProteinShop semiautomatically aligns hydrogen bonds in the third and fourth beta strands.
d. The two partial sheets are pulled together and stabilized by hydrogen bonds between beta strands one and four.
e. Now the user selects the central helix and manipulates it to reduce interference (where the orange spheres mark atom collisions). Meanwhile, inverse kinematics calculates the angles in the flanking coil regions, and ProteinShop keeps all structures intact while the helix is moved.
f. The final 3D protein structure, ready for optimization.
Images: Oliver Kreylos


During the setup phase (Figure 1a–f), the program generates likely secondary structures from a given sequence of amino acids, which are combined into several approximate configurations. During the next phase, the overall energy of the structure is reduced step by step. Global optimizations are performed on subsets of the angles among the amino acid residues (dihedral angles) which are predicted to make up coil regions. Eventually, because the energy function is not perfect, several tertiary structures are proposed.

Before ProteinShop, the setup phase of the Global Optimization Method was immensely time-consuming. Because a protein’s primary structure contains dozens or hundreds of amino acid residues, and thus thousands of atoms, the initial phase required computer runs of hours or days before settling on secondary structures. Moreover, while the prediction of alpha helices was straightforward, beta sheets were hard to assemble from beta strands, and their configurations were less certain. In CASP4, in 2000, the team predicted the structures of only eight folds, the longest containing 240 amino acids.

The Visualization Group developed ProteinShop in preparation for CASP5. ProteinShop incorporates the mathematical technique called inverse kinematics—well known not only in robotics but also in video gaming. Inverse kinematics is used to predict the overall movements of structures consisting of jointed segments, for example the fingers, arms, and shoulders of a robot or an animated figure. By taking into account the degrees of freedom permissible at each joint, contortions that don’t break limbs or penetrate bodies can be predicted.

“The difference is that with a robot you have maybe 10 or 20 joints, but in a coil we often have long regions, 80 amino acids,” Crivelli says, “and we want all of the dihedral angles among them to move in a concerted way.” In ProteinShop, the secondary structures and coils are built up by adding amino acids to the structure one at a time, treating each as a jointed segment of a flexible structure. Within seconds a “geometry generator” module incorporates predicted secondary structures, or fragments of them, in the string. “The whole thing looks like you could just move it around like spaghetti,” says Crivelli. “But before we incorporated reverse kinematics, if you tried to move a protein configuration, it broke.”
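The reason a single rotation can break a structure is that every dihedral angle acts as a joint: rotating it sweeps all downstream atoms around the bond axis, so an edit at one joint displaces the entire chain unless compensating rotations are found elsewhere. The toy Python sketch below (not ProteinShop code; the six-atom chain, the chosen bond, and the 35-degree angle are invented for illustration) shows the forward-kinematics half of that picture using Rodrigues' rotation formula; inverse kinematics is the harder inverse problem of choosing a concerted set of such rotations that leaves the rest of the structure intact.

import numpy as np

def rotate_about_bond(coords, i, j, theta):
    """Rotate every atom after atom j around the i-j bond axis by theta radians.

    coords: (n_atoms, 3) positions along a chain. This is forward kinematics:
    one dihedral change moves the whole downstream chain.
    """
    k = coords[j] - coords[i]
    k = k / np.linalg.norm(k)
    moved = coords.copy()
    for a in range(j + 1, len(coords)):
        v = coords[a] - coords[j]
        # Rodrigues' rotation formula: v cos(theta) + (k x v) sin(theta) + k (k.v)(1 - cos(theta))
        v_rot = (v * np.cos(theta)
                 + np.cross(k, v) * np.sin(theta)
                 + k * np.dot(k, v) * (1 - np.cos(theta)))
        moved[a] = coords[j] + v_rot
    return moved

# A made-up six-atom backbone fragment; rotate the dihedral about the bond between atoms 2 and 3.
chain = np.array([[0, 0, 0], [1.5, 0, 0], [2.3, 1.2, 0],
                  [3.8, 1.2, 0], [4.6, 2.4, 0], [6.1, 2.4, 0]], dtype=float)
new_chain = rotate_about_bond(chain, 2, 3, np.deg2rad(35.0))
print(new_chain.round(2))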

Now the process works fast enough to be truly interactive, allowing the user to alter the dihedral angles between individual amino acids. In difficult cases like the assembly of beta strands into sheets, the user can manipulate the conformation to achieve the best, least energetic fit. Moreover, the user can play with the entire “preconfiguration,” dragging whole secondary assemblies into new relationships without breaking previous structures. “Once we have the alpha and beta structures, we want to leave them alone,” Crivelli says. “Mainly we work on the coil regions.”

The ProteinShop program allows for various styles of illustrating the fold and its parts, and it helps out the user by sending plenty of signals. In one mode, little orange spheres pop up when the user has forced atoms to collide: the more energy-expensive the collision, the bigger the orange sphere. Manipulation can also be simplified by hand-selecting residues for automatic bonding.

The second phase of the Global Optimization Method—seeking the global energy minimum—remains a computational challenge. The basic approach to finding the optimal energy for a tertiary structure is to repeatedly tweak the energy budgets of many smaller regions, incorporating each improvement until there is no further change. This “physics-based” method depends only on energy calculations, not on knowledge of existing folds.

In practice, to do such calculations for every part of a structure would take much too long. The sophisticated procedures developed by Head-Gordon’s team optimize only randomly sampled, small regions. Thus it’s not certain that the end result is the true global optimum. Even so, global optimization of an unknown protein of moderate size may require weeks of computer time.
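In outline, the strategy is the familiar one from stochastic local search: repeatedly pick a small random subset of variables, locally minimize the energy over just those variables with everything else frozen, and keep any improvement. The Python sketch below is a schematic of that idea, not the team's actual Global Optimization Method; the cosine-plus-coupling "energy" over dihedral angles is invented purely for illustration.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def energy(angles):
    """A stand-in 'energy' over dihedral angles: smooth, multi-welled, cheap to evaluate."""
    return (np.sum(np.cos(angles) + 0.1 * angles**2)
            + 0.5 * np.sum(np.sin(angles[:-1] - angles[1:])))

def optimize_subsets(angles, n_rounds=200, subset_size=5):
    """Repeatedly minimize the energy over small random subsets of angles, holding the rest fixed."""
    angles = angles.copy()
    for _ in range(n_rounds):
        idx = rng.choice(len(angles), size=subset_size, replace=False)

        def partial_energy(sub):
            trial = angles.copy()
            trial[idx] = sub
            return energy(trial)

        res = minimize(partial_energy, angles[idx], method="L-BFGS-B")
        if res.fun < energy(angles):   # keep only improvements
            angles[idx] = res.x
    return angles

start = rng.uniform(-np.pi, np.pi, size=80)   # e.g., 80 coil dihedral angles
final = optimize_subsets(start)
print(f"energy: {energy(start):.2f} -> {energy(final):.2f}")

Because only random subsets are visited, nothing guarantees the final configuration is the true global minimum, which is exactly the caveat noted in the paragraph above.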

“We are working out new methodology that will combine our physics-based approach with knowledge-based methods,” says Crivelli. “By recognizing structures and fragments that are known to work, we won’t have to calculate every angle from scratch. The tool will be highly interactive, displaying energies and saving minima as the user finds them. It will be organized as a guided search through a dynamically evolving tree, basing new structures on previous ones that have been shown to work for the fold.”

To enable this high degree of interaction, still higher performance will be required from the Global Optimization Method’s already high-performance parallel-processing codes. The results will have implications for scientific problems far beyond the successful solution of unknown protein structures. Other complex optimization problems might be amenable to this combination of local and global optimization approaches, such as transportation and scheduling, structural optimization, chip design, and machine learning.

Research funding: BER, ASCR-MICS, NSF

MAPPING THE UNIVERSE OF PROTEIN STRUCTURES

The protein universe, the vast assemblage of biological molecules that are the building blocks of living cells and control the chemical processes which make those cells work, has finally been mapped.

Researchers at Berkeley Lab and the University of California at Berkeley have created the first 3D global map of the protein structure universe (Figure 2). This map provides important insight into the evolution and demographics of protein structures and may help scientists identify the functions of newly discovered proteins.

Sung-Hou Kim, a chemist with a joint appointment at Berkeley Lab and UC Berkeley, led the development of this map. An internationally recognized authority on protein structures, he expressed surprise at how closely the map, which is based solely on empirical data and a mathematical formula, mirrored the widely used Structural Classification of Proteins (SCOP), which is based on the visual observations of scientists who have been solving protein structures.

FIGURE 2. This 3D map of the protein structure universe shows the distribution in space of the 500 most common protein fold families as represented by spheres. The spheres, which are colored according to classification, reveal four distinct classes: α (red), β (yellow), α+β (blue), and α/β (cyan). It can be seen that the α, β, and α+β classes share a common area of origin, while the α/β class of protein structures is shown to be a late bloomer on the evolutionary flagpole. From J. Hou, G. E. Sims, C. Zhang, and S.-H. Kim, “A global representation of the protein fold space,” PNAS 100, 2386 (2003).


“Our map shows that protein folds are broadly grouped into four different classes that correspond to the four classes of protein structures defined by SCOP,” Kim says. “Some have argued that there are really only three classes of protein fold structures, but now we can mathematically prove there are four.”

Protein folds are recurring structural motifs or “domains” that underlie all protein architecture. Since architecture and function go hand in hand for proteins, solving what a protein’s structure looks like is a big step toward knowing what that protein does.

The 3D map created by Kim along with Jingtong Hou, Gregory Sims, and Chao Zhang shows the distribution in space of the 500 most common protein fold families, represented by points that are spatially separated in proportion to their structural dissimilarities. The distribution of these points reveals a high level of organization in the fold structures of the protein universe and shows how these structures have evolved over time, growing increasingly larger and more complex.
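Placing points in 3D so that their pairwise distances track a matrix of structural dissimilarities is, generically, a multidimensional scaling (MDS) problem. The paper's exact embedding procedure is not reproduced here; the Python sketch below only illustrates the generic idea with scikit-learn's MDS on a small, randomly generated dissimilarity matrix standing in for fold-to-fold dissimilarity scores.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(42)

# A made-up symmetric dissimilarity matrix for 20 hypothetical fold families
# (in the real map, entries would come from pairwise structural comparisons).
n_folds = 20
scores = rng.uniform(0.1, 1.0, size=(n_folds, n_folds))
dissimilarity = (scores + scores.T) / 2.0
np.fill_diagonal(dissimilarity, 0.0)

# Embed the folds as points in 3D so that Euclidean distances approximate the dissimilarities.
embedding = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dissimilarity)

print(coords.shape)                       # (20, 3): one 3D point per fold family
print(f"stress: {embedding.stress_:.3f}")  # how well distances match dissimilarities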

“When the structure of a new protein is first solved, we can place it in the appropriate location on the map and immediately know who its neighbors are and its evolutionary history, which can help us predict what its function may be,” Kim says. “This map provides us with a conceptual framework to organize all protein structures and functions and have that information readily available in one place.”

With the completion of a working draft of the human genome, in which scientists determined the sequences of the 3 billion DNA bases that make up the human genome, the big push now is to identify coding genes and the molecular and cellular functions of the proteins associated with them. Coding genes are DNA sequences that translate into sequences of amino acids, which RNA assembles into proteins.

The prevailing method for predicting the function of a newly discovered protein is to compare the sequence of its amino acids to the amino acid sequences of proteins whose functions have already been identified. A major problem with relying exclusively on this approach is that while proteins in different organisms may have similar structure and function, the sequences of their amino acids may be dramatically different. “This is because protein structure and function are much more conserved through evolution than genetically based amino acid sequences,” Kim says.
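At its crudest, that sequence-comparison approach reduces to scoring how similar two amino acid strings are and transferring the annotation of the best-scoring known protein. The Python sketch below is only a toy version of the idea, using a standard-library similarity ratio on invented sequences with hypothetical annotations; real tools rely on substitution matrices, gapped alignments, and statistical significance estimates, none of which are shown here.

from difflib import SequenceMatcher

# Invented amino acid sequences (one-letter codes); the annotations are hypothetical.
known_proteins = {
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ": "ATP binding",
    "GSHMTEYKLVVVGAGGVGKSALTIQLIQNHFVD": "GTPase activity",
}
query = "MKTAYIAKQRELSFVKSHFSRQLEERLGLIEVQ"

def similarity(a, b):
    """Crude global similarity in [0, 1]; a stand-in for a real alignment score."""
    return SequenceMatcher(None, a, b).ratio()

# Transfer the annotation of the most similar known sequence to the query.
best = max(known_proteins, key=lambda seq: similarity(query, seq))
print(f"best match similarity: {similarity(query, best):.2f}, "
      f"predicted function: {known_proteins[best]}")

The limitation Kim describes is exactly that two proteins can score poorly by any such sequence measure yet still share the same fold and function.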

Kim has been a leading advocate for grouping proteins into classes on the basis of their fold structures and using these structural similarities to help predict individual protein functions. While the protein universe may encompass as many as a trillion different kinds of proteins on Earth, most structural biologists agree there are probably only about ten thousand distinctly different types of folds.

“A smaller number of new protein folds are discovered each year, despite the fact that the number of protein structures determined annually is increasing exponentially,” Kim says. “This and other observations strongly suggest that the total number of protein folds is dramatically smaller than the number of genes.”

The rationale behind this idea is that through the eons, proteins have selectively evolved into the architectural structures best suited to do their specific jobs. These structures essentially stay the same for proteins from all three kingdoms of life—bacteria, archaea, and eukarya—even though the DNA sequences encoding for a specific type of protein can vary wildly from the genome of one organism to another, and sometimes even within the same organism.

In the map created by Kim and his colleagues, elongated groups of fold distributions approximately corresponding to the four SCOP structural classifications can be clearly seen. These classifications, which are based on secondary structural compositions and topology, are the “alpha” helices, “beta” strands, and two mixes of helices and strands, one called “alpha plus beta” and the other “alpha slash beta.” The map reveals that the first three groups share a common area of origin, possibly corresponding to small primordial proteins, while the “alpha slash beta” class of proteins does not emerge until much later in time.

“It is conceivable that, of the primordial peptides, those containing fragments with high helix and/or strand propensity found their way to fold into small alpha, beta, and alpha-plus-beta structures,” Kim says. “The alpha-slash-beta fold structures do not appear until proteins of sufficient size rose through evolution and the formation of supersecondary structural units became possible.”

Since understanding the molecular functions of proteins is key to understanding cellular functions, the map developed by Kim and his colleagues holds promise for a number of areas of biology and biomedical research, including the design of more effective pharmaceutical drugs that have fewer side effects. “This map can be used to help design a drug to act on a specific protein and to identify which other proteins with similar structures might also be affected by the drug,” Kim says.

For the next phase of this research, Kim and his colleagues plan to tap into NERSC’s supercomputers to add the rest of the some 20,000 and counting known protein structures to their map. They also plan to set up a Web site where researchers can submit new protein structures they have solved for inclusion in the map.

Research funding: BER, NIH, NSF


DOE FACILITY PLAN SEES MAJOR ROLE FOR COMPUTING

In a speech at the National Press Club on November 10, 2003, U.S. Energy Secretary Spencer Abraham outlined the Department of Energy’s Office of Science 20-year science facility plan, a roadmap for future scientific facilities to support the department’s basic science and research missions. The plan prioritizes new, major scientific facilities and upgrades to current facilities.

“This plan will be the cornerstone for the future of critical fields of science in America. These facilities will revolutionize science—and society,” said Abraham. “With this plan our goal is to keep the United States at the scientific forefront.”

Priority 2 in the plan is an Ultra-Scale Scientific Computing Capability (USSCC), to be located at multiple sites, which would increase by a factor of 100 the computing capability available to support open scientific research. Most existing supercomputers have been designed with the consumer market in mind; USSCC’s new configurations will develop computing capability specifically designed for science and industrial applications. These facilities will operate like Office of Science light sources: available to all, subject to proposal peer review.

The USSCC will involve long-term relationships with U.S. computer vendors and an integrated program of hardware and software investments designed to optimize computer performance for scientific and commercial problems by creating a better balance of memory size, processor speed, and interconnection rates. A very similar strategy was spelled out in the white paper that NERSC together with seven other DOE labs submitted to the High End Computing Revitalization Task Force in June 2003 (see http://www.nersc.gov/news/HECRTF-V4-2003.pdf).

An upgrade of the NERSC facility is tied for Priority 7. This upgrade will ensure that NERSC, DOE’s premier scientific computing facility for unclassified research, continues to provide high-performance computing resources to support the requirements of scientific discovery.

NERSC will continue to provide the core scientific computing needed by the research community, and will complement the “grand challenge” approach pursued under the USSCC, according to the facility plan. Reaffirming the goals of the NERSC Strategic Proposal for 2002–2006, the plan describes the NERSC upgrade as using Grid technology to deploy a capability designed to meet the needs of an integrated science environment combining experiment, simulation, and theory by facilitating access to computing and data resources, as well as to large DOE experimental instruments. NERSC will concentrate its resources on supporting scientific challenge teams, with the goal of bridging the software gap between currently achievable and peak performance on the new terascale platforms.


NANOSCIENCE: MODELING A REALM WHERE EVERYTHING IS DIFFERENT

The nanoscale—between 10 and 100 billionths of a meter—is the scale at which the fundamental properties of materials and systems are established. Melting temperature, magnetic properties, charge capacity, and even color are dictated by the arrangement of nanoscale structures. The realm of molecular biology also operates largely at the nanoscale.

While much is known about the physical properties and behavior of isolated molecules and bulk materials, the properties of matter at the nanoscale cannot necessarily be predicted from those observed at larger or smaller scales. Nanoscale structures exhibit important differences that cannot be explained by traditional models and theories.

Research at the nanoscale aims to provide a fundamental understanding of the unique properties and phenomena that emerge at this length scale, as well as the ability to tailor those properties and phenomena to produce new materials and technologies in fields as diverse as electronics, biotechnology, medicine, transportation, agriculture, environment, and national security. But a quantitative understanding of nanoscale systems requires the development of new computational methods. This section focuses on one application central to DOE’s mission: promoting clean and efficient energy production.


MAKING FUEL CELLS MORE EFFICIENT AND AFFORDABLE

Fuel cells sound like an ideal power source—they are efficient and quiet, have few moving parts, run on hydrogen, and emit only water vapor.

It’s no wonder that President Bush proposed new research into hydrogen-powered automobiles in his 2003 State of the Union address. But for this dream to become reality, scientists and engineers need to make fuel cells less expensive.

While there are several types of fuel cells, proton exchange membrane (PEM) cells are the leading candidates for powering light-duty vehicles and buildings and for replacing batteries in portable electronic devices. The PEM is a polymer electrolyte that allows hydrogen ions (protons) to pass through it. As hydrogen (H2) is fed to the anode side of the fuel cell, a catalyst (usually platinum or a platinum-ruthenium alloy) encourages the hydrogen atoms to separate into electrons and protons. The electrons travel from anode to cathode along an external circuit, creating a usable electric current. Meanwhile, on the cathode side, oxygen molecules (O2) are being separated by the catalyst into separate atoms in a process called oxygen reduction. The negatively charged oxygen atoms (adsorbed on the catalyst) attract the positively charged hydrogen ions through the PEM, and together they combine with the returning electrons to form water molecules (H2O).

While splitting the hydrogen molecules is relatively easy for platinum-alloy catalysts, splitting the oxygen molecules is more difficult. Although platinum catalysts are the best known for this task, they are still not efficient enough for widespread commercial use in fuel cells, and their high price is one of the reasons why fuel cells are not yet economically competitive.

Finding more efficient and affordable catalysts is one of the goals of a research group led by Perla Balbuena, Associate Professor of Chemical Engineering at the University of South Carolina. Using computational chemistry and physics techniques, they are trying to predict the properties of electrodes consisting of bimetallic and trimetallic nanoparticles embedded in a polymer system and dispersed on a carbon substrate. They are working to identify the atomic distribution, vibrational modes, and diffusion coefficients of a variety of alloy nanoparticles.

“The rationale behind the use of alloy nanoparticles,” Balbuena says, “is that each of the elements, or particular combinations of them, may catalyze different aspects of the reaction. The large surface-to-volume area of nanoparticles may also contribute to their effectiveness.”


Experimental data indicate that alloys can be as effective as or better than platinum catalysts, but nobody fully understands why this is so. Little is known about the physical, chemical, and electrical properties of alloy nanoparticles, and in particular about the atomic distribution of the various elements on the catalyst surface, which is difficult to determine experimentally. Computational studies play an essential role in answering these questions.

For example, Figure 1 shows a test simulation of a system involving only platinum nanoparticles on a carbon substrate and surrounded by a hydrated polymer. The goal of this simulation was to test the stability of the nano-size catalyst in such an environment—that is, how the polymer groups, water, and substrate influence the shape and mobility of the nanoparticles, and how they influence the exposed surfaces where the oxygen molecules will be adsorbed and dissociated into oxygen atoms.

Balbuena and her colleagues recently developed a new computational procedure to investigate the behavior of active catalytic sites, including the effect of the bulk environment on the reaction. They demonstrated this procedure in simulations of oxygen dissociation on platinum-based clusters alloyed with cobalt, nickel, or chromium, and embedded in a platinum matrix. To determine the optimal alloy, they studied a variety of compositions and geometries for alloy ensembles embedded in the bulk metal (Figure 2).

The researchers concluded that cobalt, nickel, and chromium make platinum-based complexes more reactive because they enable a higher number of unpaired electrons. Their computations yielded extensive new information that would be very hard to obtain experimentally, such as showing for the first time how alloying results in changes in the density of electronic states, and how those changes influence the oxygen reduction process. One of the questions they addressed was whether nickel, cobalt, and chromium might become oxidized and act as “sacrificial sites” where other species can adsorb, leaving the platinum sites available for oxygen dissociation. Their study shows that this may be the case for nickel, but that alternative mechanisms are possible, especially for cobalt and chromium, which instead of being just sacrificial sites, act as active sites where oxygen dissociates, in some cases more efficiently than on platinum sites.

FIGURE 1. Platinum nanoparticles (gold-colored atoms) are deposited on a graphite support. The long chains are Nafion fragments composed of fluorine (green), carbon (blue), oxygen (red), and sulfur (yellow) atoms; some of these chains become adsorbed over the metallic clusters. The background contains water molecules (red for oxygen and white for hydrogen). Nafion, widely used for proton exchange membranes, has both hydrophilic and hydrophobic chemical groups, so some degree of water segregation is observed. Image: Eduardo J. Lamas.


They found that the most effective active sites for promoting oxygen dissociation are O2CoPt, O2Co2Pt, O2CrPt, and O2Cr2Pt, while ensembles involving nickel atoms are as catalytically effective as pure platinum.

The Balbuena group’s research is now focusing on the time evolution of the complete reaction on the best catalytic ensembles, including the combination of oxygen atoms with protons leading to water formation. Future work will address the possibilities of further reducing the catalyst size, nowadays on the order of tens of nanometers, as well as developing a full understanding of the role of the rest of the environment (polymer, water, and substrate) in the outcome of the reaction. The ultimate goal of these studies is to enable the design of new catalysts from first principles rather than trial-and-error experiments, reducing the cost of catalysts for fuel cells through the use of optimum alloy combinations requiring only minimal amounts of expensive materials.

Research funding: BES

FIGURE 2. Oxygen molecules (red) adsorb on a bimetallic surface of platinum (purple) and cobalt (green). The oxygen molecules most frequently adsorb on “bridge” sites composed of two cobalt atoms or one cobalt and one platinum atom. The oxygen molecules will subsequently dissociate into individual atoms. From P. B. Balbuena, D. Altomare, L. Agapito, and J. M. Seminario, “Theoretical analysis of oxygen adsorption on Pt-based clusters alloyed with Co, Ni, or Cr embedded in a Pt matrix,” J. Phys. Chem. B (in press).


CLIMATE MODELING: PREDICTING THE RESULTS OF A GLOBAL EXPERIMENT

Ever since the beginning of the industrial era, when humans began dramatically increasing the amount of carbon dioxide in the atmosphere by burning large quantities of fossil fuels, we have unintentionally been conducting a massive experiment on the Earth’s climate. Unfortunately, this uncontrolled experiment does not meet the criteria of the scientific method—we do not have a parallel “control” planet Earth where we can observe how the climate would have evolved without the addition of CO2 and other greenhouse gases. One of the goals of computational climate modeling is to provide such control scenarios so that human contributions to climate change can be quantified. This section describes how climate researchers studied the rising boundary between the troposphere and stratosphere and determined that human activity is the principal cause.


FINGERPRINTS IN THE SKY

For a physicist and atmospheric scientist, Ben Santer of Lawrence Livermore National Laboratory has an unusual specialty—he is a fingerprint expert.

Like a police detective, Santer looks for unique identifying marks that can be used to determine responsibility. But instead of looking for criminals, Santer is looking for specific, quantitative evidence of human influence on changes in the Earth’s climate. The “fingerprints” he tries to match are patterns of data from observations of the atmosphere and from computer simulations.

“We examine the output from the computer models and compare this with observations,” Santer says. “In the model world (unlike the real world!) we have the luxury of being able to change one factor at a time, while holding the others constant. This allows us to isolate and quantify the effect of individual factors.”

One of the most dramatic applications of this “fingerprinting” technique recently answered the question of why the tropopause has been rising for more than two decades. The tropopause is the boundary between the lowest layer of the atmosphere—the turbulently mixed troposphere—and the more stable stratosphere. The “anvil” often seen at the top of thunderclouds is a visible marker of the tropopause, which lies roughly 10 miles above the Earth’s surface at the equator and 5 miles above the poles.

The average height of the tropopause rose about 200 meters between 1979 and 1999. In their first comparison of observed data with computer models, Santer and his colleagues concluded that the increase in tropopause height was driven by the warming of the troposphere by greenhouse gases and the cooling of the stratosphere by ozone depletion.1 But to quantify the influences (or “forcings” in climate jargon) even further, they considered three anthropogenic forcings—well-mixed greenhouse gases, sulfate aerosols, and tropospheric and stratospheric ozone—as well as two natural forcings—changes in solar irradiance and volcanic aerosols—all of which are likely to influence tropopause height.2

The observational data were compared with seven simulation experiments using the Parallel Climate Model (PCM). In the first five experiments, only one forcing at a time was changed. In the sixth experiment, all five forcings were included simultaneously.

1 B. D. Santer, R. Sausen, T. M. L. Wigley, J. S. Boyle, K. AchutaRao, C. Doutriaux, J. E. Hansen, G. A. Meehl, E. Roeckner, R. Ruedy, G. Schmidt, and K. E. Taylor, “Behavior of tropopause height and atmospheric temperature in models, reanalyses, and observations: Decadal changes,” J. Geophys. Res. 108, 4002 (2003).


The seventh experiment used only the two natural forcings. To improve the reliability of the results, each experiment was performed four times using different initial conditions from a control run. This methodology allowed the researchers to estimate the contribution of each forcing to overall changes in atmospheric temperature and tropopause height.
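Schematically, each forcing's contribution is estimated by averaging over the realizations of an experiment (to suppress internally generated variability) and comparing single-forcing runs against the control and the all-forcings run. The short Python sketch below illustrates that bookkeeping with made-up arrays; it is not the PCM analysis code, and the array names, trends, and shapes are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
n_real, n_years = 4, 100            # 4 realizations per experiment, 100 years each

# Hypothetical tropopause-height anomaly time series (hPa) for each experiment.
control = rng.normal(0.0, 1.0, (n_real, n_years))
ghg_only = rng.normal(0.0, 1.0, (n_real, n_years)) + np.linspace(0.0, 3.0, n_years)
all_forcings = rng.normal(0.0, 1.0, (n_real, n_years)) + np.linspace(0.0, 5.0, n_years)

def ensemble_mean(runs):
    """Average over realizations to suppress internally generated variability."""
    return runs.mean(axis=0)

# Estimated contribution of the single forcing: its ensemble mean minus the control mean.
ghg_signal = ensemble_mean(ghg_only) - ensemble_mean(control)
total_signal = ensemble_mean(all_forcings) - ensemble_mean(control)
print(f"GHG share of total change in final decade: "
      f"{ghg_signal[-10:].mean() / total_signal[-10:].mean():.0%}")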

The results of the analysis showed that in the mid-1980s, the combined five forcings (ALL) diverged from the natural forcings (SV), showing the dominance of human influences (Figure 1A). Of the five forcings, greenhouse gases (G) and ozone (O) played the biggest role (Figure 1B). Major volcanic eruptions can be seen to lower tropopause height sharply by cooling the troposphere and warming the stratosphere, but this effect is short-lived. The analysis concludes that about 80% of the rise in tropopause height is due to human activity. The model-predicted fingerprint of tropopause height changes was statistically detectable in two different observational data sets, NCEP and ERA (Figure 2).
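The "linear trend" panels in Figure 2 are maps of the least-squares slope of tropopause pressure at each grid point over the analysis period, in hPa per decade. The Python sketch below shows that computation on an invented latitude-longitude-time array; it is a generic illustration, not the Santer et al. analysis code, and the pattern-correlation step at the end is only a crude stand-in for their formal fingerprint detection statistics.

import numpy as np

rng = np.random.default_rng(2)
n_years, n_lat, n_lon = 21, 36, 72   # 1979-1999 on a hypothetical 5-degree grid
years = np.arange(n_years)

# Invented tropopause-pressure anomalies (hPa): a smooth decline plus noise.
field = -0.3 * years[:, None, None] + rng.normal(0.0, 2.0, (n_years, n_lat, n_lon))

# Least-squares trend at every grid point, converted to hPa per decade.
flat = field.reshape(n_years, -1)
slopes = np.polyfit(years, flat, deg=1)[0]     # slope per year for each grid point
trend_map = slopes.reshape(n_lat, n_lon) * 10.0

# A crude "fingerprint match": spatial correlation between a model trend map and an observed one.
model_map = trend_map + rng.normal(0.0, 0.2, trend_map.shape)
match = np.corrcoef(trend_map.ravel(), model_map.ravel())[0, 1]
print(f"mean trend: {trend_map.mean():.2f} hPa/decade, pattern correlation: {match:.2f}")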

Michael Wehner of Berkeley Lab’s Computational Research Division, a co-author of the Science paper, points out that it would have been very difficult to estimate the relative contributions of different forcing mechanisms without the very large ensembles of climate model experiments conducted by DOE-funded researchers at the National Center for Atmospheric Research and made possible by NERSC and other DOE and NSF supercomputing facilities. Wehner manages the largest single collection of publicly available climate model output, the DOE Coupled Climate Model Data Archive (http://www.nersc.gov/projects/gcm_data/), which is stored in NERSC’s HPSS system.

FIGURE 1. Effect of different forcings on tropopause height, shown as anomalies in tropopause pressure (hPa) from 1900 to 2000; height increases correspond to pressure decreases. Panel A (combined forcings) compares the all-forcings runs (ALL, SUM) with the natural-forcings-only runs (SV) and with the NCEP and ERA observational reanalyses. Panel B (individual forcings) shows G (well-mixed greenhouse gases), A (sulfate aerosols), O (ozone), S (solar), and V (volcanoes); the eruptions of Santa Maria, Agung, El Chichón, and Pinatubo are marked. Image: B. D. Santer, M. F. Wehner, T. M. L. Wigley et al.

2 B. D. Santer, M. F. Wehner, T. M. L. Wigley, R. Sausen, G. A. Meehl, K. E. Taylor, C. Ammann, J. Arblaster, W. M. Washington, J. S. Boyle, and W. Brüggemann, “Contributions of anthropogenic and natural forcing to recent tropopause height changes,” Science 301, 479 (2003).



Looking at the tropopause data as part of the bigger climate picture, Santer said, “Tropopause height is an integrated indicator of human-induced climate change. It reflects global-scale changes in the temperature structure of the atmosphere. It is also consistent with results from other studies that have identified anthropogenic fingerprints in a range of different climate variables, such as ocean heat content, surface temperature, sea-level pressure patterns, and Northern Hemisphere sea-ice extent.”

“The challenge for the future,” Santer added, “is to take a closer look at the internal consistency of all these climate change variables. Our present work suggests that this story is a consistent one, but there are lots of details that need to be worked out.”

Santer received the Department of Energy’s 2002 Ernest Orlando Lawrence Award for “his seminal and continuing contributions to our understanding of the effects of human activities and natural phenomena on the Earth’s climate system.”

Research funding: BER, NOAA

FIGURE 2. Linear trends over time (A, B, C) and fingerprints (D, E) of tropopause pressure (inversely related to tropopause height). Panels: (A) NCEP linear trend, 1979–1999; (B) ERA linear trend, 1979–1993; (C) PCM ALL linear trend, 1979–1999; (D) PCM ALL EOF 1, mean included; (E) PCM ALL EOF 1, mean removed. Trends are given in hPa per decade; panels D and E show EOF loadings. Image: B. D. Santer, M. F. Wehner, T. M. L. Wigley et al.


APPLIED MATHEMATICS: THE BRIDGE BETWEEN PHYSICAL REALITY AND SIMULATIONS

“Mathematics is the bridge between physical reality and computer simulations of physical reality. The starting point for a computer simulation is a mathematical model, expressed in terms of a system of equations and based on scientific understanding of the problem being investigated, such as the knowledge of forces acting on a particle or a parcel of fluid.”1

One of the challenges facing many model developers is managing model complexity. Many physical systems have processes that operate on length and time scales that vary over many orders of magnitude. These systems require multiscale modeling, which represents physical interactions on multiple scales so that the results of interest can be recovered without the unaffordable expense of representing all behaviors at uniformly fine scales. This section describes the award-winning work of two research groups specializing in multiscale modeling.

1 “A Science-Based Case for Large-Scale Simulation,” report to the DOE Office of Science, July 30, 2003, p. 31.


ALGORITHMS IN THE SERVICE OF SCIENCE

John Bell and Phillip Colella, applied mathematicians at Lawrence Berkeley National Laboratory and long-time NERSC users, were named co-recipients of the 2003 SIAM/ACM Prize in Computational Science and Engineering, awarded by the Society for Industrial and Applied Mathematics (SIAM) and the Association for Computing Machinery (ACM).

According to SIAM President Mac Hyman, the prize “is awarded in the area of computational science in recognition of outstanding contributions to the development and use of mathematical and computational tools and methods for the solution of science and engineering problems.” This is the first year the prize has been awarded.

Algorithms developed by Bell, Colella, and their research groups at Berkeley Lab are used for studying complex problems arising in fluid mechanics and computational physics. The mathematical concepts, models, and methods they have developed have been applied in such diverse areas as shock physics, turbulence, astrophysics, flow in porous media, and combustion.

Bell is head of the Center for Computational Sciences and Engineering (CCSE), while Colella heads the Applied Numerical Algorithms Group (ANAG), both within Berkeley Lab’s Computational Research Division. Both groups are participants in the SciDAC Applied Partial Differential Equations Center (APDEC).

In 2003, CCSE has been modeling turbulent reacting flow in methane flames, and applying some of the information gleaned in those experiments toward supernova research and modeling. ANAG has been involved in developing more sophisticated geometric tools for its Chombo software toolkit, and applying it in areas such as beam dynamics in accelerator designs, gas jets in laser-driven plasma-wakefield accelerators, simulations of fusion reactors, and biological cell modeling.

In order to deal with problems in such diverse and complex sciences, CCSE and ANAG create customized software components that are able to translate scientific ideas and empirical insights into precise, 3D models that can be represented on a computer. This involves computational research that bridges the gap between the mathematical, physical, and computational sciences. By breaking up each problem into its fundamental mathematical parts, and designing optimal algorithms to handle each piece, they are able to model a wide range of scientific phenomena.



In describing the cell modeling work he recently started with Berkeley Lab biologists, Colella explained that “in order to get better predictive models, you have to keep trying things out. The process becomes a very experimental, empirical activity, and that’s why the software tools are so important here.”

The software tools are based on adaptive mesh refinement (AMR), a grid-based approach to solving partial differential equations that concentrates computational effort in the most efficient manner possible—applying a fine mesh, and maximum computing power, to the areas of most interest, and a coarse mesh to areas where the data are not as significant. AMR is particularly useful in problems with a wide range of time scales, where adaptive methods must be used to track features as they move through the mesh over time.
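The core mechanism is easy to sketch: estimate where the solution varies rapidly, tag those cells, and overlay a finer grid only there. The Python fragment below is a toy 1D illustration of that tag-and-refine step (it is not Chombo, whose block-structured AMR in 3D with time subcycling is far more involved); the error indicator, threshold, and refinement ratio are arbitrary choices made for the example.

import numpy as np

def tag_cells(u, threshold):
    """Tag cells whose undivided gradient exceeds a threshold: a simple refinement criterion."""
    grad = np.abs(np.diff(u, append=u[-1]))
    return grad > threshold

def refine(x, tags, ratio=4):
    """Overlay a finer grid (refinement ratio 4) on tagged cells; keep the coarse grid elsewhere."""
    dx = x[1] - x[0]
    fine_x = []
    for i, tagged in enumerate(tags):
        if tagged:
            fine_x.extend(x[i] + np.arange(ratio) * dx / ratio)
        else:
            fine_x.append(x[i])
    return np.array(fine_x)

# Coarse grid containing a sharp front (a stand-in for a flame or shock).
x = np.linspace(0.0, 1.0, 65)
u = np.tanh((x - 0.5) / 0.02)

tags = tag_cells(u, threshold=0.1)
fine_grid = refine(x, tags)
print(f"coarse cells: {x.size}, adaptive cells: {fine_grid.size}")

Only the handful of cells straddling the front get refined, which is why an adaptive hierarchy can resolve a thin reaction zone without paying for fine resolution everywhere.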

One of Bell and CCSE’s main projects in 2003 has been creating 3D simulations of turbulent methane combustion with swirling flow, which has produced the first detailed simulations in this area of combustion science (Figure 1). In the course of the project, two things have been of primary interest: how turbulence in the fuel stream affects the chemistry, and how emissions are formed and released. By studying the reacting flow of turbulent flames, CCSE has been able to develop computational models that dissect the detailed chemical reactions occurring in the flame.

The group has also developed predictive models that can be used to analyze flame properties and characteristics of turbulence. Even with a simple fuel like methane, there are some 20 chemical species and 84 reactions involved; the reaction zone, where the adaptive mesh is concentrated, occupies a region only 150 microns across, while the whole grid spans 12 centimeters. All of this creates a problem size that demands some hefty computing power.

In order to create the flame visualizations, computations were performed on 2,048 processors of NERSC’s IBM SP, “Seaborg.” CCSE has created its own data analysis framework with specific algorithms to reformulate equations and accelerate the computations.

FIGURE 1. Instantaneous flame surface from a turbulent premixed combustion simulation. CCSE carried out the calculation using a low Mach number flow model with an AMR solution algorithm. The coarsest, base grid is invisible but fills the outlined cube. This cutaway view displays as green and pink boxes the containers that hold two successive levels of refined computational cells. These containers are dynamically reconfigured to follow the evolving flame surface, and are distributed over the parallel processors of the supercomputer. Image: Marc Day.


The result is a computational cost that is reduced by a factor of 10,000 compared with a visualization using a standard uniform-grid approach.

While a fundamental goal of the project is a better understanding of turbulence in flames, it could also have practical ramifications, as in engineering new designs for more efficient turbines, furnaces, incinerators, and other equipment held to strict emission requirements. A side benefit of the project is that it is stretching the group’s software to the limit, putting new demands on the algorithmic and geometric elements of the toolset.

“The methane flame happens to be the one application that encompasses most of the pieces of the software. They are all folded into this application,” said Marc Day, a staff scientist at CCSE. “You have to have applied math and computing hardware, together with good computational science, in order to accomplish what we have. Without any one of those pieces, it doesn’t work.”

The methane flame project falls into a class of problems called “interface dynamics,” where something happens to cause an instability in an interface between two fluids of different chemical composition and density. This instability causes the interface to grow and change form. The modeling goal then becomes to track the evolution of the new interface. AMR is able to tackle these types of problems, where sophisticated and powerful tools are needed to scale and quantify the complicated energy dynamics. Equally important is a skilled, dedicated group of researchers to frame and calculate the problems.

To a great degree, CCSE and ANAG rely on their own computational scientists to apply the software to new projects and problem sets. But one of their goals has been to develop software components that other scientists can utilize more easily.

“We really want to support geometry and we need to support lots of different applications in different ways,” said Colella. “So how can we re-factor the software to do that? For one, we’re learning a lot about software engineering and managing the development process, using a ‘spiral design’ approach. When we add a new capability and develop applications in it, we get it out there for review. As we get feedback, we make changes to the software and repeat the process over again, so it becomes an ongoing spiral.”

Another of the group’s goals in software development is to create algorithmic tools for doing computations in complex geometries. So if a problem demands hyperbolic, elliptic, and parabolic solver components, each of those pieces can be integrated while preserving the integrity of the model as a whole.

Both groups’ software development is an ongoing process, and new capabilities are constantly being added to make the tools more flexible and applicable in broader arenas of science.

Computational biology is one of the major growth areas, said Colella. Current research is creating enormous amounts of data on how the development and behavior of cells is governed by a complex network of reacting chemical species that are transported within a cell and among cellular communities. ANAG is working with the Arkin Laboratory in Berkeley Lab’s Physical Biosciences Division and UC Berkeley’s Bioengineering and Chemistry departments to develop the software infrastructure to turn this data into quantitative and predictive models. These models will contribute to a deeper understanding of cellular regulation and development, more efficient discovery of cellular networks, and a better ability to engineer or control cells for medical, agricultural, or industrial purposes.

The California Department of Water Resources has also asked for ANAG’s help in creating high-quality simulation software for modeling water flows from the Sacramento Delta to the San Francisco Bay (Figure 2). These models are valuable for researchers interested in better understanding fresh water flows and other environmental and water quality issues.

One of the reasons these mathematical software tools are so unique and powerful is that they can be applied to a wide array of scientific research, in a remarkably flexible fashion.

After analyzing the results of the methane flame project, for instance, CCSE scientists started to apply some of this newfound knowledge toward supernova research. Scientists have long been interested in fluid dynamics and energy transfers in supernovae, and trying to determine how and why they burn out.


Using algorithms developed during its flame turbulence project, CCSE created 3D models of supernovae that shed new light on this problem, and have produced insights into the various stages of supernova explosions.

“You wouldn’t think that supernovae have anything in common with Bunsen burners, but algorithmically they were very similar,” said Day. “There was just a small number of pieces that we had to change out in order to make the chemical combustion turn into a nuclear flame. So we were able to make significant progress over a short period of time in the area of supernova physics because we already had this stuff laying around.”

All of this makes you wonder what else CCSE and ANAG have lying around, and where it might lead them next.

Research funding: ASCR-MICS, SciDAC, NASA, CDWR

FIGURE 2. One of the goals of the ANAG software effort is generating grids for complex geometries. The images here show a cut-cell grid generated to compute flow in the San Francisco Bay. At grid cells intersecting the bottom boundary, the numerical representation of the boundary is colored by the z-coordinate (i.e., depth) of the boundary in each cell. The left image is of the entire bay, the right one of the area around the Golden Gate. Image: Mike Barad (UC Davis) and Peter Schwartz (Berkeley Lab).


RESEARCH NEWS

This section of the Annual Report provides a representative sampling of recent discoveries made with the help of NERSC’s computing, storage, networking, and intellectual resources. News items are organized by science category, with funding organizations indicated by acronyms in blue type. Organizational acronyms are spelled out in Appendix G.

ACCELERATOR PHYSICS

OPTIMIZING LUMINOSITY IN HIGH-ENERGY COLLIDERS

The beam-beam interaction puts a strong limit on the performance of most high-energy colliders. The electromagnetic fields generated by one beam focus or defocus the beam moving in the opposite direction, reducing luminosity and sometimes causing beam blowup. Qiang et al. performed the first-ever million-particle, million-turn, “strong-strong” (i.e., self-consistent) simulation of colliding beams (in the Large Hadron Collider). Instead of a conventional particle-mesh code, they used a method in which the computational mesh covers only the larger of the two colliding beams, allowing them to study long-range parasitic collisions accurately and efficiently.

J. Qiang, M. Furman, and R. Ryne, “Strong-strong beam-beam simulation using a Green function approach,” Phys. Rev. ST AB 5, 104402 (2002). HEP, SciDAC

ELECTRON CLOUDS AND PARTICLE ACCELERATOR PERFORMANCE

Electron clouds have been shown to be associated with limitations in particle-accelerator performance in several of the world’s largest circular proton and positron machines. Rumolo et al. have studied the interaction between a low-density electron cloud in a circular particle accelerator and a circulating charged particle beam. The particle beam’s space charge attracts the cloud, enhancing the cloud density near the beam axis (Figure 1). This enhanced charge and the image charges associated with the cloud charge and the conducting wall of the accelerator may have important consequences for the dynamics of the beam propagation.

G. Rumolo, A. Z. Ghalam, T. Katsouleas, C. K. Huang, V. K. Decyk, C. Ren, W. B. Mori, F. Zimmermann, and F. Ruggiero, “Electron cloud effects on beam evolution in a circular accelerator,” Phys. Rev. ST AB 6, 081002 (2003). HEP, SciDAC, NSF

FIGURE 1. (a) Positively charged beam density and (b) the corresponding cloud density, plotted as z (c/ωp) against y (c/ωp), with densities in cm⁻³. The figure shows cloud compression on the axis of the beam.



DEVELOPING FUTURE PLASMA-BASED ACCELERATORS

High-gradient acceleration of both positrons and electrons is a prerequisite for the successful development of a plasma-based linear collider. Blue et al. provided simulation support for the E-162 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center. This work succeeded in demonstrating the high-gradient acceleration of positrons in a meter-scale plasma for the first time. Excellent agreement was found between the experimental results and those from 3D particle-in-cell simulations for both energy gain and loss.

B. E. Blue, C. E. Clayton, C. L. O'Connell, F.-J. Decker, M. J. Hogan, C. Huang, R. Iverson, C. Joshi, T. C. Katsouleas, W. Lu, K. A. Marsh, W. B. Mori, P. Muggli, R. Siemann, and D. Walz, "Plasma-wakefield acceleration of an intense positron beam," Phys. Rev. Lett. 90, 214801 (2003). HEP, SciDAC, NSF

ELECTRON CLOUD DEVELOPMENT IN PROTON STORAGE RINGS

Pivi and Furman have simulated electron cloud development in the Proton Storage Ring (PSR) at Los Alamos National Laboratory and the Spallation Neutron Source (SNS) under construction at Oak Ridge National Laboratory. Their simulation provides a detailed description of the secondary electron emission process, including a refined model for the emitted energy spectrum and for the three main components of the secondary yield, namely, the true secondary, rediffused, and backscattered components.

M. T. F. Pivi and M. A. Furman, "Electron cloud development in the Proton Storage Ring and in the Spallation Neutron Source," Phys. Rev. ST AB 6, 034201 (2003). HEP, SNS

APPLIED MATHEMATICS

HIGH-PERFORMANCE SPARSE-MATRIX ALGORITHMS

Sparse-matrix problems are at the heart of many scientific and engineering applications of importance to DOE and the nation, including fusion energy, accelerator physics, structural dynamics, computational chemistry, and groundwater simulations. Teranishi et al. studied the effect of data mapping on the performance of triangular solutions. They showed that a careful choice of data mapping, coupled with the use of selective inversion, can significantly reduce the amount of communication required in the solution of a sparse triangular linear system.

K. Teranishi, P. Raghavan, and E. G. Ng, "Reducing communication costs for parallel sparse triangular solution," in Proceedings of the SC2002 Conference (2002). ASCR-MICS, SciDAC, NSF
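As background on the kernel being optimized, the following minimal serial sketch (in Python, using SciPy's CSR storage; it illustrates only forward substitution, not the parallel data mapping or selective inversion studied in the paper) solves a sparse lower triangular system L x = b:

    import numpy as np
    from scipy.sparse import csr_matrix

    def lower_triangular_solve(L, b):
        """Forward substitution for a sparse lower triangular matrix in CSR form."""
        n = L.shape[0]
        x = np.zeros(n)
        for i in range(n):
            s = b[i]
            diag = 0.0
            # Walk the nonzeros of row i; columns j < i are already solved.
            for k in range(L.indptr[i], L.indptr[i + 1]):
                j, v = L.indices[k], L.data[k]
                if j < i:
                    s -= v * x[j]
                elif j == i:
                    diag = v
            x[i] = s / diag
        return x

    L = csr_matrix(np.array([[2.0, 0.0, 0.0],
                             [1.0, 3.0, 0.0],
                             [0.0, 4.0, 5.0]]))
    b = np.array([2.0, 5.0, 13.0])
    print(lower_triangular_solve(L, b))  # [1.0, 1.333..., 1.533...]

In a distributed-memory setting, an x[j] needed by a later row may live on another processor; the data mapping determines how much of that information must be communicated, which is the cost that the selective-inversion approach reduces.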

OPTIMIZING MEMORY HIERARCHY

The BeBOP toolkit automatically tunes scientific codes by applying statistical techniques to produce highly tuned numerical subroutines. Vuduc et al. used the BeBOP tool SPARSITY to investigate memory hierarchy issues for sparse matrix kernels such as Ax (matrix-vector products) and A^T Ax (normal equation products). They evaluated optimizations on a benchmark set of 44 matrices and four platforms, showing speedups of up to a factor of 4.2.

R. Vuduc, A. Gyulassy, J. Demmel, and K. Yelick, "Memory hierarchy optimizations and performance bounds for sparse A^T Ax," U.C. Berkeley Technical Report UCB/CS-03-1232, February 2003. ASCR-MICS, SciDAC, NSF, Intel
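To give a concrete picture of the A^T Ax kernel (a simplified sketch; the actual SPARSITY/BeBOP optimizations also use register blocking and related transformations not shown here), the normal-equation product can be formed one sparse row at a time, so each row of A is read from memory once and reused for both the Ax and the A^T stages:

    import numpy as np
    from scipy.sparse import random as sparse_random

    def ata_times_x_fused(A, x):
        """Compute y = A.T @ (A @ x) row by row, reusing each sparse row of A."""
        y = np.zeros(A.shape[1])
        for i in range(A.shape[0]):
            start, end = A.indptr[i], A.indptr[i + 1]
            cols, vals = A.indices[start:end], A.data[start:end]
            t = np.dot(vals, x[cols])   # t = (A x)_i
            y[cols] += t * vals         # accumulate the A^T contribution of row i
        return y

    A = sparse_random(200, 100, density=0.05, format="csr", random_state=0)
    x = np.ones(100)
    print(np.allclose(ata_times_x_fused(A, x), A.T @ (A @ x)))  # True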

MODELING THE COMPONENT STRUCTURES OF FLAMES

Combustion in turbulent flows may take the form of a thin flame wrapped around vortical structures. Marzouk et al. used the flame-embedding approach to decouple computations of the "outer" non-reacting flow and the combustion zone by breaking the flame surface into a number of elemental flames, each incorporating the local impact of unsteady flow-flame interaction (Figure 2). Results show that using the flame leading-edge strain rate gives reasonable agreement in cases of low strain rate and weak strain-rate gradient within the flame structure. This agreement deteriorates substantially when both are high. Agreement between the 1D model and the 2D simulation improves greatly when the actual strain rate at the reaction zone of the 1D flame is made to match that of the 2D flame.

Y. M. Marzouk, A. F. Ghoniem, and H. N. Najm, "Toward a flame embedding model for turbulent combustion simulation," AIAA Journal 41, 641 (2003). ASCR-MICS

FIGURE 2. Premixed flame interaction with a counter-rotating vortex pair, shown at 0.0, 2.0, 4.0, 6.0, and 8.0 ms. The vertical right-hand edge of each frame is the centerline of the vortex pair, only half of which is shown. Colored contours indicate gas temperature, with darker shading corresponding to burned combustion products. Solid/dashed contours delineate levels of positive/negative vorticity.

IMPROVING THE ACCURACY OF FRONT TRACKING IN FLUID MIXING

Acceleration-driven fluid-mixing instabilities play important roles in inertially confined nuclear fusion, as well as in stockpile stewardship. Turbulent mixing is a difficult and centrally important issue for fluid dynamics that affects such questions as the rate of heat transfer by the Gulf Stream, resistance of pipes to fluid flow, combustion rates in automotive engines, and the late-time evolution of a supernova. To provide a better understanding of these instabilities, Glimm et al. have developed a new, fully conservative front tracking algorithm that results in a one-order-of-accuracy improvement over most finite difference simulations and shows good agreement with experimental results.

J. Glimm, X.-L. Li, Y.-J. Liu, Z. L. Xu, and N. Zhao, "Conservative front tracking with improved accuracy," SIAM J. Sci. Comp. (in press). ASCR-MICS, FES, ARO, NSF, LANL, NSFC

PREDICTING THE ENERGIES OF QUANTUM WELL STATES

Understanding the electronic structures of quantum-well states (QWSs) in metallic thin films can lay the groundwork for designing new materials with predictable properties. An et al. compared the experimental electronic properties of QWSs in a copper film grown on a cobalt substrate with ab initio simulations. The calculations confirm that the quantization condition inherent in the phase-accumulation model (PAM) can predict the energies of QWSs as a function of film thickness and provide new insight into their nature. The results also show that band structures and reflection phases obtained from either experiment or ab initio theory can quantitatively predict QWS energies within the PAM model.

J. M. An, D. Raczkowski, Y. Z. Wu, C. Y. Won, L. W. Wang, A. Canning, M. A. Van Hove, E. Rotenberg, and Z. Q. Qiu, "The quantization condition of quantum-well states in Cu/Co(001)," Phys. Rev. B 68, 045419 (2003). ASCR-MICS, NSF
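For reference, the quantization condition of the phase-accumulation model takes the standard form (quoted here generically; see the paper for the exact conventions used)

    2 k_\perp(E)\, N d + \phi_B(E) + \phi_C(E) = 2\pi n,

where k_\perp is the electron wave vector perpendicular to the film, N d is the film thickness (N atomic layers of spacing d), \phi_B and \phi_C are the reflection phase shifts at the vacuum barrier and at the substrate, and n is an integer. The quantum-well state energies are the values of E that satisfy this condition for a given thickness.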

REDUCED-DENSITY MATRICES FOR ELECTRONIC STRUCTURE CALCULATIONS

Accurate first-principles electronic structure calculations are of special importance for spectroscopy and for chemical-reaction dynamics. Reduced-density matrices (RDMs), which have linear scaling for large N, offer a first-principles alternative to density-functional and semi-empirical calculations for large systems. Zhao et al. reformulated the existing RDM method from a primal to a dual formulation, substantially reducing the computational cost. They computed the ground state energy and dipole moment of 47 small molecules and molecular ions, both open and closed shells, achieving results more accurate than those from other approximations.

Z. Zhao, B. Braams, M. Fukuda, M. Overton, and J. Percus, "The reduced density matrix method for electronic structure calculations and the role of three-index representability," J. Chem. Phys. (submitted, 2003). ASCR-MICS, NSF

ASTROPHYSICS

CALCULATING ELECTRON CAPTURE RATES DURING CORE COLLAPSE

Supernova simulations to date have assumed that during core collapse, electron captures occur dominantly on free protons, while captures on heavy nuclei are Pauli blocked and are ignored. Langanke et al. have calculated rates for electron capture on nuclei with mass numbers A = 65–112 for the temperatures and densities appropriate for core collapse. They found that these rates are large enough that, in contrast to previous assumptions, electron capture on nuclei dominates over capture on free protons. This result leads to significant changes in core collapse simulations.

K. Langanke, G. Martinez-Pinedo, J. M. Sampaio, D. J. Dean, W. R. Hix, O. E. B. Messer, A. Mezzacappa, M. Liebendorfer, H.-Th. Janka, and M. Rampp, "Electron capture rates on nuclei and implications for stellar core collapse," Phys. Rev. Lett. 90, 241102 (2003). NP, SciDAC, DRA, JIHIR, MCYT, NASA, NSF, PECASE

SIMULATING BINARY STAR FORMATION

Developing a comprehensive theory of star formation remains one of the most elusive goals of theoretical astrophysics, partly because gravitational collapse depends on initial conditions within giant molecular cloud cores that only recently have been observed with sufficient accuracy to permit a realistic attack on the problem. Klein et al. have simulated the gravitational collapse and fragmentation of marginally stable turbulent molecular cloud cores and followed the low-mass fragments as they interact with the radiation of the protostars forming at their centers. Their results show excellent agreement with observations of real cores, including their aspect ratios, rotation rates, linewidth-size relations, and formation of binary systems (Figure 3).

R. I. Klein, R. T. Fisher, M. R. Krumholz, and C. F. McKee, "Recent advances in the collapse and fragmentation of turbulent molecular cloud cores," RevMexAA 15, 92 (2003). NP, NASA

LOOKING DOWN THE HOLE IN A SUPERNOVA

Type Ia supernovae are born from a white dwarf accreting material from a nearby companion star. Soon after the white dwarf explodes, the ejected material runs over the companion star, and in the interaction a substantial asymmetry can be imprinted on the supernova ejecta. Kasen et al. performed the first-ever calculation of a 3D atmosphere model of a Type Ia supernova colliding with its companion star. They explored the observable consequences of ejecta-hole asymmetry, using the simulation results to explain the diversity of observed supernovae.

D. Kasen, P. Nugent, R. Thomas, and L. Wang, "Filling a hole in our understanding of Type Ia supernovae," Astrophysical Journal (in press). NP, NASA


PROBING THE CHEMICAL EVOLUTION OF THE UNIVERSE

Type II supernovae have a very large spread in their intrinsic brightness (greater than a factor of 500) because of their diverse progenitors. Despite this diversity, Baron et al. have shown that their atmospheres can be well understood with detailed synthetic spectral modeling that can determine the stellar compositions, degree of mixing, and kinetic energy of the explosion. These results make Type II supernovae attractive probes of chemical evolution in the Universe and potentially useful independent checks on the cosmological results derived from Type Ia supernovae.

E. Baron, P. Nugent, D. Branch, P. Hauschildt, M. Turatto, and E. Cappellaro, "Determination of primordial metallicity and mixing in the Type IIP supernova 1993W," Astrophysical Journal 586, 1199 (2003). NP, IBM SUR, NASA, NSF

CHECKING THE FOREGROUND OF THE COSMIC MICROWAVE BACKGROUND

Observations of the cosmic microwave background (CMB) can be contaminated by diffuse foreground emission from sources such as galactic dust and synchrotron radiation, but an international team of researchers has developed a technique for quantifying this effect. They applied this technique to CMB data from the MAXIMA-1 experiment and found that the effect on CMB power spectrum observations is negligible.

A. H. Jaffe, A. Balbi, J. R. Bond, J. Borrill, P. G. Ferreira, D. Finkbeiner, S. Hanany, A. T. Lee, B. Rabii, P. L. Richards, G. F. Smoot, R. Stompor, C. D. Winant, and J. H. P. Wu, "Determining foreground contamination in CMB observations: Diffuse galactic emission in the MAXIMA-I field," Astrophysical Journal (in press); astro-ph/0301077. HEP, NASA, NSF

FIGURE 3. Simulation of the birth of a binary star system. Each member of the binary appears to be surrounded by a disk, with one component of the binary showing a two-arm spiral. The binary appears to form inside the structure of a filament in the cloud core.

SIMULATING BLACK HOLE MERGERS AND GRAVITATIONAL WAVES

Simulations of the gravitational waves resulting from the collision of two black holes will provide patterns that new gravitational wave detectors can look for as they attempt to verify the predictions of General Relativity. A research team led by Edward Seidel of the Albert Einstein Institute/Max Planck Institute for Gravitation Physics in Potsdam, Germany, has determined effective computational and theoretical (gauge) parameters needed to carry out binary black hole evolutions for time scales never before achieved. Another research group, led by Carlos Lousto of the University of Texas at Brownsville, has achieved a full numerical computation of the gravitational radiation generated by the evolution of a series of binary black hole configurations with aligned and counter-aligned spins.

M. Alcubierre, B. Bruegmann, P. Diener, M. Koppitz, D. Pollney, E. Seidel, and R. Takahashi, "Gauge conditions for long-term numerical black hole evolutions without excision," Phys. Rev. D 67, 084023 (2003). NP, AEI, EU, LSU

J. Baker, M. Campanelli, C. O. Lousto, and R. Takahashi, "The final plunge of spinning binary black holes," Phys. Rev. D (submitted); astro-ph/0305287 (2003). NP, NASA, NRC, NSF

CHEMICAL SCIENCES

CALCULATING THE INSTABILITY OF NITROGEN SALTS

Can the combination of a stable polynitrogen cation, such as N5+, with a stable polynitrogen anion, such as N3–, lead to a stable ionic nitrogen allotrope, such as N5+N3–? Dixon et al. used high-level ab initio molecular orbital theory to conclusively determine whether the lattice energies of N5+N3– and N5+N5– are sufficient to stabilize them as solids. Their calculations showed that neither salt can be stabilized and that both should decompose spontaneously into N3 radicals and N2. This conclusion was experimentally confirmed for the N5+N3– salt.

D. A. Dixon, D. Feller, K. O. Christe, W. W. Wilson, A. Vij, V. Vij, H. D. Brooke Jenkins, R. M. Olson, and M. S. Gordon, "Enthalpies of formation of gas phase N3, N3–, N5+, and N5– from ab initio molecular orbital theory, stability predictions for N5+N3– and N5+N5–, and experimental evidence for the instability of N5+N3–," J. Am. Chem. Soc. (in press). BES, BER, SciDAC, DARPA, AFOSR, NSF

A MORE ACCURATE CALCULATION OF SILICON DICARBIDE

Unlike many typical chemical systems, in which errors due to incomplete treatment of the one-electron basis and electron correlation are balanced and largely cancelled at modest levels of theory, silicon dicarbide has a barrier to linearity that requires both prodigious basis sets and high-order correlation treatments for accurate predictions. Kenny et al. calculated molecular partial-wave expansions for the T-shaped and linear forms of SiC2 using the MP2-R12/A method. Their results are the first of sufficient accuracy to call for revisiting the spectroscopically determined barrier of 5.4 ± 0.6 kcal mol–1, moving it upward to 6.3 kcal mol–1.

J. P. Kenny, W. D. Allen, and H. F. Schaefer, "Complete basis set limit studies of conventional and R12 correlation methods: The silicon dicarbide (SiC2) barrier to linearity," J. Chem. Phys. 118, 7353 (2003). BES, SciDAC

HOW ELECTRON IMPACT EXCITES A MOLECULE

A major goal of studying electron-molecule collisions is to explore the mechanisms that control the flow of energy from electronic to nuclear degrees of freedom. McCurdy et al. have completed the first calculation of vibrational excitation of a molecule (CO2) via coupled resonant states (Figure 4). The multidimensional nuclear dynamics on these coupled surfaces was necessary to explain quantitatively, for the first time, the experimental observations on this system. These methods have been applied in calculations of electron-impact dissociation of water, which is one of the principal mechanisms suspected to initiate radiation damage of biological tissue in environments with ionizing radiation.

C. W. McCurdy, W. A. Isaacs, H.-D. Meyer, and T. N. Rescigno, "Resonant vibrational excitation of CO2 by electron impact: Nuclear dynamics on the coupled components of the 2Πu resonance," Phys. Rev. A 67, 042708 (2003). BES

FIGURE 4. Complex potential surfaces for both components (2A1 and 2B1) of the resonance. Top row, real part of the resonance energies; bottom row, widths. Energies are in hartrees, and the bend angle, defined as π minus the O-C-O bond angle, is in degrees.

DESIGNING NANO-SCALE SHOCK ABSORBERS

Carbon nanotubes are the fundamental building blocks in many nanosystems. Rivera et al. performed molecular dynamics computational "experiments" on double-walled carbon nanotubes of varying lengths and diameters, in which they pulled the inner nanotube out of the outer nanotube to several different distances and allowed the nanotube to retract and subsequently oscillate. They found that the oscillation is damped (hence acting like a nano shock absorber) and can be predicted by a very simple mechanical model that takes into account the sliding friction between the surfaces. The oscillation frequency was around 1 GHz, consistent with prior experimental and theoretical results.

J. L. Rivera, C. McCabe, and P. T. Cummings, "Oscillatory behavior of double nanotubes under extension: A simple nanoscale damped spring," Nano Letters 3, 1001 (2003). BES, NSF
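A toy version of such a damped mechanical model is sketched below. The parameters are purely illustrative and the constant sliding-friction force is a generic assumption, not the authors' fitted model; the point is only that a stiff restoring force acting on a tiny mass gives GHz-scale oscillation, and friction makes the amplitude decay.

    import numpy as np

    # Illustrative parameters only (not fitted to any real nanotube).
    m = 1.0e-21        # effective mass of the inner tube, kg
    k = 0.04           # restoring-force stiffness, N/m  -> sqrt(k/m)/(2*pi) ~ 1 GHz
    f_fric = 1.0e-12   # constant sliding-friction force, N

    x, v, dt = 2.0e-9, 0.0, 1.0e-13   # 2 nm initial extraction, at rest
    for step in range(20001):
        a = (-k * x - f_fric * np.sign(v)) / m   # friction opposes the motion
        v += a * dt
        x += v * dt
        if step % 5000 == 0:
            print(f"t = {step * dt:.1e} s   x = {x:.2e} m")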

SOLVING THE SCHRÖDINGER EQUATION FOR LARGE SYSTEMS

Exact solutions of the Schrödinger equation within a given one-particle basis set lie at the heart of quantum chemistry, providing invaluable benchmarks by which the accuracy of more approximate methods may be judged. Chan and Head-Gordon used a newly developed density matrix renormalization group algorithm to compute exact solutions of the Schrödinger equation for water at two geometries in a basis of 41 orbitals, demonstrating that it is now possible to obtain numerically exact solutions for systems considerably larger than can be contemplated using traditional full configuration interaction algorithms.

G. K. L. Chan and M. Head-Gordon, "Exact solution (within a triple zeta, double polarization basis set) of the electronic Schrödinger equation for water," J. Chem. Phys. 118, 8551 (2003). BES

QUANTUM MONTE CARLO STUDY OF ELECTRONIC EXCITED STATES

Accurate computational predictions of molecular electronic excited states have proved more difficult to obtain than ground states. Ethylene is the prototypical π-electron system whose photochemical behavior is important in chemistry, biology, and technology. El Akramine et al. carried out a theoretical study of the transition between the ground state and the lowest triplet state of ethylene using the diffusion Monte Carlo method. The ground state atomization energy and heat of formation were consistent with experimental results, and the triplet-state atomization energy and heat of formation were also predicted.

O. El Akramine, A. C. Kollias, and W. A. Lester, Jr., "Quantum Monte Carlo study of the singlet-triplet transition in ethylene," J. Chem. Phys. 119, 1483 (2003). BES

OXYGEN DISSOCIATION ON COPPER STEPS

Copper and copper alloy catalysts are at the heart of important industrial processes, and copper also plays an important role in the microelectronics industry. Dissociation of dioxygen (O2) is the first step of oxygen chemistry on the surface of metals such as copper. Xu and Mavrikakis studied the adsorption and dissociation of dioxygen on copper steps using periodic self-consistent density functional theory calculations, and found that the adsorption of both atomic and molecular oxygen is enhanced on the stepped surface.

Y. Xu and M. Mavrikakis, "The adsorption and dissociation of O2 molecular precursors on Cu: The effect of steps," Surface Science 538, 219 (2003). BES, DOD, NSF, 3M

A NEW METHOD FOR WAVE-PACKET PROPAGATION

Wang and Thoss have presented a method for wave-packet propagation that is based rigorously on a variational principle and is also sufficiently efficient to be applicable to complex dynamical problems with up to a few hundred degrees of freedom. This method can treat substantially more physical degrees of freedom than the original method, and thus significantly enhances the ability to carry out quantum dynamical simulations for complex molecular systems. The efficiency of the new formulation was demonstrated by converged quantum dynamical simulations for systems with a few hundred to a thousand degrees of freedom.

H. Wang and M. Thoss, "Multilayer formulation of the multiconfiguration time-dependent Hartree theory," J. Chem. Phys. 119, 1289 (2003). BES, NSF, DAAD

ADDING HYDROGEN TO LEAN PREMIXED NATURAL GAS

Hawkes and Chen performed direct numerical simulations of lean premixed methane-air and hydrogen-methane-air flames to understand how the addition of hydrogen can affect flame stability and pollutant formation in gas turbines. They found a greater rate of heat release in the enriched flame, which is explained by a difference in flame areas and by the influence of turbulent stretch on the local burning rates. They also found that there may be a pollutant tradeoff between NO and CO emissions when using hydrogen-enriched fuels.

E. R. Hawkes and J. H. Chen, "Turbulent stretch effects on hydrogen enriched lean premixed methane-air flames," 3rd Joint Meeting of the U.S. Sections of the Combustion Institute, Chicago, IL (2003). BES

CLIMATE CHANGE RESEARCH

A MILLENNIUM OF CLIMATE CHANGE

Researchers at the National Center for Atmospheric Research continued their benchmark 1,000-year Community Climate System Model (CCSM) control run and extended the Parallel Climate Model (PCM) control run to nearly 1,500 years (Figure 5). Science experiments with the PCM included historical 20th century climate simulations using various combinations of specified greenhouse gases, atmospheric sulfate concentrations, solar and volcanic forcing, and tropospheric and stratospheric ozone. This set of experiments represents the most extensive set of 20th century forcing climate simulations ever attempted.

A. Dai, A. Hu, G. A. Meehl, W. M. Washington, and W. G. Strand, "North Atlantic Ocean circulation changes in a millennial control run and projected future climates," Journal of Climate (in press). BER, SciDAC, NSF

MULTIRESOLUTION CLIMATE MODELING

Baer and colleagues have developed the Spectral Element Atmospheric Model (SEAM) to incorporate local mesh refinement within a global model. This makes it feasible to model the interaction between coarse-scale and fine-scale phenomena. The method promises improved climate and turbulence simulations by allowing increased resolution in a few dynamically significant areas, such as fronts or regions of dense topography. Similarly, regional climate simulations may be improved by allowing regional resolution to be incorporated within a global model in a two-way interactive fashion.

F. Baer and J. J. Tribbia, "Sensitivity of atmospheric prediction models to amplitude and phase," Tellus (submitted, 2003). BER, SciDAC

FIGURE 5. Sea surface height (color, in cm relative to a motionless ocean) and top 100 m averaged ocean currents (arrows, scaled to 20 cm/s) in the North Atlantic Ocean simulated by the PCM (a) at the end of the ocean-sea ice alone (spin-up) run, right before the coupling, and during the control run at (b) years 80–84, (c) years 170–199, and (d) years 1070–1099.

SORTING OUT THE SOURCES OF TROPOSPHERIC OZONE

Ozone in the troposphere is both formed in situ and transported from the stratosphere, which has much higher ozone levels. The importance of each source may vary both regionally and seasonally. Atherton et al. analyzed the results of a simulation for Phoenix from April 1 through October 31, 2001. Their results showed that there are periods during the "ozone season" in which higher levels of surface ozone may be due to transport of stratospheric ozone from above rather than to ozone created by ground-level (and energy-related) processes.

C. Atherton, D. Bergmann, P. Cameron-Smith, P. Connell, D. Rotman, and J. Tannahill, "Large scale atmospheric chemistry simulations for 2001: An analysis of ozone and other species in central Arizona," American Meteorological Society 83rd Annual Meeting, Long Beach, California, February 2003. BER

DO SOOT AND SMOKE CONTRIBUTE TO CLIMATE CHANGE?

Soot and smoke contain black carbon, which absorbs solar radiation. These aerosols have been thought to enhance climate warming by absorbing radiation that might otherwise be scattered and by reducing overall cloudiness. Penner et al. evaluated forcing and changes in atmospheric temperature and clouds associated with both direct and indirect effects of soot and smoke using an atmospheric-climate model coupled to their chemical-transport model. They found that climate forcing by soot and smoke was not as large as previously claimed, because the adjustment of the atmospheric temperature and clouds to aerosols tends to decrease the forcing, particularly if the aerosols reside at higher levels within the atmosphere.

J. E. Penner, S. Y. Zhang, and C. C. Chuang, "Soot and smoke aerosol may not warm climate," J. Geophys. Res., in press (2003). BER

CARBON-CYCLE FEEDBACK IN CLIMATE CHANGE

Taking into account the feedback effects of climate change on the absorption of carbon by the ocean and terrestrial biosphere will allow better predictions of future climate; however, these effects are ignored in present U.S. climate models. Thompson et al. have obtained the first results from a fully coupled, non-flux-corrected, multi-century climate-carbon simulation. They have performed three complete simulations: a control case with no emissions, a standard case, and a case where the uptake of CO2 by vegetation is limited. The limited case shows that substantially more CO2 stays in the atmosphere when carbon uptake by vegetation is reduced, thus giving a range of uncertainty in current carbon-climate modeling.

S. Thompson, B. Govindasamy, J. Milovich, A. A. Mirin, and M. Wickett, "A comprehensive GCM-based climate and carbon cycle model: Transient simulations to year 2100 using prescribed greenhouse gas emissions," Amer. Geophys. Union Mtg., San Francisco, CA, December 2002; Lawrence Livermore National Laboratory technical abstract UCRL-JC-149761. BER, LLNL

COMPUTER SCIENCE

REDUCING THE PERFORMANCE GAP

The SciDAC Performance Evaluation Research Center (PERC) is working toward reducing the gap between peak and sustained performance on high-end computers by developing hardware and software tools to monitor, model, and control performance. In the past year, PERC has analyzed and optimized application codes in astrophysics, lattice gauge theory, climate modeling, applied mathematics, and other disciplines, significantly improving the performance and scalability of the codes. For example, improved compiler annotation technology (Active Harmony) achieved a 15% reduction in run time on the POP ocean modeling code.

D. H. Bailey, B. de Supinski, J. Dongarra, et al., "Performance technologies for petascale systems," http://perc.nersc.gov/docs/PERC-white-aes15.doc. ASCR-MICS, SciDAC


DEVELOPING AN ALTERNATIVE PROGRAMMING MODEL

While MPI is currently the de facto standard for programming supercomputers, several federal agencies are investing in alternative models to simplify programming. Yelick and collaborators are investigating UPC, a "partitioned global address space" language. They developed and released an open-source version of the full UPC compiler, which translates UPC to C with calls to a newly developed UPC runtime layer. The open-source compiler has low overhead for basic operations on shared data; it is competitive with, and sometimes faster than, the commercial compiler.

W. Chen, D. Bonachea, J. Duell, P. Husbands, C. Iancu, and K. Yelick, "A performance analysis of the Berkeley UPC compiler," 17th Annual International Conference on Supercomputing (ICS), 2003. ASCR-MICS, DOD, NSF

GEODESIC ACOUSTIC MODES IN PLASMA TURBULENCE

Researchers in the SciDAC Plasma Microturbulence Project, using beam emission spectroscopy in DIII-D plasmas, have observed a coherent oscillation in the poloidal flow of density fluctuations that shows remarkable similarity to the predicted characteristics of geodesic acoustic modes. Two methods of analyzing the data, one based on wavelets and one based on time-resolved cross-correlation, reveal similar features in the resulting flow field. These experimental observations and the corresponding simulations provide compelling evidence that geodesic acoustic modes are active in the turbulence in the outer regions of tokamak plasmas and play an integral role in affecting and mediating the fully saturated state of turbulence and the resulting transport.

G. R. McKee, R. J. Fonck, M. Jakubowski, K. H. Burrell, K. Hallatschek, R. A. Moyer, D. L. Rudakov, W. Nevins, G. D. Porter, P. Schoch, and X. Xu, "Experimental characterization of coherent, radially-sheared zonal flows in the DIII-D tokamak," Phys. Plasmas 10, 1712 (2003). FES, SciDAC

INTEGRATING THE GLOBUS TOOLKIT WITH FIREWALLS

The Globus Toolkit is a collection of clients and services that enable Grid computing. Each component of the toolkit has its own network traffic characteristics that must be considered when deploying the toolkit at a site with a firewall. Members of the Globus Project and the DOE Science Grid engineering team have developed requirements and guidance for firewall administrators, describing the network traffic generated by the Globus Toolkit services, how the Toolkit's use of the ephemeral port range can be controlled, and firewall requirements for client and server sites. NERSC provided a firewall testbed that was used for verifying much of the information in this document.

V. Welch et al., "Globus Toolkit Firewall Requirements Version 5," July 22, 2003, http://www.globus.org/security/firewalls/. SciDAC

PRECONDITIONING INEXACT NEWTON ALGORITHMS

Inexact Newton algorithms are commonly used for solving large sparse nonlinear systems of equations F(u*) = 0. Even with global strategies such as line search or trust region, the methods often stagnate at local minima of ||F||, especially for problems with unbalanced nonlinearities, because the methods have no built-in machinery to deal with them. Cai and Keyes studied a nonlinear additive Schwarz-based parallel nonlinear preconditioner; they showed numerically that the new method converges well even for some difficult problems, such as high Reynolds number flows, where a traditional inexact Newton method fails.

X.-C. Cai and D. E. Keyes, "Nonlinearly preconditioned inexact Newton algorithms," SIAM J. Sci. Comp. 24, 183 (2002). ASCR-MICS, SciDAC, NSF, NASA
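For orientation, the sketch below shows the textbook inexact Newton iteration with a backtracking line search that the paper builds on (the nonlinear additive Schwarz preconditioning itself is not shown; this is a generic illustration, not the authors' code):

    import numpy as np

    def inexact_newton(F, J, u0, tol=1e-10, max_iter=50):
        """Newton iteration with a backtracking line search on ||F||.
        In practice the linear solve is done only approximately (hence "inexact"),
        e.g., by a few Krylov iterations; here it is solved directly for brevity."""
        u = u0.copy()
        for _ in range(max_iter):
            f = F(u)
            if np.linalg.norm(f) < tol:
                break
            s = np.linalg.solve(J(u), -f)          # Newton step
            lam = 1.0
            while np.linalg.norm(F(u + lam * s)) > (1 - 1e-4 * lam) * np.linalg.norm(f):
                lam *= 0.5                         # backtrack until ||F|| decreases enough
                if lam < 1e-8:
                    break
            u = u + lam * s
        return u

    # Tiny example: solve x^2 + y^2 = 4, x*y = 1.
    F = lambda u: np.array([u[0]**2 + u[1]**2 - 4.0, u[0] * u[1] - 1.0])
    J = lambda u: np.array([[2 * u[0], 2 * u[1]], [u[1], u[0]]])
    print(inexact_newton(F, J, np.array([2.0, 0.5])))

Stagnation occurs when no step length gives sufficient decrease; roughly speaking, the nonlinear preconditioning of Cai and Keyes replaces F with an equivalent, better-balanced nonlinear system (built from subdomain solves) before an iteration of this kind is applied.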

FUSION ENERGY SCIENCES


COMPUTATIONAL ATOMIC PHYSICS FOR FUSION ENERGY

Colgan and Pindzola have applied time-dependent close-coupling theory to the study of electron-impact ionization of helium from the excited (1s2s) configuration. They made the calculations in an effort to resolve the discrepancy between theoretical calculations and existing experimental measurements for electron scattering from the metastable states of helium. The relative difference between the nonperturbative close-coupling and the perturbative distorted-wave results grew larger as the principal quantum number was increased. This difference has important implications for the collisional-radiative modeling of many astrophysical and laboratory plasmas that use perturbation theory.

J. P. Colgan and M. S. Pindzola, "Time-dependent close-coupling studies of the electron-impact ionization of excited-state helium," Phys. Rev. A 66, 062707 (2002). FES, SciDAC

BALLOONING INSTABILITY IN THE EARTH'S MAGNETOTAIL

The Earth's magnetotail, the main source of the polar aurora, is the part of the magnetosphere that is pushed away from the sun by the solar wind. The ballooning instability of the magnetotail interests researchers because of its possible relevance to the dynamics of magnetic substorms. Zhu et al. studied this ballooning instability within the framework of Hall magnetohydrodynamics, in which Hall effects enter the stability analysis through changes introduced in the plasma compressibility.

P. Zhu, A. Bhattacharjee, and Z. W. Ma, "Hall magnetohydrodynamic ballooning instability in the magnetotail," Phys. Plasmas 10, 249 (2003). FES, SciDAC, NSF

SIZE SCALING OF TURBULENT TRANSPORT

Lin et al. have developed a nonlinear model for turbulence radial spreading based on the modified porous-medium equation. The model offers a phenomenological understanding of the transition from Bohm to gyro-Bohm scaling when the parameter (minor radius/gyroradius) is larger than 500. They also estimated the role of the trapped electron nonlinearity in zonal flow generation in trapped-electron-mode turbulence in the context of parametric instability theory.

Z. Lin, T. S. Hahm, S. Ethier, W. W. Lee, J. Lewandowski, G. Rewoldt, W. M. Tang, W. X. Wang, L. Chen, and P. H. Diamond, "Size scaling of turbulent transport in tokamak plasmas," Proceedings of the 19th IAEA Fusion Energy Conference, Lyon, France (2002). FES, SciDAC

ELECTRON-IMPACT IONIZATION OF OXYGEN IONS

Loch et al. made experimental measurements of the ionization cross section for Oq+, where q = 1–4, performed with a crossed-beams apparatus, and compared them with theoretical calculations. For O+, the experimental measurements are in very good agreement with configuration-average time-dependent close-coupling calculations. For the remaining oxygen ions, the experimental measurements are in good agreement with time-independent distorted-wave calculations. As expected, the accuracy of the perturbative distorted-wave calculations improves with increasing ion charge.

S. D. Loch, J. P. Colgan, M. S. Pindzola, W. Westermann, F. Scheuermann, K. Aichele, D. Hathiramani, and E. Salzborn, "Electron-impact ionization of Oq+ ions for q = 1–4," Phys. Rev. A 67, 042714 (2003). FES, SciDAC, DFG

SIMULATION OF INTENSE BEAMS FOR HEAVY-ION FUSION

A multibeamlet approach to a high-current ion injector, whereby a large number of beamlets are accelerated and then merged to form a single beam, offers a number of potential advantages over a monolithic single-beam injector. These advantages include a smaller transverse footprint, more control over the shaping and aiming of the beam, and more flexibility in the choice of ion sources. A potential drawback, however, is a larger emittance. Grote et al. are studying the merging of beamlets (Figure 6) and how this merging determines emittance. Their results suggest that a multibeamlet injector can be built with a normalized emittance less than 1 π mm mrad.

D. P. Grote, E. Henestroza, and J. W. Kwan, "Design and simulation of a multibeamlet injector for a high-current accelerator," Phys. Rev. ST AB 6, 014202 (2003). FES

FIGURE 6. The merging of the beamlets creates complex space-charge waves that propagate across the beam and mix. This figure shows contours of density at several locations in configuration space (x-y) and in phase space (x-x′), at z = 0.5, 1.9, and 4.1 m, where z is the distance from the end of the preaccelerator column. They show that the individuality of the beamlets is quickly lost, leaving a nearly uniform beam.

SIMULATING THE THERMAL STRUCTURE OF SOLAR STORMS

Solar storms can potentially disrupt the operation of orbiting satellites, endanger astronauts, and cause failure of long-distance power grids on land. Understanding the physical mechanisms that originate these storms is of fundamental importance if they are to be forecast. Mok et al. simulated the thermal structure of the solar atmosphere in the neighborhood of a sunspot group in 3D for the first time. They found that the strong, structured magnetic field from the sunspots guides the heat flow and plasma flow, resulting in fine structures in temperature and density because of the complicated field-line topology.

Y. Mok, R. Lionello, Z. Mikic, and J. Linker, "Calculating the thermal structure of solar active regions in three dimensions," Astrophys. J. (submitted, 2003). FES, NSF, NASA

MODELING THE PLASMA-DENSITY LIMIT

Understanding the plasma-density limit is crucial for projecting the performance of future fusion reactors. Xu et al. have developed a model for the density limit in a tokamak plasma based on edge simulation results from the BOUT and UEDGE codes. Simulations of turbulence in tokamak boundary plasmas show that turbulent fluctuation levels and transport increase with collisionality. The simulations also show that it is easier to reach the density limit by increasing the density while holding the pressure constant than while holding the temperature constant.

X. Q. Xu, W. M. Nevins, T. D. Rognlien, R. H. Bulmer, M. Greenwald, A. Mahdavi, L. D. Pearlstein, and P. Snyder, "Transitions of turbulence in plasma density limits," Phys. Plasmas 10, 1773 (2003). FES, MIT, GA

MICROSTRUCTURE EVOLUTION IN IRRADIATED MATERIALS

The promise of fusion as a viable energy source depends on the development of structural materials capable of sustaining operation in harsh radiation conditions. The structure and mobility of self-interstitial atom (SIA) clusters have profound significance for the microstructural evolution of irradiated materials. Marian et al. have studied the effect of substitutional copper solute atoms on the mobility of self-interstitial clusters in Fe-Cu alloys. The effect of this oversized substitutional solute is to enhance the three-dimensional character of small-cluster diffusion and to affect general cluster diffusion properties.

J. Marian, B. D. Wirth, A. Caro, B. Sadigh, G. R. Odette, J. M. Perlado, and T. Diaz de la Rubia, "Dynamics of self-interstitial cluster migration in pure alpha-Fe and Fe-Cu alloys," Phys. Rev. B 65, 144102 (2002). FES

DESIGNING THE NATIONAL COMPACT STELLARATOR EXPERIMENT

The U.S. fusion community is constructing a compact, moderate-cost experiment, the National Compact Stellarator Experiment (NCSX), to investigate 3D confinement and stability. The configurations being studied are compact, are passively stable to known instabilities, and are designed to have a "hidden" symmetry that gives good transport properties. In addition, because of their enhanced stability, they should be free of disruptions. These configurations are designed to combine the best features of the tokamak and the stellarator. The NCSX has achieved a successful conceptual design review.

G. H. Neilson and the NCSX Team, "Physics considerations in the design of NCSX," Proceedings of the 19th IAEA Fusion Energy Conference, Lyon, France (2002). FES

STABILIZING MICROTURBULENCE IN PLASMAS

Bourdelle et al. have shown that microturbulence can be stabilized in the presence of steep temperature and density profiles. High values of |β′| have a stabilizing influence on drift modes. This may form the basis for a positive feedback loop in which high core β values lead to improved confinement, and to further increases in β. In high-β spherical tokamak plasmas, high |β′| rather than low aspect ratio is a source of stabilization. Therefore, the effect of high |β′| should be stabilizing in the plasmas of the National Spherical Torus Experiment.

C. Bourdelle, W. Dorland, X. Garbet, G. W. Hammett, M. Kotschenreuther, G. Rewoldt, and E. J. Synakowski, "Stabilizing impact of high gradient of β on microturbulence," Phys. Plasmas 10, 2881 (2003). FES

TRAPPED ELECTRON MODE TURBULENCE

Gyrokinetic simulations of turbulent particle transport in Alcator C-Mod by Ernst et al. have led to an understanding of the observed spontaneous particle transport barrier formation and its control using radio-frequency heating in the core. As the density gradient steepens, nonresonant, collisionless, trapped electron modes (TEMs) are driven unstable. Gyrokinetic stability analysis shows that the modes are driven by the density gradient, have wavelengths around twice the ion gyroradius, and are partially damped by collisions. After the TEM onset, the turbulent outflow strongly increases with the density gradient. This outflow ultimately balances the inward Ware pinch, leading to a steady state.

D. R. Ernst et al., "Role of trapped electron mode turbulence in density profile control with ICRH in Alcator C-Mod," International Sherwood Fusion Theory Conference, Corpus Christi, Texas, April 28–30, 2003. FES

DESIGNING COMPACT TOKAMAK-STELLARATOR HYBRIDS

Ware et al. described for the first time the complete physics properties of compact, drift-optimized tokamak-stellarator hybrids with a high-shear, tokamak-like rotational transform profile and a |B| that is quasipoloidally symmetric (Figure 7). The rotational transform in these hybrid configurations is produced primarily by a self-consistent bootstrap current. The nonaxisymmetric components of |B| yield a bootstrap current lower than that in an axisymmetric device, which leads to good magnetohydrodynamic stability properties without the need for a conducting wall. The neoclassical confinement improves with increasing β, leading to a new form of configurational invariance in these stellarators.

A. S. Ware, S. P. Hirshman, D. A. Spong, L. A. Berry, A. J. Deisher, G. Y. Fu, J. F. Lyon, and R. Sanchez, "High-β equilibria of drift-optimized compact stellarators," Phys. Rev. Lett. 89, 125003 (2002). FES

FIGURE 7. Last closed flux surface for two compact stellarator configurations: (a) QPS3, with three field periods, A = 3.7, β = 15%; and (b) QPS2, with two field periods, A = 2.7, β = 5%. Color indicates the variation in |B| (in tesla).

UNDERSTANDING NEOCLASSICAL TEARING MODES

Tearing modes are magnetic islands formed by the topological rearrangement of magnetic field lines through reconnection. The prevention of neoclassical tearing modes (NTMs) in tokamak plasmas is a major challenge because they can degrade plasma confinement. Ideal modes can seed NTMs through forced reconnection, yet in sawtoothing discharges it is not well understood why a particular sawtooth crash seeds an NTM after several preceding sawteeth did not. Also, tearing modes sometimes appear and grow without an obvious ideal mode causing a seed island. Based on theoretical and experimental results, Brennan et al. have proposed and tested a new mechanism for tearing-mode onset that explains these puzzling observations.

D. P. Brennan, R. J. La Haye, A. D. Turnbull, M. S. Chu, T. H. Jensen, L. L. Lao, T. C. Luce, P. A. Politzer, E. J. Strait, S. E. Kruger, and D. D. Schnack, "A mechanism for tearing onset near ideal stability boundaries," Phys. Plasmas 10, 1643 (2003). FES

SIMULATING PSEUDOCHAOTIC DYNAMICS

Nonlinear dynamics and chaos simulations are important for understanding the microscale of plasma turbulence, as well as for understanding general nonlinear dynamical effects. A family of random systems with zero Lyapunov exponent is called pseudochaos. Zaslavsky and Edelman have shown how the fractional kinetic equation can be introduced for pseudochaos and how the main critical exponents of fractional kinetics can be evaluated from the dynamics. Pseudochaotic dynamics and fractional kinetics can be applied to the behavior of streamlines or magnetic field lines.

G. M. Zaslavsky and M. Edelman, "Pseudochaos," in Perspectives and Problems in Nonlinear Science, edited by E. Kaplan, J. E. Marsden, and K. R. Sreenivasan (Springer-Verlag, New York, 2003), pp. 421–443. FES, ONR, NSF
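The fractional kinetic equation referred to above is commonly written (in the generic form used in Zaslavsky's work; the notation and the values of the exponents vary with the application)

    \frac{\partial^{\beta} P(x,t)}{\partial t^{\beta}} = \mathcal{D}\, \frac{\partial^{\alpha} P(x,t)}{\partial |x|^{\alpha}},

where P is the probability density of trajectories, \mathcal{D} is a generalized transport coefficient, and the fractional exponents \alpha and \beta are the critical exponents mentioned in the text; ordinary diffusion corresponds to \alpha = 2, \beta = 1.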

TURBULENT TRANSPORT WITH KINETIC ELECTRONS

Chen et al. have developed a new electromagnetic kinetic electron simulation model in 3D toroidal flux-tube geometry that uses a generalized split-weight scheme, in which the adiabatic part is adjustable. The model includes electron-ion collisional effects. For H-mode parameters, the nonadiabatic effects of kinetic electrons increase the linear growth rates of the ion-temperature-gradient-driven (ITG) modes, mainly due to trapped-electron drive. The ion heat transport is also increased from that obtained with adiabatic electrons. The linear behavior of the zonal flow is not significantly affected by kinetic electrons. The ion heat transport decreases to below the adiabatic electron level when finite plasma β is included, due to finite-β stabilization of the ITG modes.

Y. Chen, S. E. Parker, B. I. Cohen, A. M. Dimits, W. M. Nevins, D. Shumaker, V. K. Decyk, and J. N. Leboeuf, "Simulations of turbulent transport with kinetic electrons and electromagnetic effects," Nucl. Fusion 43, 1121 (2003). FES, NASA

HIGH ENERGY PHYSICS

LATTICE QCD CONFRONTS EXPERIMENT

For almost thirty years, precise numerical studies of nonperturbative QCD, formulated on a space-time lattice, have been stymied by an inability to include the effects of realistic quark vacuum polarization. The MILC, HPQCD, UKQCD, and Fermilab Lattice collaborations have presented detailed evidence of a breakthrough that may now permit a wide variety of high-precision, nonperturbative QCD calculations, including high-precision B and D meson decay constants, mixing amplitudes, and semi-leptonic form factors, all quantities of great importance in current experimental work on heavy-quark physics. The breakthrough comes from a new discretization for light quarks: Symanzik-improved staggered quarks. The researchers compared a wide variety of nonperturbative calculations in QCD with experiment, and found agreement to within statistical and systematic errors of 3% or less.

C. T. H. Davies, E. Follana, A. Gray, G. P. Lepage, Q. Mason, M. Nobes, J. Shigemitsu, H. D. Trottier, M. Wingate, C. Aubin, C. Bernard, T. Burch, C. DeTar, S. Gottlieb, E. B. Gregory, U. M. Heller, J. E. Hetrick, J. Osborn, R. Sugar, D. Toussaint, M. Di Pierro, A. El-Khadra, A. S. Kronfeld, P. B. Mackenzie, D. Menscher, and J. Simone, "High-precision lattice QCD confronts experiment," Phys. Rev. Lett. (submitted, 2003), hep-lat/0304004. HEP, NSF, PPARC

TWO-COLOR QCD WITH FOUR CONTINUUM FLAVORS

Recently there has been a re-evaluation of the old idea that quark pairs might condense, giving rise to a transition to a color superconducting state at high baryon-number density. Kogut et al. approached this question by examining the spectrum of two-color lattice QCD with four continuum flavors at a finite chemical potential (µ) for quark number, on a 12³ × 24 lattice. They found evidence that the system undergoes a transition to a state with a diquark condensate, which spontaneously breaks quark number at µ = mπ/2, and that this transition is mean field in nature. They then examined the three states that would be Goldstone bosons at µ = 0 for zero Dirac and Majorana quark masses.

J. B. Kogut, D. Toublan, and D. K. Sinclair, "The pseudo-Goldstone spectrum of 2-color QCD at finite density," Argonne National Laboratory report ANL-HEP-PR-03-022 (May 2003), hep-lat/0305003. HEP, NSF, HFS

MATERIALS SCIENCES

BIMETALLIC SURFACE SEGREGATION OF NANOPARTICLES

To better understand catalysts in fuel cells, Wang et al. used a Monte Carlo code built around the embedded atom method to investigate the segregation of platinum atoms on the surfaces of platinum-nickel nanoparticles. They used four kinds of nanoparticle shapes (cube, tetrahedron, octahedron, and cubo-octahedron) terminated by {111} and {100} facets to examine the extent of Pt segregation. Regardless of the shape and composition of the nanoparticles, the Pt atoms segregated preferentially to the facet sites, less to edge sites, and least to vertex sites in the outermost atomic layer of the nanoparticles.

G. Wang, M. A. Van Hove, and P. N. Ross, "Segregations in Pt-Ni catalyst nanoparticles," Surface Science (submitted, 2003). BES, SciDAC
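A bare-bones illustration of the kind of Monte Carlo move used in such segregation studies is given below (a generic Metropolis swap of Pt and Ni site occupations with a placeholder energy function; the actual study evaluated embedded-atom-method energetics for realistic nanoparticle geometries):

    import math, random

    k_B, T = 8.617e-5, 600.0   # Boltzmann constant in eV/K, illustrative temperature

    def energy(occ, n_surf):
        """Placeholder configurational energy in eV: Pt is favored on surface sites.
        A real study would evaluate the embedded atom method for the whole particle."""
        return sum(-0.1 for i, sp in enumerate(occ) if sp == "Pt" and i < n_surf)

    n_surf = 20
    occ = ["Pt"] * 20 + ["Ni"] * 20          # toy particle: 20 surface + 20 core sites
    random.shuffle(occ)
    E = energy(occ, n_surf)
    for _ in range(10000):
        i, j = random.sample(range(len(occ)), 2)
        if occ[i] == occ[j]:
            continue
        occ[i], occ[j] = occ[j], occ[i]      # trial swap of the two occupations
        E_new = energy(occ, n_surf)
        if E_new <= E or random.random() < math.exp(-(E_new - E) / (k_B * T)):
            E = E_new                        # accept the move
        else:
            occ[i], occ[j] = occ[j], occ[i]  # reject: undo the swap
    print("Pt fraction on surface sites:", sum(s == "Pt" for s in occ[:n_surf]) / n_surf)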

ELECTRONIC INTERACTION OF TOTA+ WITH DNA

Investigating the electronic interactions of compounds intercalated in DNA is important because of the biological and medical roles played by intercalators. Reynisson et al. made first-principles quantum mechanical calculations of trioxatriangulenium ion (TOTA+) binding to duplex DNA by intercalation. The electronic structure calculations revealed no meaningful charge transfer from DNA to TOTA+, but showed the importance of backbone, water, and counterion interactions, which shift the energy levels of the base and the intercalated TOTA+ orbitals significantly. The calculations also showed that the inserted TOTA+ strongly polarizes the intercalation cavity, where a sheet of excess electron density surrounds the TOTA+ (Figure 8).

J. Reynisson, G. B. Schuster, S. B. Howerton, L. D. Williams, R. N. Barnett, C. L. Cleveland, U. Landman, N. Harrit, and J. B. Chaires, "Intercalation of trioxatriangulenium ion with DNA: Binding, electron transfer, x-ray crystallography, and electronic structure," J. Am. Chem. Soc. 125, 2072 (2002). BES, NSF, NCI

FIGURE 8. Polarization-charge (δρ) isosurfaces calculated for the hydrated TOTA+(I) 3-bp duplex DNA segments. Magenta regions correspond to those where excess electronic charge is induced by the intercalation process, and yellow regions denote those where depletion of charge ensued. The upper panel shows δρ for the entire hydrated TOTA+(I) 3-bp duplex segment; in the middle (right) panel, the TOTA+ intercalator region is excluded; and in the bottom panel, both the TOTA+ intercalator and hydration shell regions are excluded. The latter illustrates clearly the electronic polarization that is induced by the intercalation of the TOTA cation.

ELECTRONIC STRUCTURE OF C60 MONOLAYERS

Exhibiting properties such as high-transition-temperature superconductivity and antiferromagnetism, C60-based fullerides are ideal model compounds for exploring key conceptual issues in strongly correlated physics. Yang et al. studied the structure and electronic structure of electron-doped C60 monolayers. The overall shape of the calculated band structure showed good agreement with the high-resolution angle-resolved photoemission experiments, indicating a reduction of the occupied bandwidth by about 40 percent due to electron-phonon interaction.

W. L. Yang, V. Brouet, X. J. Zhou, H. J. Choi, S. G. Louie, M. L. Cohen, S. A. Kellar, P. V. Bogdanov, A. Lanzara, A. Goldoni, F. Parmigiani, Z. Hussain, and Z.-X. Shen, "Band structure and Fermi surface of electron-doped C60 monolayers," Science 300, 303 (2003). BES, ONR, NSF

NUCLEATION OF SINGLE-WALLED CARBON NANOTUBES

The nucleation pathway for single-walled carbon nanotubes has been investigated using first-principles density-functional calculations. Fan et al. have shown that incorporation of pentagons at an early stage of nanotube growth reduces the number of dangling bonds and facilitates bonding to the metal. As a result, the formation of a capped single-walled nanotube is overwhelmingly favored compared to any structure with dangling bonds (such as graphite sheets) or to a fullerene.

X. Fan, R. Buczko, A. A. Puretzky, D. B. Geohegan, J. Y. Howe, S. T. Pantelides, and S. J. Pennycook, "Nucleation of single-walled nanotubes," Phys. Rev. Lett. 90, 145501 (2003). BES, ORNL, NSF, MEVU

INVESTIGATING THE CONNECTION BETWEEN ORBITAL AND SPIN ORDERING

Medvedeva et al. studied the complex nature of the fundamental connection between orbital ordering phenomena and spin ordering alignment on a microscopic scale using their recently developed average spin state calculation scheme. This scheme allowed them to treat a paramagnetic state and to describe successfully the experimental magnetic and orbital phase diagrams of LaMnO3, KCuF3, La0.5Sr1.5MnO4, and LaSr2Mn2O7. For the first time, they investigated from first principles the stability of charge and orbital ordering upon the onset of a magnetic transition in La0.5Sr1.5MnO4 and LaSr2Mn2O7.

J. E. Medvedeva, M. A. Korotin, V. I. Anisimov, and A. J. Freeman, "Charge and orbital ordering melting in layered LaSr2Mn2O7" (in preparation, 2003). BES

CALCULATING THE PROPERTIES OF AN ORGANIC SEMICONDUCTOR

There has been increasing interest in organic semiconductors such as pentacene for applications in electronic devices. Tiago et al. have studied the optical and electronic properties of crystalline pentacene, using a first-principles Green's function approach. They investigated the role of polymorphism on the electronic energy gap and linear optical spectrum by studying two different crystalline phases: the solution-phase structure and the vapor-phase structure. Charge-transfer excitons were found to dominate the optical spectrum. Excitons with sizable binding energies were predicted for both phases.

M. L. Tiago, J. E. Northrup, and S. G. Louie, "Ab initio calculation of the electronic and optical properties of solid pentacene," Phys. Rev. B 67, 115212 (2003). BES, NSF

RELATIVITY AND THE

STABILITY OF INTER-

METALLIC COMPOUNDS

Why are some intermetallic compounds stable while others are not? Wang and Zunger showed that the existence of stable, ordered 3d-5d intermetallics CuAu and NiPt, as opposed to the unstable 3d-4d isovalent analogs CuAg and NiPd, results from relativity. First, in shrinking the equilibrium volume of the 5d element, relativity reduces the atomic size mismatch with respect to the 3d element, thus lowering the elastic packing strain. Second, in lowering the energy of the bonding 6s,p bands and raising the energy of the 5d band, relativity enhances (diminishes) the occupation of the bonding (antibonding) bands. Raising the energy of the 5d band also brings it closer to the energy of the 3d band, improving the 3d-5d bonding.

L. G. Wang and A. Zunger, “Why are the 3d-5d compounds CuAu and NiPt stable, whereas the 3d-4d compounds CuAg and NiPd are not,” Phys. Rev. B 67, 092103 (2003). BES

ELECTRONIC STRUCTURES AND PROPERTIES OF VITAMIN B12

Methodologies developed for the materials sciences are being extended to the study of biomaterials and biomolecules. Kurmaev et al. have calculated the electronic structure of the vitamin B12 molecule and its co-enzymes: cyanocobalamin, methylcobalamin, and adenosylcobalamin. For the first time, all the side chains in these complex molecules were included in the ab initio calculation.

E. Z. Kurmaev, A. Moewes, L. Ouyang, L. Randaccio, P. Rulis, W. Y. Ching, M. Bach, and M. Neumann, “The electronic structure and chemical bonding of vitamin B12,” Europhys. Lett. 62, 582 (2003). BES, NSF, EU, RFBR, NSERC, NEDO, MIUR, CMR

DENSITY FUNCTIONAL CALCULATION OF LEAD ON SILICON

Lead and silicon can form a well-defined interface (Pb/Si) which is ideal for studying two-dimensional behavior. But despite extensive work in the literature, the structure of the different Pb phases formed on Si(111) is still not clear, especially when the Pb coverage is more than one monolayer (ML). Using first-principles calculations, Chan et al. have determined the atomic configuration, energy stability, and electronic structure of the different phases of Pb/Si(111). Their calculations will help experimenters identify the atomic model and domain wall arrangement for the controversial structures of Pb/Si(111) at the Pb coverage of 1 to 4/3 ML.

T.-L. Chan, C. Z. Wang, M. Hupalo, M. C. Tringides, Z.-Y. Lu, and K. M. Ho, “First-principles studies of structures and stabilities of Pb/Si(111),” Phys. Rev. B 68, 045410 (2003). BES

GROWTH AND FORMATION OF SILICON NANOCLUSTERS

Silicon nanoclusters have been the focus of intense theoretical and experimental interest, because the combination of the unique optical properties of silicon quantum dots and their compatibility with existing silicon-based and nanobiological technologies demonstrates great promise for numerous applications. Draeger et al., using density-functional and quantum Monte Carlo calculations, have demonstrated that the optical gap is significantly reduced by the presence of oxygen and fluorine atoms on the surface. This is an important result for experimentalists, as oxygen is a common contaminant in the procedure for synthesizing nanoparticles.

E. Draeger, J. C. Grossman, A. J. Williamson, and G. Galli, “Influence of synthesis conditions on the structural and optical properties of passivated silicon nanoclusters,” Phys. Rev. Lett. 90, 167402 (2003). BES

MOLECULAR-DYNAMICS SIMULATIONS OF KINETIC COEFFICIENTS

Asta et al. have demonstrated a new equilibrium molecular-dynamics simulation technique for calculating the kinetic coefficient (i.e., the proportionality constant between interface velocity and undercooling) of solid-liquid interfaces in elemental metals. This work represents a necessary first step towards studying dynamic properties of alloy solid-liquid interfaces near equilibrium.
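
For reference, the kinetic coefficient μ described above is defined through the standard linear relation between interface velocity v and undercooling ΔT (this equation is supplied as a reader’s aid and simply restates the definition in the text; it is not reproduced from the paper):

    v = \mu \, \Delta T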

M. Asta, D. Y. Sun, and J. J. Hoyt, “Role of atomistic simulation in the modeling of solidification microstructure,” in NATO-ASI Proceedings: Thermodynamics, Microstructure and Plasticity, ed. by A. Finel (in press). BES

SOLVING THE MYSTERY OF EXCHANGE BIAS

Exchange bias, a shift in magnetization that occurs when two different kinds of magnet come into contact, is used in numerous electronic applications such as magnetic multilayer storage and read head devices, but the effect is still not properly understood. Canning et al. have performed first-principles spin-dynamics simulations of the magnetic structure of iron-manganese/cobalt (FeMn/Co) interfaces (Figure 9). These quantum mechanical simulations, involving 2,016-atom supercell models, reveal details of the orientational configuration of the magnetic moments at the interface that are unobtainable by any other means, offering important clues to solve the mystery of exchange bias.


A. Canning, B. Ujfalussy, T. C. Schulthess, X.-G. Zhang, W. A. Shelton, D. M. C. Nicholson, G. M. Stocks, Y. Wang, and T. Dirks, “Parallel multi-teraflops studies of the magnetic structure of FeMn alloys,” Proc. Int. Parallel and Distributed Processing Symposium (IPDPS’03, April 2003, Nice, France), p. 256b. BES, ASCR-MICS

FIGURE 9. Visualization of the exchange-bias system initial state, viewed from the direction of the y-axis. The arrows represent the orientation of the magnetic moments. The gold balls represent the Fe atoms, the grey ones the Mn atoms, while the green and pink balls stand for the relaxed and unrelaxed Co atoms. On the left, the 3Q antiferromagnetic ordering is schematically displayed within the conventional fcc structure.

FAST LARGE-SCALE NANOSTRUCTURE CALCULATIONS

Fast first-principles calculation of thousand-atom systems is a challenge in computational nanoscience. Li and Wang have found a method to solve this challenge, using the charge-patching method for the charge density and an idealized surface passivation. It takes only about one hour on 64 processors to calculate a thousand-atom system with first-principles accuracy. Thousand-atom quantum dots for various materials were calculated for the first time.

J. B. Li and L. W. Wang, “First principle thousand atom quantum dot calculations,” Phys. Rev. Lett. (submitted, 2003). BES

UNDERSTANDING ATOMIC ORDERING OF SEMICONDUCTOR ALLOYS

It has long been understood that atomic ordering, widely observed in certain epitaxially grown semiconductor alloys, is driven by surface thermodynamics and/or by growth kinetics, but not by bulk thermodynamics. Batyrev et al. have proposed a microscopic growth model for the ordering of gallium arsenide antimonide (GaAsSb) to account for the observed 3D ordered structure. For the first time, an interplay between ordering and surface reconstruction during the growth was proposed to explain the experimental observations. The model is qualitatively different from the widely accepted surface reconstruction- and dimerization-induced ordering models that explain only the in-plane 2D patterns.

I. G. Batyrev, A. G. Norman, S. B. Zhang, and S.-H. Wei, “Growth model for atomic ordering: The case for quadruple-period ordering in GaAsSb alloys,” Phys. Rev. Lett. 90, 026102 (2003). BES

MEDICAL AND LIFE SCIENCES

COMPUTATIONAL MODELING OF PROTEIN SWITCHES

The Src-family kinases and their close relatives, such as Abl, function as molecular switches and are important targets for therapeutic intervention; for example, the c-Abl inhibitor Gleevec is used to treat chronic myelogenous leukemia. The development of specific inhibitors is complicated by the fact that human cells contain ~500 different protein kinases, all of which catalyze the same chemical reaction and contain highly conserved active sites. Because crystal structures provide only static views of certain states of each kinase, Nagar et al. used molecular dynamics simulations to help understand specific switching mechanisms. The simulations confirmed that the regulatory mechanism of the proto-oncogenic Abl protein shares an important feature with the regulatory mechanism of the related Src-family tyrosine kinases (Figure 10). This result was verified by in vitro experiments.

B. Nagar, O. Hantschel, M. A. Young, K. Scheffzek, D. Veach, W. Bornmann, B. Clarkson, G. Superti-Furga, and J. Kuriyan, “Structural basis for the autoinhibition of c-Abl tyrosine kinase,” Cell 112, 859 (2003). BER, HFSP, EMBL, EMBO, AF, GNMF, CAG, DRCRF, BWF

FIGURE 10. (A) Ribbon and surface representations of c-Abl. (B) Superposition of the structure of c-Src.

THE IMPACT OF CARCINOGENS ON DNA STRUCTURE

Understanding the 3D structure of DNA that has been modified by chemical carcinogens is important in understanding the molecular basis of cancer and chemical toxicology. Peterson et al. studied [POB]dG, which is produced from metabolically activated tobacco-specific nitrosamines. This compound modifies the base guanine, posing a significant cancer risk. The results of nuclear magnetic resonance spectra for [POB]dG were used as inputs to the DUPLEX code, which produced a minimum energy structure that agreed with the NMR data.

L. A. Peterson, C. Vu, B. E. Hingerty, S. Broyde, and M. Cosman, “Solution structure of an O6-[4-oxo-4-(3-Pyridyl)butyl]guanine adduct in an 11mer DNA duplex: Evidence for formation of a base triplex,” Biochemistry ASAP, DOI: 10.1021/bi035217v (2003). BER, NIH, ORNL



MODELING ENZYME STRUCTURE AND KINETICS

Although information on the function of enzymes has been obtained experimentally, there is still an essential missing link between the structures and biomolecular dynamics and function. Yang et al. studied the binding change mechanism of F1-ATPase, the enzyme responsible for most of the ATP synthesis in living systems. By computing the chemical potential, they identified βTP as the tight binding site for ATP and βDP as the loose site. The half-closed site was identified as the binding site for the solution ADP and Pi in ATP synthesis; it is different from the empty binding site for ATP hydrolysis. Based on this result, a consistent structural and kinetic model for the binding change mechanism is being developed for F1-ATPase.

W. Yang, Y. Q. Gao, Q. Cui, J. P. Ma, and M. Karplus, “The missing link between thermodynamics and structure in F1-ATPase,” PNAS 100, 874 (2003). BER, NIH

INTERACTIONS BETWEEN AMINO ACIDS AND NUCLEOBASES

There is a paucity of theoretical information about interactions between amino acids and nucleobases. Dabkowska et al. made electronic-structure calculations concerning the simplest amino acid-nucleobase complex, i.e., the dimer of glycine and uracil. Glycine is the smallest amino acid, and uracil is a pyrimidine nucleobase building block of RNA. They demonstrated that the most stable complexes between uracil and glycine are formed when the carboxylic group of glycine is bound through two hydrogen bonds to uracil.

I. Dabkowska, J. Rak, and M. Gutowski, “Computational study of hydrogen-bonded complexes between the most stable tautomers of glycine and uracil,” J. Phys. Chem. A 106, 7423 (2002). BER, KBN

NUCLEAR PHYSICS

QUANTUM MONTE CARLO STUDIES OF FERMI GASES

Determining the properties of Fermi gases is an intriguing topic for many-body physics, with applications to phenomena such as neutron stars. Carlson et al. conducted quantum Monte Carlo calculations of superfluid Fermi gases with short-range two-body attractive interactions with infinite scattering length. The energy of such gases is estimated to be (0.44 ± 0.01) times that of the non-interacting gas, and their pairing gap is approximately twice the energy per particle.
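
Written in the notation conventionally used for this problem (a restatement of the numbers quoted above; the free-gas expression for the energy per particle is the standard textbook result, assumed here rather than taken from the paper):

    \frac{E}{N} = \xi \, E_{FG}, \qquad \xi = 0.44 \pm 0.01, \qquad
    E_{FG} = \frac{3}{5} \frac{\hbar^2 k_F^2}{2m}, \qquad
    \Delta \approx 2 \, \frac{E}{N}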

J. Carlson, S.-Y. Chang, V. R. Pandharipande, and K. E. Schmidt, “Superfluid Fermi gases with large scattering length,” Phys. Rev. Lett. (in press), arXiv:physics/0303094 (2003). NP, NSF

DATA ANALYSIS AND SIMULATIONS FOR KAMLAND

The primary goal of the Kamioka Liquid Scintillator Anti-Neutrino Detector (KamLAND) is to search for the oscillation of antineutrinos emitted from distant power reactors. KamLAND’s first results indicate that it is measuring only 61% of the reactor anti-neutrino flux expected in the no-oscillation hypothesis. This result (using CPT invariance) excludes all solutions to the solar-neutrino problem except for the “large mixing angle” (LMA) region (Figure 11).

K. Eguchi et al. (KamLAND Collaboration), “First results from KamLAND: Evidence for reactor antineutrino disappearance,” Phys. Rev. Lett. 90, 021802 (2003). NP, MEXT

FIGURE 11. The neutrino oscillation parameter region for two-neutrino mixing of solar neutrino experiments. The solid circle shows the best fit to the KamLAND data.

EXCITED BARYON PHYSICS FROM LATTICE QCD

Unraveling the nature of the Roper resonance (the first excited state of the nucleon with the same quantum number) has a direct bearing on understanding the quark structure and chiral dynamics of baryons, one of the primary missions at laboratories such as Jefferson Lab. Dong et al. have calculated the correct level ordering between the Roper (1440) resonance and the S11 (1535) resonance for the first time on the lattice. This result verifies that the Roper state is a radial excitation of the nucleon with three valence quarks, and indicates that spontaneously broken chiral symmetry dictates the dynamics of light quarks in the nucleon.

S. J. Dong, T. Draper, I. Horvath, F. X. Lee, K. F. Liu, N. Mathur, and J. B. Zhang, “Roper resonance and S11 (1535) in lattice QCD,” Phys. Rev. Lett. (submitted, 2003), hep-ph/0306199. NP

ELECTRONIC STRUCTURE OF SUPERHEAVY ELEMENTS

Malli investigated the effects of relativity on the electronic structure and bonding in the tetroxides of the superheavy element hassium and its lighter congener osmium, with two major conclusions: (1) Relativistic effects lead to a dramatic increase of ~225% and 185% in the predicted atomization energy for HsO4 and OsO4, respectively. (2) Mulliken population analysis of the Dirac-Fock self-consistent field wave functions, in contrast to the nonrelativistic Hartree-Fock wave function, predicts the HsO4 to be less volatile than the lighter congener OsO4, and this prediction is in accord with the first experimental work on hassium.

G. L. Malli, “Dramatic relativistic effects in atomization energy and volatility of the superheavy hassium tetroxide and OsO4,” J. Chem. Phys. 117, 10441 (2002). NP

EXPLORING NUCLEON STRUCTURE USING LATTICE QCD

An important question about the nucleon and its excited state, ∆, is whether they are spherical or deformed. Recent experiments have accurately measured the electric and Coulomb quadrupole and magnetic dipole multipoles of the nucleon-to-∆ transition form factor, which directly reflect the presence of deformation. Alexandrou et al. subsequently calculated this form factor using lattice QCD. The calculated magnetic dipole form factor and electric quadrupole amplitude were consistent with experimental results, but systematic errors due to lattice artifacts prevented a determination of the Coulomb quadrupole form factor. Further study of these lattice artifacts is needed for better control of systematic errors.

C. Alexandrou, P. de Forcrand, T. Lippert, H. Neff, J. W. Negele, K. Schilling, and A. Tsapalis, “N to ∆ electromagnetic transition form factor from lattice QCD,” hep-lat/0307018 (2003). NP, ESOP, UC

HFB CONTINUUM PROBLEM SOLVED FOR DEFORMED NUCLEI

One of the fundamental questions of nuclear structure physics is, What are the limits of nuclear stability? Terán et al. have solved, for the first time, the Hartree-Fock-Bogoliubov (HFB) continuum problem in coordinate space for deformed nuclei in two spatial dimensions without any approximations. The novel feature of the new Vanderbilt HFB code is that it takes into account high-energy continuum states with an equivalent single-particle energy of 60 MeV or more. In the past, this has only been possible in 1D calculations for spherical nuclei.

E. Terán, V. E. Oberacker, and A. S. Umar, “Axially symmetric Hartree-Fock-Bogoliubov calculations for nuclei near the drip-lines,” Phys. Rev. C 67, 064314 (2003). NP





MEASURING THE FATE OF PARTON FRAGMENT JETS

In collisions of heavy nuclei at high energies, a new state of matter consisting of deconfined quarks and gluons at high density is expected. Large transverse momentum (pT) partons in the high-density system result from the initial hard scattering of nucleon constituents. After a hard scattering, the parton fragments to create a high-energy cluster (jet) of particles. The STAR Collaboration measured two-hadron angular correlations at large transverse momentum for p + p and Au + Au collisions. These measurements provided the most direct evidence for production of jets in high-energy nucleus-nucleus collisions and allowed the first measurements of the fate of back-to-back jets in the dense medium as a function of the size of the overlapping system. The back-to-back correlations were reduced considerably in the most central Au + Au collisions, indicating substantial interaction as the hard-scattered partons or their fragmentation products traversed the medium.

C. Adler et al. (STAR Collaboration), “Disappearance of back-to-back high pT hadron correlations in central Au + Au collisions at √sNN = 200 GeV,” Phys. Rev. Lett. 90, 082302 (2003). NP, HEP, NSF, BMBF, IN2P3, EPSRC, FAPESP, RMST, MEC, NNSFC

CALCULATING THE ENERGY SPECTRA OF LIGHT NUCLEI

The absence of stable five- or eight-body nuclei is crucial to both primordial and stellar nucleosynthesis, enabling stars such as our sun to burn steadily for billions of years. Wiringa and Pieper demonstrated that the binding energies, excitation structure, and relative stability of light nuclei, including the opening of the A = 5 and 8 mass gaps, are crucially dependent on the complicated structure of the nuclear force. They calculated the energy spectra of light nuclei using a variety of nuclear-force models ranging from very simple to fully realistic, and they observed how features of the experimental spectrum evolve with the sophistication of the force. They found that the absence of stable five- and eight-body nuclei depends crucially on the spin, isospin, and tensor components of the nuclear force.

R. B. Wiringa and S. C. Pieper, “Evolution of nuclear spectra with nuclear forces,” Phys. Rev. Lett. 89, 182501 (2002). NP


NERSC CENTER

CLIENTS, SPONSORS, AND ADVISORS

NERSC served 2,323 scientists throughout the United States in FY 2003. These researchers work in DOE laboratories, universities, industry, and other Federal agencies. Figure 1 shows the proportion of NERSC usage by each type of institution, while Figures 2 and 3 show laboratory, university, and other organizations that used large allocations of computer time. Computational science conducted at NERSC covers the entire range of scientific disciplines, but is focused on research that supports the DOE’s mission and scientific goals, as shown in Figure 4.

Allocations of computer time and archival storage at NERSC are awarded to research groups, regardless of their source of funding, based on an annual review of proposals. As proposals are submitted, they are subjected to peer review by the Computational Review Panel (CORP, Appendix B) to evaluate the computational approach and the readiness of the applicant’s codes to fully utilize the computing resources being requested. DOE Office of Science program managers and the Supercomputing Allocations Committee (SAC, see Appendix D) review and make award decisions for all “base award” requests, reflecting their mission priorities. SciDAC awards are made by the SciDAC Resource Allocation Management Team, while allocations for small startup projects are determined by NERSC.

FIGURE 1. NERSC usage by institution type, FY03: DOE labs 55%, universities 35%, industries 6%, other labs 4%.

FIGURE 2. DOE and other federal laboratory usage at NERSC, FY03.


For FY 2004, a new program entitled Innovative and Novel Computational Impact on Theory and Experiment (INCITE) awarded 10% of NERSC resources to three computationally intensive research projects that were judged capable of making high-impact scientific advances through the use of a large allocation of computer time and data storage. The three projects are: Thermonuclear Supernovae: Stellar Explosions in Three Dimensions; Fluid Turbulence and Mixing at High Reynolds Number; and Quantum Monte Carlo Study of Photoprotection via Carotenoids in Photosynthetic Centers.

Two other groups provide general oversight for the NERSC Center: the NERSC Policy Board (Appendix A) advises the Berkeley Lab Director on the policies that determine the impact and performance of the NERSC Center, and the NERSC Users Group (Appendix C) advises the NERSC management and provides feedback from the user community. DOE program management is provided by the Office of Advanced Scientific Computing Research (Appendix E), with advice from the Advanced Scientific Computing Advisory Committee (Appendix F).

FIGURE 3. Academic and private laboratory usage at NERSC, FY03.

FIGURE 4. NERSC usage by scientific discipline, FY03.


CAPABILITY AND SCALABILITY

The past year was marked by a dramatic increase in NERSC’s computational capability, resulting in an impressive upsurge in the productivity of many research projects. But in traditional NERSC fashion, these changes took place almost seamlessly, with little disruption to users’ daily work.

In 2002, after requesting proposals to replace NERSC’s soon-to-be decommissioned Cray systems, the procurement team, led by Deputy Director Bill Kramer, made the decision to increase the capability of the existing IBM SP Power 3 system, rather than purchase an entirely new one. This expansion gave the user community an unexpected increase of 30 million MPP hours in FY 2003, and doubled the hours that were allocated for FY 2004. “The bottom line was choosing a system that would have the most cost-effective impact on DOE science,” Kramer said when the decision was announced. The new contract includes five years of support for the combined system, named “Seaborg.”

The new Seaborg has 416 16-CPU Power 3+ SMP nodes (6,656 processors) with a peak performance of 10 teraflop/s, which made it the most powerful computer for unclassified research in the United States, with the largest memory capacity, at the time of its installation. The system has 7.8 terabytes of aggregate memory and a Global Parallel File System with 44 terabytes of storage.
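
As a rough consistency check on the peak figure (the per-processor numbers are assumptions based on the published 375 MHz POWER3+ specification of 4 floating-point operations per cycle, not values stated in this report):

    6{,}656 \ \text{processors} \times 4 \ \text{flops/cycle} \times 0.375 \ \text{GHz}
    \approx 9{,}984 \ \text{Gflop/s} \approx 10 \ \text{teraflop/s}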

A team of NERSC staff (Figure 1) worked for six months to configure, install, and test the system. The system was available more than 98 percent of the time during testing, and benchmarks ran at 72 percent of peak speed, much higher than that achieved on similar parallel systems. The team resolved major configuration issues, such as whether to operate Seaborg as one or two systems (they decided on one), and brought it into production status on March 4, 2003, one month earlier than planned. The expanded system quickly achieved the high availability and utilization rates that NERSC demands of itself (Figure 2).

FIGURE 1. The Seaborg expansion procurement and installation teams doubled NERSC’s computing resources 20 months earlier than expected and for 75% of the cost expected, without interrupting service to the research community.

Initial results from the expanded Seaborg showed some scientific applications running at up to 68% of the system’s theoretical peak speed, compared with the 5–10% of peak performance typical for scientific applications running on massively parallel or cluster architectures. Performance results for four science-of-scale applications are summarized in Table 1. (More details are available at http://www.nersc.gov/news/newNERSCresults050703.pdf.)

To learn about how Seaborg performs as a production environment for a larger sample of scientific applications, NERSC conducted a survey of user programs that run on a large number of nodes, as well as programs that run on a smaller number of nodes but account for a significant fraction of time used on the system. The report “IBM SP Parallel Scaling Overview” (http://www.nersc.gov/computers/SP/scaling) discusses issues such as constraints to scaling, SMP scaling, MPI scaling, and parallel I/O scaling. The report offers recommendations for choosing the level of parallelism best suited to the characteristics of the code.

Improving the scalability of users’ codes helps NERSC achieve goals set by the DOE Office of Science: that 25% of computing time in FY 2003 and 50% in FY 2004 be used by jobs that require 512 or more processors. Job sizes for FY 2003 are shown in Figure 3.

The scaling report notes that as Seaborg doubled in size, the number of jobs increased correspondingly, and over time the number of jobs then decreased as the parallelism of jobs increased (Figure 4). The waiting time for large-concurrency jobs also changed in accordance with changes in queueing policies (Figure 5).

Another measure of Seaborg’s performance is Sustained System Performance (SSP). The SSP metric was developed at NERSC as a composite performance measurement of codes from five scientific applications in the NERSC workload: fusion energy, materials sciences, cosmology, climate modeling, and quantum chromodynamics. Thus, SSP encompasses a range of algorithms and computational techniques and manages to quantify system performance in a way that is relevant to scientific computing.

FIGURE 2. NERSC strives to maintain a high rate of processor utilization, balancing this goal with good job turnaround times.

TABLE 1. Performance Results for Science-of-Scale Applications

  Project                                       Number of processors   Performance (% of peak)
  Electromagnetic Wave-Plasma Interactions      1,936                  68%
  Cosmic Microwave Background Data Analysis     4,096                  50%
  Terascale Simulations of Supernovae           2,048                  43%
  Quantum Chromodynamics at High Temperatures   1,024                  13%



To obtain the SSP metric, a floating-point operation count is obtained from microprocessor hardware counters for the five applications. A computational rate per processor is calculated by dividing the floating-point operation count by the sum of observed elapsed times of the applications. Finally, the SSP is calculated by multiplying the computational rate by the number of processors in the system.
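
The bookkeeping described above can be summarized in a short sketch (Python is used purely for illustration; the flop counts and times below are hypothetical placeholders, and summing the counts and times across the five codes is an assumption about the exact averaging, which the text does not spell out):

    # Sustained System Performance (SSP), as described in the text:
    # per-processor rate = flop count / sum of observed elapsed times,
    # SSP = per-processor rate * number of processors in the system.

    def ssp(flop_counts_gflop, elapsed_times_sec, num_processors):
        """Return the SSP in Gflop/s from per-processor flop counts (Gflop)
        and observed elapsed times (seconds) of the benchmark applications."""
        rate_per_processor = sum(flop_counts_gflop) / sum(elapsed_times_sec)
        return rate_per_processor * num_processors

    # Hypothetical five-application suite (fusion, materials, cosmology,
    # climate, QCD); the values below are illustrative only.
    flops = [700.0, 500.0, 600.0, 400.0, 650.0]       # Gflop per processor
    times = [3600.0, 2400.0, 3000.0, 1800.0, 3300.0]  # seconds

    print(f"SSP = {ssp(flops, times, 6656):.0f} Gflop/s")

With these placeholder inputs the sketch prints an SSP of roughly 1,345 Gflop/s, in the same range as the post-expansion figure reported in the following paragraph.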

The SSP is used as the primary performance figure for NERSC’s contract with IBM. It is periodically reevaluated to track performance and monitor changes in the system. For example, the performance impact of a compiler upgrade is readily detected with a change in SSP. The expansion of Seaborg more than doubled the SSP, from 657 to 1357.

During the acceptance period for the Seaborg expansion, there were reports of new nodes performing slightly better than old nodes, which was puzzling since the hardware is identical. All of these reports involved parallel MPI codes; no performance differences were detected on serial codes. The performance differences appeared to depend on the concurrency and the amount of synchronization in the MPI calls used in the code, but periodic observations and testing provided inconsistent results, and at times no difference could be measured. So in October 2003, groups of old and new nodes were set aside for in-depth testing.

A critical insight occurred when the control workstation happened to be inoperable during the node testing. During that time, the performance of the old nodes improved to match the new nodes. NERSC staff discovered that the control workstation ran certain commands from its problem management subsystem up to 27 times more often on old nodes than on new nodes. Once the problem was pinpointed, it was quickly resolved by deleting four deactivated problem definitions. Resolving this issue improved the synchronization of MPI codes at high concurrency, resulting in faster and more consistent run times for most NERSC users.

FIGURE 3. Job sizes on Seaborg over the course of FY 2003, binned by processor count (1–63, 64–255, 256–511, 512–1023, 1024–2047, and >2047 processors). DOE’s goal for NERSC is to increase the percentage of jobs that use 512 or more processors.

FIGURE 4. Number and size of jobs run on Seaborg in May 2002 (left) and May 2003 (right). The horizontal axis is time. The colored rectangles are jobs, with start and stop time depicted by width, and level of parallelism (number of nodes) depicted by height. Colors signify different users. The rectangles are arranged vertically to avoid overlapping.


In addition to the Seaborg expansion, NERSC also upgraded its PDSF Linux cluster and High Performance Storage System (HPSS).

The PDSF, which supports large data capacity serial applications for the high energy and nuclear physics communities, was expanded to 412 nodes with a disk capacity of 135 terabytes (TB). New technologies incorporated this year include Opteron chips and 10 TB of configurable network attached storage (NAS).


FIGURE 5. Queue waiting time for jobs run on Seaborg in May 2002 (left) and May 2003 (right). Jobs are depicted as in Figure 4, except that here the color range signifies waiting time, with blue depicting a shorter wait and red a longer wait.

FIGURE 6. Cumulative storage by month, 1998–2003.


At NERSC, the data in storage doubles almost every year (Figure 6). As of 2003, NERSC stores approximately 850 TB of data (30 million files) and handles between 3 and 6 TB of I/O per day (Figure 7). To keep up with this constantly increasing workload, new technology is implemented as it becomes available. NERSC currently has two HPSS systems, one used primarily for user files and another used primarily for system backups. The maximum theoretical capacity of NERSC’s archive is 8.8 petabytes, the buffer (disk) cache is 35 TB, and the theoretical transfer rate is 2.8 gigabytes per second. Implementation of a four-way stripe this year increased client data transfers to 800 megabytes per second.
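
For scale (simple arithmetic on the figures just quoted, not additional measurements), the totals imply an average archived file size of roughly

    850 \ \text{TB} / (3 \times 10^{7} \ \text{files}) \approx 28 \ \text{MB per file},

and the 3–6 TB moved per day corresponds to touching only about 0.4–0.7% of the archive’s contents on a typical day.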

FIGURE 7. I/O by month, 1998–2003.


COMPREHENSIVE SCIENTIFIC SUPPORT

Many of the important breakthroughs in computational science are expected to come from large, multidisciplinary, multi-institutional collaborations working with advanced codes and large datasets, such as the SciDAC and INCITE collaborations. These teams are in the best position to take advantage of terascale computers and petascale storage, and NERSC provides its highest level of support to these researchers. This support includes specialized consulting support; special service coordination for queues, throughput, increased limits, etc.; specialized algorithmic support; special software support; visualization support; conference and workshop support; Web server and Web content support for some projects; and CVS servers and support for community code management.

Providing this level of support sometimes requires dogged determination. That was the case when the Center for Extended MHD Modeling, a SciDAC project, reported that one of their plasma simulation codes was hanging up intermittently. NERSC consultant Mike Stewart (Figure 1) tested the code on Seaborg in both heavily and lightly loaded scenarios, at various compiler optimization levels, on different combinations of nodes, and on NERSC’s SP test system. He submitted the bug to IBM as a “severity 2” problem; but after a week passed with no response, he asked the local IBM staff to investigate. They put Mike in touch with an IBM AIX developer, at whose request he ran more test cases. Mike even wrote a FORTRAN code to mimic the C test case, and it displayed the same problem.

Using TotalView debugging software, Mike finally pinpointed the location of the bug in an MPI file read routine. With this new information, IBM assigned the problem to an MPI developer, who worked with Mike for several hours by phone to fix the bug. IBM then released a revised MPI runtime library that allowed the plasma code to run without interruption. Without Mike’s persistence, the researchers’ work might have been delayed for months.

FIGURE 1. NERSC consultant Mike Stewart’s persistence paid off in the timely fix of an obscure software bug.

NERSC staff are not content to wait for users to report problems. Consultant Richard Gerber (Figure 2) has taken extra initiative to strengthen support for SciDAC and other strategic projects by staying in touch with principal investigators and by attending project meetings, where he discusses the state of their research efforts and ways in which NERSC can contribute to their success. From these discussions, Richard has undertaken a number of successful projects to improve code performance and to resolve problems, such as finding a way to install a logistical networking node that meets NERSC security requirements.

FIGURE 2. Richard Gerber has made significant contributions to several research projects by improving code performance and helping scientists use NERSC resources effectively.

One particularly difficult problem involved the SciDAC Accelerator Science and Technology project’s 3D beam modeling code, which was scaling poorly on the expanded Seaborg. Richard’s initial analysis showed that the code’s runtime performance depended strongly on the number of tasks per node, not on the number of nodes used. He then pinpointed the problem as the default threading strategy used by IBM to implement the FORTRAN random number intrinsic function, and he enlisted the help of other NERSC consultants to find a solution. Jonathan Carter and Mike Stewart found two obscure references to this function in IBM documentation that pointed the way to a solution. Richard used this information to conduct some tests, and found that a runtime setting that controls the number of threads used in the intrinsic function can increase code performance by a factor of 2 when running 16 tasks on one node. Using this setting, the beam modeling code is now performing better than ever.

With the variety of codes that run on NERSC systems, NERSC tries to provide users with flexibility and a variety of options, avoiding a “one size fits all” mentality. For years NERSC and other high performance computing (HPC) sites have been urging IBM to support multiple versions of compilers, since not all codes get the best performance from the same compiler version. When IBM provided scripts to install versions newer than the default compiler, but not older versions, NERSC staff saw an opportunity they could take advantage of (Figure 3). David Skinner analyzed these complicated scripts and with great insight was able to engineer this process for older compilers as well. Majdi Baddourah built extensive test cases and showed ingenuity in resolving the problems these test cases exposed. David Paul integrated the multiple compilers into the system administration structure. Together they made the use of multiple compilers fail-safe for users, and they gave helpful feedback to IBM on improvements in compiler support.

FIGURE 3. David Skinner, David Paul, and Majdi Baddourah (not pictured) helped NERSC users get the best performance out of their codes by implementing support for multiple compilers.

To maintain quality services and a high level of user satisfaction, NERSC conducts an annual user survey. In 2003, 326 users responded, the highest response level in the survey’s six-year history. Overall satisfaction with NERSC was 6.37 on a 7-point satisfaction scale. Areas with the highest user satisfaction (6.5–6.6) were HPSS reliability, timely consulting response, technical advice, HPSS uptime, and local area network. Areas with the lowest user satisfaction (4.7–5.0) were Access Grid classes, visualization software and services, and training.

In answer to the question “What does NERSC do well?” respondents pointed to NERSC’s good hardware management practices, user support and responsive staff, documentation, and job scheduling and throughput. The question “What should NERSC do differently?” prompted concerns about favoring large jobs at the expense of smaller ones, as well as requests for more resources devoted to interactive computing and debugging, more compute power overall, different architectures, mid-range computing support, vector architectures, better documentation, and more training. Complete survey results are available at http://hpcf.nersc.gov/about/survey/2003/first.php.

During the past year, NERSC instituted several changes based on the results of the 2002 survey:

• Run-time limits were extended so that jobs could run longer, especially jobs running on more than 32 nodes.

• Interactive jobs were given a higher priority than debugging jobs.

• Two new queue classes were added to facilitate pre- and post-processing and data transfers to HPSS.

• Software was enhanced to make the SP environment easier to use.

• The NERSC user web site (http://hpcf.nersc.gov) was restructured with new navigation links that make finding information faster and easier.

• The presentation of SP documentation was streamlined.

• More training was provided on performance analysis, optimization, and debugging.

• More information on initial account setup was added to the New User Guide, which was also reformatted for ease of use.


CONNECTIVITY

As a national facility whose users routinely transfer massive quantities of data, NERSC needs fast networks as much as fast computers. DOE’s Energy Sciences Network (ESnet) provides high performance connections between DOE sites as well as links to other networks on the Internet. This year NERSC upgraded its border router connection to ESnet to OC-48 (2.4 Gb/s). Two one-gigabit Ethernets channeled together link the router with NERSC’s internal network. The router itself was upgraded to Jumbo Frames capability, enabling NERSC to send 9,000-byte data packets across the Internet instead of the previous 1,500-byte packets.

Integration of NERSC’s computing and storage systems into the DOE Science Grid is almost complete. All production systems have GridFTP data transfer functionality, including Seaborg, HPSS, PDSF, and the math and visualization servers. Globus libraries have been installed on Seaborg’s interactive nodes and will be running on all nodes in early 2004, when the Globus Gatekeeper interface will also be installed, providing basic data and job submission services. A Grid interface to the NERSC Information Management (NIM) system will make it easier for users to get Grid authentication certificates. The active intrusion-detection program Bro is being modified to monitor Grid traffic. And a Grid-enabled Web portal interface called VisPortal is being developed to deliver visualization services to Grid collaborators (see below).

In the past year, viruses and worms with names like Nimda, SoBig, Klez, Slammer, Blaster, and Bugbear were wreaking havoc across the Internet. In the case of SoBig, over one million computers were infected within the first 24 hours and over 200 million computers were infected within a week. Nevertheless, NERSC had no major security incidents, thanks to its alert security staff and intrusion detection by Bro, which is updated regularly to defend against emerging threats. In addition, NERSC’s server team upgraded the email server and added a spam filter.

When DOE officials realized that they were unable to categorize and measure different types of network traffic on ESnet, NERSC’s networking and security group, led by Howard Walter, stepped in to meet the challenge. In less than a month, using network monitoring data from Bro, routers, and databases, Brent Draney, Eli Dart, and Scott Campbell produced data that clearly categorized more than 95% of all network traffic (Figure 1). The categories they developed (bulk data, database, Grid, interactive, mail, system services, and Web) are now in use at all DOE laboratories and facilities to monitor and measure network traffic. The most important outcome of this effort was the demonstration that ESnet is very different from commercial Internet service providers, and that DOE network traffic is clearly driven by science.

FIGURE 1. Eli Dart, Scott Campbell, Brent Draney, and Howard Walter developed a methodology to categorize network traffic that has been adopted across the DOE complex.

Although the Berkeley Lab/NERSC Bandwidth Challenge team (Figure 2) retired from the competition after winning three years in a row at the annual SC conference on high performance computing and networking, NERSC’s networking and security staff continued to play a major role at SC2003 in Phoenix, helping to create an infrastructure providing the 7,500 attendees with high-speed wired and wireless connections. Bro was used to monitor the conference network for security breaches, and passwords sent unsecurely over the network were displayed on a large plasma screen in the exhibits hall. The display continually attracted viewers, looking to see if their passwords had been grabbed. Another screen showed the number of complete and incomplete network connections emanating from the conference. The strong interest in Bro was demonstrated by a standing-room-only turnout for a presentation on Bro in the Berkeley Lab booth shortly before the end of the conference.

FIGURE 2. This team won top honors for the highest performing application in the SC2002 Bandwidth Challenge competition, moving data at a peak speed of 16.8 gigabits per second. The team included staff from Berkeley Lab’s Computational Research (CRD), Information Technologies and Services (ITSD), and NERSC Center divisions. Shown here are: (back row) Al Early (ITSD), Mike Bennett (ITSD), Brian Tierney (CRD), John Christman (ITSD); (middle row) Jason Lee (CRD), Wes Bethel (CRD), Cary Whitney (NERSC); (front row) John Shalf (CRD) and Shane Canon (NERSC). Chip Smith and Eli Dart of NERSC are not shown.


ADVANCED DEVELOPMENT

In the field of high-performance computing, there is a saying that if you are standing still, you are really falling behind. On the other hand, just grabbing the newest technology in the hope of keeping ahead is not necessarily a prudent course. Knowing what is looming just over the horizon, and helping shape it before it comes completely into view, has long been a hallmark of the NERSC staff.

By working in close partnership with leading developers of high-performance computing systems and software, NERSC staff share their collective expertise and experience to help ensure that new products can meet the demands of computational scientists. Vendors benefit from this relationship in that NERSC serves as a real-world testbed for innovative ideas and technologies. This section presents current examples of how NERSC is working in partnership to advance the state of scientific computing.

CREATING SCIENCE-DRIVEN COMPUTER ARCHITECTURE

In 2002 Lawrence Berkeley and Argonne national laboratories, in close collaboration with IBM, proposed a strategy to create a new class of computational capability that is optimal for science. The white paper “Creating Science-Driven Computer Architecture: A New Path to Scientific Leadership” (http://www.nersc.gov/news/blueplanet.html) envisioned development partnerships in which teams of scientific applications specialists and computer scientists would work with computer architects from major U.S. vendors to create hardware and software environments that will allow scientists to extract the maximum performance and capability from the hardware. The paper included a conceptual design for a system called Blue Planet that would enhance IBM’s Power series architecture for scientific computing.

The first implementation of this strategy in 2003 involved a series of meetings between NERSC, IBM, and Lawrence Livermore National Laboratory (LLNL). IBM had won a contract to build the ASCI Purple system at LLNL, the world’s first supercomputer capable of up to 100 teraflop/s, powered by 12,544 Power 5 processors. LLNL and IBM scientists thought that they might enhance the performance of ASCI Purple by incorporating key elements of the Blue Planet system architecture, particularly the node configuration, and so NERSC staff joined in discussions to work out the details. The resulting architecture enhancements are expected to benefit all scientific users of future IBM Power series systems.

MAKING ARCHIVES MORE ACCESSIBLE

As a long-time collaborator in HPSS development, NERSC made two major contributions this year. The first was the Grid authentication code for the Parallel FTP (PFTP) file transfer protocol; this enhancement to PFTP is now being distributed by IBM. The second contribution, in collaboration with Argonne National Laboratory, is a testbed for the Globus/Grid interface with HPSS.

NERSC is also collaborating with the DOE’s Joint Genome Institute (JGI) on a project to optimize genomic data storage for wide accessibility. This project will distribute and enhance access to data generated at JGI’s Production Genome Facility (PGF), and will archive the data on NERSC’s HPSS. The PGF is one of the world’s largest public DNA sequencing facilities, producing 2 million trace data files each month (25 KB each), 100 assembled projects per month (50–250 MB), and several very large assembled projects per year (~50 GB). NERSC storage will make this data more quickly and easily available to genome researchers. Storage will be configured to hold more data online and intelligently move the data to enable high-speed access.
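
Simple arithmetic on the production rates quoted above (an illustration of scale, not additional data from the report) shows the monthly volume involved:

    2 \times 10^{6} \ \text{traces/month} \times 25 \ \text{KB} \approx 50 \ \text{GB/month of raw trace files},
    100 \ \text{projects/month} \times (50\text{--}250 \ \text{MB}) \approx 5\text{--}25 \ \text{GB/month of assemblies},

plus several assembled projects of roughly 50 GB each per year.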

NERSC and JGI staff will work closely to improve the organization and clustering of DNA trace data so that portions of an assembled dataset can be downloaded instead of the entire project, which could contain millions of trace files. The Web interface being developed will for the first time correlate the DNA trace data with portions of the assembled sequence, simplifying the selection of trace data, reducing the amount of data that must be handled, and enhancing researchers’ ability to generate partial comparisons. Cached Web servers and file systems will extend the archive to the wide area network, distributing data across multiple platforms based on data characteristics, then unifying data access through one interface. When this project is completed, its storage and data movement techniques, together with its data interfaces, could serve as a model for other read-only data repositories.

SHAPING THE FUTURE OF FILE STORAGE

In 2001, NERSC began a project to develop a system to streamline its file storage system. In a typical HPC environment, each large computational system has its own local disk along with access to additional network-attached storage and archival storage servers. Such an environment prevents the consolidation of storage between systems, thus limiting the amount of working storage available on each system to its local disk capacity. The result is an unnecessary replication of files on multiple systems, an increased workload on users to manage their files, and a burden on the infrastructure to support file transfers between the various systems.

NERSC is developing a solution using existing and emerging technologies to overcome these inefficiencies. The Global Unified Parallel File System (GUPFS) project aims to provide a scalable, high-performance, high-bandwidth, shared-disk file system for use by all of NERSC’s high-performance production computational systems. GUPFS will provide a unified file namespace for these systems and will be integrated with HPSS. Storage servers, accessing the consolidated storage through the GUPFS shared-disk file systems, will provide hierarchical storage management, backup, and archival services. An additional goal is to distribute GUPFS-based file systems to geographically remote facilities as native file systems over the DOE Science Grid.

When fully implemented, GUPFS will eliminate unnecessary data replication, simplify the user environment, provide better distribution of storage resources, and permit the management of storage as a separate entity while minimizing impacts on the computational systems.

The major enabling components of this envisioned environment are a high-performance shared-disk file system and a cost-effective, high performance storage-area network (SAN). These emerging technologies, while evolving rapidly, are not targeted towards the needs of high-performance scientific computing. The GUPFS project intends to encourage the development of these technologies to support HPC needs through collaborations with other institutions and vendors, and through active development.

During the first three years of the project, the GUPFS team (Figure 1) tested and evaluated shared-disk file systems, SAN technologies, and other components of the GUPFS environment. This investigation included open source and commercial shared-disk file systems, new SAN fabric technologies as they became available, SAN and file system distribution over the wide-area network, HPSS integration, and file system performance and scaling.

FIGURE 1. Rei Lee, Cary Whitney, Will Baird, and Greg Butler, along with (not shown) Mike Welcome (Computational Research Division) and Craig Tull, are working to develop a high performance file storage system that is easy to use and independent of computing platforms.

In 2003, following extended trials with multiple storage vendors, NERSC selected the YottaYotta NetStorager system to address the scalability and performance demands of the GUPFS project. A major goal is to provide thousands of heterogeneous hosts with predictable and scalable performance to a common pool of storage resources with no increase in operational costs or complexity.

ENABLING GRID-BASED VISUALIZATION

The Berkeley Lab/NERSC visualization group is developing a Web portal interface called VisPortal that will deliver Grid-based visualization and data analysis capabilities from an easy-to-use, centralized point of access. Using standard Globus/Grid middleware and off-the-shelf Web automation, VisPortal hides the underlying complexity of resource selection and application management among heterogeneous distributed computing and storage resources. From a single access point, the user can browse through the data, wherever it is located, access computing resources, and launch all the components of a distributed visualization application without having to know all the details of how the various systems are configured. The portal can also automate complex workflows like the distributed generation of MPEG movies or the scheduling of file transfers. The VisPortal developers are currently working with NERSC users to improve the workflow management and robustness of the prototype system.

A TESTBED FOR CLUSTER SOFTWARE

Three years ago, Berkeley Lab purchased a 174-processor Linux cluster, named Alvarez in honor of Berkeley Lab Nobel Laureate Luis Alvarez, to provide a system testbed for NERSC and other staff to evaluate cluster architectures as an option for procurement of large production computing systems. Recently the system has also proven useful for vendors looking to test and develop software for large-scale systems, such as Unlimited Linux and operating system components for Red Storm.

Unlimited Scale, Inc. (USI) used Alvarez to test a beta version of Unlimited Linux, an operating system developed to provide production capabilities on commodity processors and interconnects that match or exceed those of the Cray T3E supercomputer, once the workhorse of the NERSC Center. USI asked NERSC to assess Unlimited Linux’s operational and administrative functionality, performance, desired features and enhancements, and user functionality. A few minor problems were discovered in adapting Unlimited Linux to the Alvarez network address scheme, and USI will change its configuration tools to make it easier to install the system on existing hardware configurations. Performance benchmarks showed that Unlimited Linux compares favorably to the IBM system software that runs on Alvarez.


Cray Inc. was given access to Alvarez to test the operating system under development for the Red Storm supercomputer being developed by Cray Inc. and DOE's Sandia National Laboratories for the National Nuclear Security Administration's Advanced Simulation and Computing (ASCI) program. The first round involved testing system software, administration, and system scalability. The Red Storm Linux-based software allows simulation of multiple virtual processors per physical processor, and simulations of up to 1,000 processors were run on Alvarez. A set of scripts for system configuration and easier job launch was developed, and experiments provided some initial data on launch time for programs with small and large executable sizes. The team found and fixed approximately 20 system bugs, from the portals layer up through MPI and system launch.
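
The launch-time experiments follow a simple pattern: start a job whose program does essentially nothing beyond initializing, and time the launch as the process count and executable size vary. The sketch below is a generic stand-in for such scripts, not the Red Storm test harness itself; mpirun and the two test binaries are hypothetical placeholders.

import subprocess
import time

# Hypothetical test programs that exit right after MPI_Init, so the measured
# time is dominated by job launch rather than by computation.
BINARIES = {"small": "./init_only_small", "large": "./init_only_large"}
PROCESS_COUNTS = [16, 64, 256, 1000]

for label, exe in BINARIES.items():
    for nproc in PROCESS_COUNTS:
        start = time.time()
        subprocess.run(["mpirun", "-np", str(nproc), exe], check=True)
        print("%-5s executable, %4d processes: %6.2f s"
              % (label, nproc, time.time() - start))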

In the second stage of testing, Cray's I/O team is working with NERSC to use the GUPFS testbed platform, as well as GUPFS joined together with Alvarez, for scalability testing of two potential Red Storm file systems, PVFS and Lustre. They will also collaborate with NERSC staff who are working on global file systems that will eventually be deployed within the NERSC Center.

APPENDIX A

NERSC POLICY BOARD

ROBERT J. GOLDSTON

Princeton Plasma Physics Laboratory

STEPHEN C. JARDIN

(ex officio, NERSC Computational Review Panel)
Princeton Plasma Physics Laboratory

SIDNEY KARIN

University of California, San Diego

WILLIAM J. MADIA

Battelle Memorial Institute

PAUL C. MESSINA

Argonne National Laboratory

ALBERT NARATH

Lockheed-Martin Corporation (retired)

DANIEL A. REED

University of North Carolina

ROBERT D. RYNE

(ex officio, NERSC Users Group Chair)
Lawrence Berkeley National Laboratory

TETSUYA SATO

Earth Simulator Center/Japan Marine Science and Technology Center

STEPHEN L. SQUIRES

Hewlett-Packard

MICHAEL S. WITHERELL

Fermi National Accelerator Laboratory


APPENDIX B

NERSC COMPUTATIONAL REVIEW PANEL (CORP)

COMPUTATIONAL SCIENCE

HIGH ENERGY PHYSICS
Robert Sugar
University of California, Santa Barbara

ACCELERATOR PHYSICS
Robert Ryne
Lawrence Berkeley National Laboratory

NUCLEAR AND ASTROPHYSICS
Douglas Swesty
State University of New York at Stony Brook

CHEMISTRY
Robert Harrison
Oak Ridge National Laboratory

MATERIAL SCIENCES
David Turner
Ames Laboratory

Michael Weinert
Brookhaven National Laboratory

EARTH AND ENGINEERING SCIENCES
David Alumbaugh
University of Wisconsin

FUSION ENERGY
Stephen Jardin
Princeton Plasma Physics Laboratory

Donald Spong
Oak Ridge National Laboratory

CLIMATE RESEARCH AND ENVIRONMENTAL SCIENCES
Ferdinand Baer
University of Maryland

Douglas Rotman
Lawrence Livermore National Laboratory

LIFE SCIENCES
Brian Hingerty
Oak Ridge National Laboratory

COMPUTER SCIENCE AND APPLIED MATHEMATICS
David Bailey
Lawrence Berkeley National Laboratory

APPLIED MATHEMATICS

Jesse Barlow
Pennsylvania State University

Christopher Beattie
Virginia Polytechnic Institute and State University

Ralph Byers
University of Kansas

Marc Day
Lawrence Berkeley National Laboratory

Eric de Sturler
University of Illinois at Urbana-Champaign

Tony Drummond
Lawrence Berkeley National Laboratory

Joseph Grcar
Lawrence Berkeley National Laboratory

Van Henson
Lawrence Livermore National Laboratory

Ilse Ipsen
North Carolina State University

Andrew Knyazev
University of Colorado at Denver

Robert Ward
University of Tennessee

Chao Yang
Lawrence Berkeley National Laboratory



COMPUTER SCIENCE

Jim Ahrens
Los Alamos National Laboratory

Robert Armstrong
Sandia National Laboratories

David Bernholdt
Syracuse University

Brett Bode
Ames Laboratory

Sung-Eun Choi
Los Alamos National Laboratory

Erik DeBenedictis
Sandia National Laboratories

William Gropp
Argonne National Laboratory

James Kohl
Oak Ridge National Laboratory

John Mellor-Crummey
Rice University

Shirley Moore
University of Tennessee

Robert Numrich
University of Minnesota

Allan Snavely
San Diego Supercomputer Center

Erich Strohmaier
Lawrence Berkeley National Laboratory

Rajeev Thakur
Argonne National Laboratory

Mary Zosel
Lawrence Livermore National Laboratory

NERSC STAFF REVIEWERS

HIGH ENERGY AND NUCLEAR PHYSICS
Iwona Sakrejda

LATTICE QCD
Parry Husbands

ACCELERATOR PHYSICS
Andreas Adelmann

ASTROPHYSICS
Julian Borrill
Peter Nugent

CHEMISTRY
Jonathan Carter
David Skinner

MATERIAL SCIENCES
Andrew Canning
Lin-Wang Wang

EARTH AND ENGINEERING SCIENCES
Osni Marques

FUSION ENERGY
Thomas DeBoni
Jodi Lamoureux

CLIMATE RESEARCH AND ENVIRONMENTAL SCIENCES
Chris Ding
Helen He
Michael Wehner

LIFE SCIENCES
Adrian Wong
Jonathan Carter

APPLIED MATHEMATICS AND COMPUTER SCIENCE
Esmond Ng


APPENDIX C

NERSC USERS GROUP EXECUTIVE COMMITTEE

MEMBERS AT LARGE

Ahmet Y. Aydemir,* University of Texas at Austin

David J. Dean, Oak Ridge National Laboratory

Stephen C. Jardin, Princeton Plasma Physics Laboratory

Ricky A. Kendall, Ames Laboratory, Iowa State University

Gregory Kilcup,* The Ohio State University

Donald Sinclair,** Argonne National Laboratory

Donald A. Spong,** Oak Ridge National Laboratory

John A. Taylor, Argonne National Laboratory

OFFICE OF ADVANCED SCIENTIFIC COMPUTING RESEARCH

David Bernholdt,** Oak Ridge National Laboratory

Ahmed Ghoniem,* Massachusetts Institute of Technology

David Keyes, Old Dominion University

Xiaolin Li,* State University of New York, Stony Brook

David Turner,** Ames Laboratory

OFFICE OF BASIC ENERGY SCIENCES

Feng Liu,** University of Utah

G. Malcolm Stocks, Oak Ridge National Laboratory

Theresa L. Windus, Pacific Northwest National Laboratory

OFFICE OF BIOLOGICAL AND ENVIRONMENTAL RESEARCH

Carmen M. Benkovitz,* Brookhaven National Laboratory

Michael Colvin,** University of California, Merced, and Lawrence Livermore National Laboratory

Douglas Rotman (NUG Vice Chair), Lawrence Livermore National Laboratory

Vincent Wayland, National Center for Atmospheric Research

OFFICE OF FUSION ENERGY SCIENCES

Stephane Ethier,** Princeton Plasma Physics Laboratory

Alex Friedman, Lawrence Livermore and Lawrence Berkeley National Laboratories

Jean-Noel Leboeuf,* University of California, Los Angeles

Carl Sovinec,** University of Wisconsin, Madison

Donald A. Spong,* Oak Ridge National Laboratory

OFFICE OF HIGH ENERGY AND NUCLEAR PHYSICS

Kwok Ko, Stanford Linear Accelerator Center

Douglas L. Olson, Lawrence Berkeley National Laboratory

Robert D. Ryne (NUG Chair), Lawrence Berkeley National Laboratory

* Outgoing members

** Incoming members



APPENDIX D

SUPERCOMPUTING ALLOCATIONS COMMITTEE

The Supercomputing Allocations Committee (SAC) is responsible for setting the policies associated with the utilization of computing, data storage, and other auxiliary services available to DOE Office of Science (SC) researchers and otherwise coordinating SC's computational projects. The Committee sets the distribution of NERSC and other available Office of Advanced Scientific Computing Research resources for scientific computing applications every year.

COMMITTEE CHAIRMAN

David Goodwin

OFFICE OF ADVANCED SCIENTIFIC COMPUTING RESEARCH

Daniel Hitchcock

OFFICE OF BASIC ENERGY SCIENCES

Robert Astheimer

OFFICE OF BIOLOGICAL AND ENVIRONMENTAL RESEARCH

Michael Riches

OFFICE OF FUSION ENERGY SCIENCES

Michael Crisp

OFFICE OF HIGH ENERGY PHYSICS AND OFFICE OF NUCLEAR PHYSICS

David Goodwin


APPENDIX E

OFFICE OF ADVANCED SCIENTIFIC COMPUTING RESEARCH
Mathematical, Information, and Computational Sciences Division

The primary mission of the Advanced Scientific Computing Research (ASCR) program, which is carried out by the Mathematical, Information, and Computational Sciences (MICS) subprogram, is to discover, develop, and deploy the computational and networking tools that enable researchers in the scientific disciplines to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. To accomplish this mission, the program fosters and supports fundamental research in advanced scientific computing—applied mathematics, computer science, and networking—and operates supercomputer, networking, and related facilities. In fulfilling this primary mission, the ASCR program supports the Office of Science Strategic Plan's goal of providing extraordinary tools for extraordinary science as well as building the foundation for the research in support of the other goals of the strategic plan. In the course of accomplishing this mission, the research programs of ASCR have played a critical role in the evolution of high performance computing and networks.

Berkeley Lab thanks the program managers with direct responsibility for the NERSC program and the MICS research projects described in this report:

C. EDWARD OLIVER

Associate Director, Office of Advanced Scientific Computing Research, and Acting MICS Director

ALAN J. LAUB AND WILLIAM H. KIRCHOFF

SciDAC

WALTER M. POLANSKY

Acting MICS Director (until March 2003)

WILLIAM H. (BUFF) MINER, JR.

NERSC and Applications Program Manager

DANIEL HITCHCOCK

ASCR Senior Technical Advisor

FREDERICK C. JOHNSON

Computer Science Research

GARY M. JOHNSON

Advanced Computing Research Testbeds

THOMAS D. NDOUSSE-FETTER

Networking

CHARLES H. ROMINE

Applied Mathematics and Statistics

MARY ANNE SCOTT

Collaboratory Research

GEORGE R. SEWERYNIAK

ESnet

LINDA TWENTY

ASCR Program Analyst

JOHN R. VAN ROSENDALE

Computer Science Research



APPENDIX F

ADVANCED SCIENTIFIC COMPUTING ADVISORY COMMITTEE

The Advanced Scientific Computing Advisory Committee (ASCAC) provides valuable, independent advice to the Department of Energy on a variety of complex scientific and technical issues related to its Advanced Scientific Computing Research program. ASCAC's recommendations include advice on long-range plans, priorities, and strategies to address more effectively the scientific aspects of advanced scientific computing, including the relationship of advanced scientific computing to other scientific disciplines, and maintaining appropriate balance among elements of the program. The Committee formally reports to the Director, Office of Science. The Committee primarily includes representatives of universities, national laboratories, and industries involved in advanced computing research. Particular attention is paid to obtaining a diverse membership with a balance among scientific disciplines, institutions, and geographic regions.

CO-CHAIRS

Margaret H. Wright
Courant Institute of Mathematical Sciences
New York University

John W. D. Connolly
Center for Computational Sciences
University of Kentucky

MEMBERS

Jill P. Dahlburg
Naval Research Laboratory

David J. Galas
Keck Graduate Institute

Roscoe C. Giles
Center for Computational Science
Boston University

James J. Hack
National Center for Atmospheric Research

Helene E. Kulsrud
Center for Communications Research

William A. Lester, Jr.
Department of Chemistry
University of California, Berkeley

Gregory J. McRae
Department of Chemical Engineering
Massachusetts Institute of Technology

Thomas A. Manteuffel
Department of Applied Mathematics
University of Colorado at Boulder

Karen R. Sollins
Massachusetts Institute of Technology

Ellen B. Stechel
Ford Motor Company

Stephen Wolff
CISCO Systems



APPENDIX G

ORGANIZATIONAL ACRONYMS

DOE U.S. Department of Energy

SC Office of Science

ASCR-MICS Office of Advanced Scientific Computing Research, Mathematical, Information, and Computational Sciences Division

BER Office of Biological and Environmental Research

BES Office of Basic Energy Sciences

FES Office of Fusion Energy Sciences

HEP Office of High Energy Physics

NP Office of Nuclear Physics

SciDAC Scientific Discovery through Advanced Computing Program

3M 3M Company

AEI Albert Einstein Institute/Max Planck Institute for Gravitation Physics (Germany)

AF Aventis Foundation

AFOSR Air Force Office of Scientific Research

ARO Army Research Office

BMBF Bundesministerium für Bildung und Forschung (Germany)

BWF Burroughs Wellcome Fund

CAG Cellzome AG

CDWR California Department of Water Resources

CNR Consiglio Nazionale delle Ricerche (Italy)

CNRS Centre National de la Recherche Scientifique (France)

DAAD German Academic Exchange Service

DARPA Defense Advanced Research Projects Agency

DOD Department of Defense

DRCRF Damon Runyon Cancer Research Foundation

DFG Deutsche Forschungsgemeinschaft



DRA Danish Research Agency

EMBL European Molecular Biology Laboratory

EMBO European Molecular Biology Organization

EPSRC Engineering and Physical Sciences Research Council (UK)

ESOP European Electron Scattering Off Confined Partons Network

EU European Union

FAPESP Fundação de Amparo à Pesquisa do Estado de São Paulo, Brazil

FBF France-Berkeley Fund

GA General Atomics

GNMF German National Merit Foundation

HFS Holderbank Foundation Switzerland

HFSP Human Frontier Science Program

IBM SUR IBM Shared University Research Grant

IN2P3 Institut National de la Physique Nucléaire et de la Physique des Particules (France)

INSU Institut National des Sciences de l’Univers (France)

Intel Intel Corporation

JIHIR Joint Institute for Heavy Ion Research

KBN Polish State Committee for Science Research

LANL Los Alamos National Laboratory

LBNL Lawrence Berkeley National Laboratory

LLNL Lawrence Livermore National Laboratory

LSU Louisiana State University

MCYT Spanish Ministry of Science and Technology

MEC Ministry of Education of China

MEVU William A. and Nancy F. McMinn Endowment at Vanderbilt University

MEXT Japan Ministry of Education, Culture, Sports, Science, and Technology



MIT Massachusetts Institute of Technology

MIUR Ministero dell’Istruzione, dell’Università e della Ricerca (Italy)

NASA National Aeronautics and Space Administration

NCI National Cancer Institute

NEDO New Energy Development Organization (Japan)

NIH National Institutes of Health

NNSFC National Natural Science Foundation of China

NOAA National Oceanic and Atmospheric Administration

NRC National Research Council

NSERC Natural Sciences and Engineering Research Council of Canada

NSF National Science Foundation

NSFC National Science Foundation of China

ONR Office of Naval Research

ORNL Oak Ridge National Laboratory

PECASE Presidential Early Career Award for Scientists and Engineers

PNC Programme National de Cosmology (France)

PPARC Particle Physics and Astronomy Research Council (UK)

RFBR Russian Foundation for Basic Research

RMST Russian Ministry of Science and Technology

SNS Spallation Neutron Source Project

UC University of Cyprus



DISCLAIMER

This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or The Regents of the University of California.

Ernest Orlando Lawrence Berkeley National Laboratory is an equal opportunity employer.

Published by the Berkeley Lab Technical and Electronic Information Department in collaboration with NERSC researchers and staff. JO#8798

EDITOR AND PRINCIPAL WRITER: John Hules
CONTRIBUTING WRITERS: Rich Albert, Jon Bashor, Christopher Jones, Paul Preuss, Lynn Yarris
