Theory and Modeling in Nanoscience
Report of the May 10–11, 2002, Workshop Conducted by the Basic Energy Sciences and Advanced Scientific Computing Advisory Committees to the Office of Science, Department of Energy



Report of the May 10–11, 2002, Workshop Conducted by the Basic Energy Sciences and Advanced Scientific Computing Advisory Committees to the Office of Science, Department of Energy

Theory and Modeling in Nanoscience


Cover illustrations:

TOP LEFT: Ordered lubricants confined to nanoscale gap (Peter Cummings).

BOTTOM LEFT: Hypothetical spintronic quantum computer (Sankar Das Sarma and Bruce Kane).

TOP RIGHT: Folded spectrum method for free-standing quantum dot (Alex Zunger).

MIDDLE RIGHT: Equilibrium structures of bare and chemically modified gold nanowires (Uzi Landman).

BOTTOM RIGHT: Organic oligomers attracted to the surface of a quantum dot (F. W. Starr and S. C. Glotzer).


Theory and Modeling in Nanoscience

Report of the May 10–11, 2002, Workshop Conducted by the Basic Energy Sciences and Advanced Scientific Computing Advisory Committees to the Office of Science, Department of Energy

Organizing Committee

C. William McCurdy
Co-Chair and BESAC Representative
Lawrence Berkeley National Laboratory

Berkeley, CA 94720

Ellen Stechel
Co-Chair and ASCAC Representative
Ford Motor Company
Dearborn, MI 48121

Peter Cummings
The University of Tennessee

Knoxville, TN 37996

Bruce Hendrickson
Sandia National Laboratories

Albuquerque, NM 87185

David Keyes
Old Dominion University

Norfolk, VA 23529

This work was supported by the Director, Office of Science, Office of Basic Energy Sciences and Office of Advanced Scientific Computing Research, of the U.S. Department of Energy.


Table of Contents

Executive Summary ..... 1
I. Introduction ..... 3
   A. The Purpose of the Workshop ..... 3
   B. Parallel Dramatic Advances in Experiment and Theory ..... 3
   C. The Central Challenge ..... 5
   D. Key Barriers to Progress in Theory and Modeling in Nanoscience ..... 6
   E. Consensus Observations ..... 7
   F. Opportunity for an Expanded Role for the Department of Energy ..... 7
   G. Need for Computational Resources and Readiness to Use Them ..... 8
   H. Summary of Specific Challenges and Opportunities ..... 9
   Giant Magnetoresistance in Magnetic Storage ..... 11
II. Theory, Modeling, and Simulation in Nanoscience ..... 12
   A. Nano Building Blocks ..... 14
      Transport in Nanostructures: Electronic Devices ..... 14
      Optical Properties on the Nanoscale: Optoelectronic Devices ..... 15
      Coherence/Decoherence Tunneling: Quantum Computing ..... 15
      Soft/Hard Matter Interfaces: Biosensors ..... 15
      Spintronics: Information Technology ..... 16
      Implications for Theory and Modeling ..... 16
   B. Complex Nanostructures and Interfaces ..... 17
   C. Dynamics, Assembly, and Growth of Nanostructures ..... 21
III. The Role of Applied Mathematics and Computer Science in Nanoscience ..... 24
   A. Bridging Time and Length Scales ..... 26
   B. Fast Algorithms ..... 31
      Linear Algebra in Electronic Structure Calculations ..... 32
      Monte Carlo Techniques ..... 33
      Data Exploration and Visualization ..... 34
      Computational Geometry ..... 35
   C. Optimization and Predictability ..... 35
      Optimization ..... 35
      Predictability ..... 37
      Software ..... 37
Appendix A: Workshop Agenda ..... 39
Appendix B: Workshop Participants ..... 41


Executive Summary

On May 10 and 11, 2002, a workshop entitled “Theory and Modeling in Nanoscience” was held in San Francisco to identify challenges and opportunities for theory, modeling, and simulation in nanoscience and nanotechnology and to investigate the growing and promising role of applied mathematics and computer science in meeting those challenges. A broad selection of university and national laboratory scientists contributed to the workshop, which included scientific presentations, a panel discussion, breakout sessions, and short white papers.

Revolutionary New Capabilities in Theory, Modeling, and Simulation

During the past 15 years, the fundamental techniques of theory, modeling, and simulation have undergone a revolution that parallels the extraordinary experimental advances on which the new field of nanoscience is based. This period has seen the development of density functional algorithms, quantum Monte Carlo techniques, ab initio molecular dynamics, advances in classical Monte Carlo methods and mesoscale methods for soft matter, and fast-multipole and multigrid algorithms. Dramatic new insights have come from the application of these and other new theoretical capabilities. Simultaneously, advances in computing hardware increased computing power by four orders of magnitude. The combination of new theoretical methods and increased computing power has made it possible to simulate systems with millions of degrees of freedom.

Unmistakable Promise of Theory, Modeling, and Simulation

The application of new and extraordinary experimental tools to nanosystems has created an urgent need for a quantitative understanding of matter at the nanoscale. The absence of quantitative models that describe newly observed phenomena increasingly limits progress in the field. A clear consensus emerged at the workshop that without new, robust tools and models for the quantitative description of structure and dynamics at the nanoscale, the research community would miss important scientific opportunities in nanoscience. The absence of such tools would also seriously inhibit widespread applications in fields of nanotechnology ranging from molecular electronics to biomolecular materials. To realize the unmistakable promise of theory, modeling, and simulation in overcoming fundamental challenges in nanoscience requires new human and computer resources.

Fundamental Challenges and Opportunities

With each fundamental intellectual and computational challenge that must be met in nanoscience come opportunities for research and discovery utilizing the approaches of theory, modeling, and simulation. In the broad topical areas of (1) nano building blocks (nanotubes, quantum dots, clusters, and nanoparticles), (2) complex nanostructures and nano-interfaces, and (3) the assembly and growth of nanostructures, the workshop identified a large number of theory, modeling, and simulation challenges and opportunities. Among them are:

• to bridge electronic through macroscopic length and time scales

• to determine the essential science of transport mechanisms at the nanoscale

• to devise theoretical and simulation approaches to study nano-interfaces, which dominate nanoscale systems and are necessarily highly complex and heterogeneous


• to simulate with reasonable accuracy the optical properties of nanoscale structures and to model nanoscale optoelectronic devices

• to simulate complex nanostructures involving “soft” biologically or organically based structures and “hard” inorganic ones, as well as nano-interfaces between hard and soft matter

• to simulate self-assembly and directed self-assembly

• to devise theoretical and simulation approaches to quantum coherence, decoherence, and spintronics

• to develop self-validating and benchmarking methods

The Role of Applied Mathematics

Since mathematics is the language in which theory is expressed and advanced, developments in applied mathematics are central to the success of theory, modeling, and simulation for nanoscience, and the workshop identified important roles for new applied mathematics in the above-mentioned challenges. Novel applied mathematics is required to formulate new theory and to develop new computational algorithms applicable to complex systems at the nanoscale.

The discussion of applied mathematics at the workshop focused on three areas that are directly relevant to the central challenges of theory, modeling, and simulation in nanoscience: (1) bridging time and length scales, (2) fast algorithms, and (3) optimization and predictability. Each of these broad areas has a recent track record of developments from the applied mathematics community. Recent advances range from fundamental approaches, like mathematical homogenization (whereby reliable coarse-scale results are made possible without detailed knowledge of finer scales), to new numerical algorithms, like the fast-multipole methods that make very large scale molecular dynamics calculations possible. Some of the mathematics of likely interest (perhaps the most important mathematics of interest) is not fully knowable at present, but it is clear that collaborative efforts between scientists in nanoscience and applied mathematicians can yield significant advances central to a successful national nanoscience initiative.
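The scaling barrier that fast-multipole methods remove is easy to see in code: a naive pairwise summation of a Coulomb-like potential touches every pair of particles, so its cost grows quadratically with particle count. The sketch below is a hypothetical illustration (the 1/r pair potential and the particle counts are arbitrary choices for demonstration, not taken from this report):

```python
import random

random.seed(0)

def direct_potential(points):
    # Naive pairwise summation of a 1/r potential: N*(N-1)/2 interactions, O(N^2).
    total = 0.0
    n = len(points)
    for i in range(n):
        xi, yi, zi = points[i]
        for j in range(i + 1, n):
            xj, yj, zj = points[j]
            r = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
            total += 1.0 / r
    return total

def pair_count(n):
    # Number of interactions the direct method evaluates. Fast-multipole
    # methods group distant particles so the work grows roughly linearly in n.
    return n * (n - 1) // 2

points = [(random.random(), random.random(), random.random()) for _ in range(200)]
print(direct_potential(points))  # feasible for 200 particles
print(pair_count(10**6))         # ~5 x 10^11 pairs: infeasible directly
```

For a million particles the direct method would evaluate roughly 5 × 10^11 pair interactions per step, which is why multipole-type algorithms are essential for the million-atom simulations cited elsewhere in this report.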

The Opportunity for a New Investment

The consensus of the workshop is that the country’s investment in the national nanoscience initiative will pay greater scientific dividends if it is accelerated by a new investment in theory, modeling, and simulation in nanoscience. Such an investment can stimulate the formation of alliances and teams of experimentalists, theorists, applied mathematicians, and computer and computational scientists to meet the challenge of developing a broad quantitative understanding of structure and dynamics at the nanoscale.

The Department of Energy is uniquely situated to build a successful program in theory, modeling, and simulation in nanoscience. Much of the nation’s experimental work in nanoscience is already supported by the Department, and new facilities are being built at the DOE national laboratories. The Department also has an internationally regarded program in applied mathematics, and much of the foundational work on mathematical modeling and computation has emerged from DOE activities. Finally, the Department has unique resources and experience in high performance computing and algorithms. The combination of these areas of expertise makes the Department of Energy a natural home for nanoscience theory, modeling, and simulation.


I. Introduction

A. The Purpose of the Workshop

On May 10 and 11, 2002, a workshop entitled “Theory and Modeling in Nanoscience” was held in San Francisco, California, supported by the Offices of Basic Energy Sciences and Advanced Scientific Computing Research of the Department of Energy. The Basic Energy Sciences Advisory Committee and the Advanced Scientific Computing Advisory Committee convened the workshop to identify challenges and opportunities for theory, modeling, and simulation in nanoscience and nanotechnology, and additionally to investigate the growing and promising role of applied mathematics and computer science in meeting those challenges. The workshop agenda is reproduced in Appendix A.

A broad selection of university and national laboratory scientists were invited, and about fifty were able to contribute to the workshop. The participants are listed in Appendix B. There were scientific presentations, a panel discussion, and breakout sessions, together with written contributions in the form of short white papers from those participants from the DOE labs. This report is the result of those contributions and the discussions at the workshop.

This workshop report should be read in the context of other documents that define and support the National Nanotechnology Initiative. Those documents describe a broad range of applications that will benefit the principal missions of the Department of Energy, ranging from new materials and the energy efficiencies they make possible, to improved chemical and biological sensing. Key among those reports is the one from the Office of Basic Energy Sciences entitled “Nanoscale Science, Engineering and Technology Research Directions” (http://www.sc.doe.gov/production/bes/nanoscale.html), which points out the great need for theory and modeling. Other nanoscience documents from the Department of Energy can be found at http://www.er.doe.gov/production/bes/NNI.htm, and a number of reports from other agencies and groups are linked to the Web site of the National Nanotechnology Initiative, http://www.nano.gov/.

B. Parallel Dramatic Advances in Experiment and Theory

The context of the workshop was apparent at the outset. The rapid rise of the field of nanoscience is due to the appearance over the past 15 years of a collection of new experimental techniques that have made manipulation and construction of objects at the nanoscale possible. Indeed, the field has emerged from those new experimental techniques.

Some of those experimental methods, such as scanning tunneling microscopy, created new capabilities for characterizing nanostructures. The invention of atomic force microscopy produced not only a tool for characterizing objects at the nanoscale but one for manipulating them as well. The array of experimental techniques for controlled fabrication of nanotubes and nanocrystals, together with methods to fabricate quantum dots and wells, produced an entirely new set of elementary nanostructures. Combinatorial chemistry and genetic techniques have opened the door to the synthesis of new biomolecular materials and the creation of nano-interfaces and nano-interconnects between hard and soft matter. Nanoscience arose from the entire ensemble of these and other new experimental techniques, which have created the building blocks of nanotechnology.

Over the same 15-year period, the fundamental techniques of theory, modeling, and simulation that are relevant to matter at the nanoscale have undergone a revolution that has been no less stunning. The advances of computing hardware over this period are universally familiar, with computing power increasing by four orders of magnitude, as can be seen by comparing the Gordon Bell Prizes in 1988 (1 Gflop/s) and 2001 (11 Tflop/s). But as impressive as the increase in computing power has been, it is only part of the overall advance in theory, modeling, and simulation that has occurred over the same period. This has been the period in which:

• Density functional theory (DFT) transformed theoretical chemistry, surface science, and materials physics and has created a new ability to describe the electronic structure and interatomic forces in molecules with hundreds and sometimes thousands of atoms (Figure 1).

• Molecular dynamics with fast multipole methods for computing long-range interatomic forces has made accurate calculations possible on the dynamics of millions and sometimes billions of atoms.

• Monte Carlo methods for classical simulations have undergone a revolution, with the development of a range of techniques (e.g., parallel tempering, continuum configurational bias, and extended ensembles) that permit extraordinarily fast equilibration of systems with long relaxation times.

• New mesoscale methods (including dissipative particle dynamics and field-theoretic polymer simulation) have been developed for describing systems with long relaxation times and large spatial scales, and are proving useful for the rapid prototyping of nanostructures in multicomponent polymer blends.

• Quantum Monte Carlo methods now promise to provide nearly exact descriptions of the electronic structures of molecules.

• The Car-Parrinello method for ab initio molecular dynamics, with simultaneous computation of electronic wavefunctions and interatomic forces, has opened the way for exploring the dynamics of molecules in condensed media as well as complex interfaces.
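As one concrete illustration of these classical Monte Carlo advances, parallel tempering runs replicas of a system at several temperatures and occasionally swaps configurations between neighboring temperatures, letting hot replicas carry the cold ones over energy barriers. The following minimal sketch uses a toy one-dimensional double well as the “system”; the potential, temperature ladder, and sweep counts are illustrative choices, not taken from this report:

```python
import math
import random

random.seed(1)

def energy(x):
    # Toy double-well potential with minima at x = -1 and x = +1,
    # standing in for a rugged nanoscale energy landscape.
    return (x * x - 1.0) ** 2

def metropolis_sweep(x, beta, step=0.5, n_moves=100):
    # Ordinary Metropolis updates at inverse temperature beta.
    for _ in range(n_moves):
        x_new = x + random.uniform(-step, step)
        d_e = energy(x_new) - energy(x)
        if d_e <= 0.0 or random.random() < math.exp(-beta * d_e):
            x = x_new
    return x

# One replica per temperature; high-temperature replicas cross barriers easily.
betas = [0.2, 0.5, 1.0, 2.0, 5.0]
xs = [1.0] * len(betas)

for _ in range(200):
    xs = [metropolis_sweep(x, beta) for x, beta in zip(xs, betas)]
    # Replica-exchange step: swap neighboring configurations with the
    # standard acceptance probability min(1, exp[(b_i - b_j)(E_i - E_j)]).
    for i in range(len(betas) - 1):
        arg = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if arg >= 0.0 or random.random() < math.exp(arg):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

print(xs[-1])  # the coldest replica, which should sit near one of the wells
```

The swap move is what distinguishes parallel tempering from independent simulations: low-temperature replicas, which would otherwise equilibrate extremely slowly in systems with long relaxation times, inherit barrier crossings from the high-temperature replicas.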

The tools of theory have advanced as much as the experimental tools in nanoscience over the past 15 years. It has been a true revolution.

The rise of fast workstations, cluster computing, and new generations of massively parallel computers completes the picture of the transformation in theory, modeling, and simulation over the last decade and a half. Moreover, these hardware (and basic software) tools are continuing on the Moore’s Law exponential trajectory of improvement, doubling the computing power available on a single chip every 18 months. Computational Grids are emerging as the next logical extension of cluster and parallel computing.

The first and most basic consensus of the workshop is clear: Many opportunities for discovery will be missed if the new tools of theory, modeling, and simulation are not fully exploited to confront the challenges of nanoscience. Moreover, new investments by the DOE and other funding agencies will be required to exploit and develop these tools for effective application in nanoscience.

This consensus is not merely speculation, but is based on recent experience of the role of theory in the development of nanotechnology. Perhaps no example is more celebrated than the role of calculations based on density functional theory in the development of giant magnetoresistance in magnetic storage systems. The unprecedented speed with which this discovery was exploited in small hard disk drives for computers depended on a detailed picture from theoretical simulations of the electronic structure and electron (spin) transport in these systems. Some details of this remarkable story are given in a sidebar to this report (page 11).

Figure 1. Nanotubes break by first forming a bond-rotation 5-7-7-5 defect. An application of density functional theory and multigrid methods by Buongiorno Nardelli, Yakobson, and Bernholc, Phys. Rev. B and Phys. Rev. Letters (1998).

C. The Central Challenge

The discussions and presentations of the workshop identified many specific fundamental challenges for theory, modeling, and simulation in nanoscience. However, a central and basic challenge became clear. Because of the rapid advance of experimental investigations in this area, the need for quantitative understanding of matter at the nanoscale is becoming more urgent, and its absence is increasingly a barrier to progress in the field quite generally.

The central broad challenge that emerged from discussions at the workshop can be stated simply: Within five to ten years, there must be robust tools for quantitative understanding of structure and dynamics at the nanoscale, without which the scientific community will have missed many scientific opportunities as well as a broad range of nanotechnology applications.

The workshop audience was reminded in the opening presentation that the electronics industry will not risk deploying billions of devices based on molecular electronics, even when they can be built, unless they are thoroughly understood (Figure 2) and manufacturing processes are made predictable and controllable. The electronics industry must have new simulations and models for nanotechnology that are at least as powerful and predictive as the ones in use today for conventional integrated circuits before it can chance marketing molecular electronics devices for a myriad of applications. It can be argued that biological applications of nanotechnology will require the same level of quantitative understanding before they are widely applied.

Figure 2. Calculated current-voltage curve for a novel memory-switchable resistor with 5 µm × 5 µm junctions. (Stan Williams, Hewlett-Packard)

The fundamental theory and modeling on which industry will build those tools do not exist. While it is the province of industry to provide its own design tools, it is the role of basic science to provide the fundamental underpinnings on which they are based. Those fundamental tools for quantitative understanding are also necessary to the progress of the science itself.

D. Key Barriers to Progress in Theory and Modeling in Nanoscience

Much of the current mode of theoretical study in nanoscience follows the traditional separation of the practice of experiment from the practice of theory and simulation, both separate from the underpinning applied mathematics and computer science. This is not a new observation, nor does it apply only to theory, modeling, and simulation in nanoscience. Nonetheless, it is a particularly problematic issue for this field.

By its very nature, nanoscience involves multiple length and time scales as well as the combination of types of materials and molecules that have been traditionally studied in separate subdisciplines. For theory, modeling, and simulation, this means that fundamental methods that were developed in separate contexts will have to be combined and new ones invented. This is the key reason why an alliance of investigators in nanoscience with those in applied mathematics and computer science will be necessary to the success of theory, modeling, and simulation in nanoscience. A new investment in theory, modeling, and simulation in nanoscience should facilitate the formation of such alliances and teams of theorists, computational scientists, applied mathematicians, and computer scientists.

A second impediment to progress in theory, modeling, and simulation in nanoscience arises because theoretical efforts in separate disciplines are converging on this intrinsically multidisciplinary field. The specific barrier is the difficulty, given the present funding mechanisms and policies, of undertaking high-risk but potentially high-payoff research, especially if it involves expensive human or computational resources. At this early stage in the evolution of the field, it is frequently not at all clear what techniques from the disparate subdisciplines of condensed matter physics, surface science, materials science and engineering, theoretical chemistry, chemical engineering, and computational biology will be successful in the new context being created every week by novel experiments. The traditional degree of separation of the practice of experiment from the practice of theory further intensifies the challenge.

For these reasons, opportunities will be missed if new funding programs in theory, modeling, and simulation in nanoscience do not aggressively encourage highly speculative and risky research. At least one experimentalist at this workshop complained that high-risk and speculative theoretical and computational efforts are too rare in this field, and that sentiment was echoed by a number of theorists.


E. Consensus Observations

A broad consensus emerged at the workshop on several key observations.

• The role of theory, modeling, and simulation in nanoscience is central to the success of the National Nanotechnology Initiative.

• The time is right to increase federal investment in theory, modeling, and simulation in nanoscience to accelerate scientific and technological discovery.

• While there are many successful theoretical and computational efforts yielding new results today in nanoscience, there remain many fundamental intellectual and computational challenges that must be addressed to achieve the full potential of theory, modeling, and simulation.

• New efforts in applied mathematics, particularly in collaboration with theorists in nanoscience, are likely to play a key role in meeting those fundamental challenges as well as in developing computational algorithms that will become mainstays of computational nanoscience in the future.

F. Opportunity for an Expanded Role for the Department of Energy

The time is ripe for an initiative in theory and modeling of nanoscale phenomena not only because of the magnitude of the potential payoff, but also because the odds of achieving breakthroughs are rapidly improving.

Computational simulation is riding a hardware and software wave that has led to revolutionary advances in many fields represented by both continuous and discrete models. The ASCI and SciDAC initiatives of the DOE have fostered capabilities that make simulations with millions of degrees of freedom on thousands of processors for tens of days possible. National security decisions and other matters of policy affecting the environment and federal investment in unique facilities, such as lasers, accelerators, tokamaks, etc., are increasingly being reliably informed by simulation, as are corporate decisions of similar magnitude, such as where to drill for petroleum, what to look for in new pharmaceuticals, and how to design billion-dollar manufacturing lines. Universities have begun training a new generation of computational scientists who understand scientific issues that bridge disciplines from physical principles to computer architecture. Advances in extensibility and portability make major investments in reusable software attractive. Attention to validation and verification has repaired credibility gaps for simulation in many areas.

Nanotechnologists reaching toward simulation to provide missing understanding will find simulation technology meeting new challenges with substantial power. However, theorists and modelers working at the interface of nanotechnology and simulation technology are required to make the connections.

The workshop focused considerable attention on the role of applied mathematics in nanoscience. In his presentation, one of the scientists working on molecular dynamics studies of objects and phenomena at the nanoscale expressed the sentiment of the workshop effectively by saying that the role of applied mathematics should be to “make tractable the problems that are currently impossible.” As mathematics is the language in which models are phrased, developments in applied mathematics are central to the success of an initiative in theory and modeling for nanoscience. A key challenge in nanoscience is the range of length and time scales that need to be bridged. It seems likely that fundamentally new mathematics will be needed to meet this challenge.

The diverse phenomena within nanoscience will lead to a plethora of models with widely varying characteristics. For this reason, it is difficult to anticipate all the areas of mathematics which are likely to contribute, and serendipity will undoubtedly play a role. As models and their supporting mathematics mature, new algorithms will be needed to allow for efficient utilization and application of the models. These issues and some significant areas of activity are discussed in Section 3 of this report. But this discussion is not (and cannot be) fully comprehensive, due to the breadth of nanoscience and the unknowable paths that modeling will follow.

The Department of Energy is uniquely situated to build a successful program in theory, modeling, and simulation in nanoscience. Much of the experimental work in nanoscience is already supported by the Department, and new facilities are being built. The Department also has an internationally regarded program in applied mathematics, and much of the foundational work on mathematical modeling and computation has emerged from DOE activities. Finally, the Department has unique resources and experience in high performance computing and algorithms. The conjunction of these areas of expertise makes the Department of Energy a natural home for nanoscience theory and modeling.

G. Need for Computational Resources and Readiness to Use Them

The collection of algorithms and computational approaches developed over the last 15 years that form the basis of the revolution in modeling and simulation in areas relevant to nanoscience has made this community intense users of computing at all levels, including the teraflop/s level. The field has produced several Gordon Bell Prize winners for “fastest application code,” most recently in the area of magnetic properties of materials. That work, by a team led by Malcolm Stocks at Oak Ridge National Laboratory, is only one example of computing at the terascale in nanoscience.

The workshop did not focus much discussion specifically on the need for computational resources, since that need is ubiquitous and ongoing. The presentations showed computationally intensive research using the full range of resources, from massively parallel supercomputers to workstation and cluster computing. The sentiment that not enough resources are available to the community, at all levels of computing power, was nearly universally acknowledged. This community is ready for terascale computing, and it needs considerably more resources for workstation and cluster computing.

A significant amount of discussion in the breakout sessions focused on scalable algorithms, meaning algorithms that scale well with particle number or dimension as well as with increasing size of parallel computing hardware. The simulation and modeling community in nanoscience is one of the most computationally sophisticated in all of the natural sciences. That fact was demonstrated in the talks and breakout sessions of the workshop, and is displayed particularly in the sections of this report devoted to fast algorithms and well characterized nano building blocks.
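The notion of a scalable algorithm can be made concrete with a standard example from molecular simulation. The cell-list sketch below is our own illustration, not a workshop code, and all parameters are arbitrary: it counts interacting pairs in roughly linear time by binning particles into cells no smaller than the interaction cutoff, so each particle is compared only against its own and neighboring cells instead of against all N - 1 others.

```python
import random

def brute_force_pairs(pts, rc):
    """O(N^2) count of point pairs closer than the cutoff rc."""
    rc2 = rc * rc
    n = len(pts)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            if d2 < rc2:
                count += 1
    return count

def cell_list_pairs(pts, rc, box):
    """Roughly O(N) count: bin particles into cells of side >= rc and
    compare each particle only with the 27 neighboring cells."""
    ncell = max(1, int(box / rc))
    size = box / ncell          # cell side is guaranteed >= rc
    cells = {}
    for idx, (x, y, z) in enumerate(pts):
        key = (int(x / size), int(y / size), int(z / size))
        cells.setdefault(key, []).append(idx)
    rc2 = rc * rc
    count = 0
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nbr = (cx + dx, cy + dy, cz + dz)
                    for i in members:
                        for j in cells.get(nbr, ()):
                            if j <= i:   # count each unordered pair once
                                continue
                            d2 = sum((a - b) ** 2
                                     for a, b in zip(pts[i], pts[j]))
                            if d2 < rc2:
                                count += 1
    return count
```

Because no pair separated by less than the cutoff can span non-adjacent cells, the two routines agree exactly, while the cell-list version avoids the quadratic scan.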

H. Summary of Specific Challenges and Opportunities

The sections that follow identify a large number of challenges in theory, modeling, and simulation in nanoscience, together with opportunities for overcoming those challenges. Because an assembly of 50 scientists and applied mathematicians is too small a number to represent all the active areas of nanoscience and related mathematics, the list compiled at the workshop is necessarily incomplete. However, a summary of even the partial catalog accumulated during the workshop and described briefly in the following sections should make a compelling case for investment in theory, modeling, and simulation in nanoscience. The challenges and opportunities include:

• To determine the essential science of transport mechanisms, including electron transport (fundamental to the functionality of molecular electronics, nanotubes, and nanowires), spin transport (fundamental to the functionality of spintronics-based devices), and molecule transport (fundamental to the functionality of chemical and biological sensors, molecular separations/membranes, and nanofluidics).

• To simulate with reasonable accuracy the optical properties of nanoscale structures and to model nanoscale optoelectronic devices, recognizing that in confined dimensions, optical properties of matter are often dramatically altered from properties in bulk.

• To devise theoretical and simulation approaches to study nano-interfaces, which are necessarily highly complex and heterogeneous in shape and substance, and are often composed of dissimilar classes of materials.

• To simulate complex nanostructures involving many molecular and atomic species, as well as the combination of “soft” biological and/or organic structures and “hard” inorganic ones.

• To devise theoretical and simulation approaches to nano-interfaces between hard and soft matter that will play central roles in biomolecular materials and their applications.

• To address the central challenge of bridging a wide range of length and time scales so that phenomena captured in atomistic simulations can be modeled at the nanoscale and beyond.

• To simulate self-assembly, the key to large-scale production of novel structures, which typically involves many temporal and spatial scales and many more species than the final product.

• To devise theoretical and simulation approaches to quantum coherence and decoherence, including tunneling phenomena, all of which are central issues for using nanotechnology to implement quantum computing.

• To devise theoretical and simulation approaches to spintronics, capable of accurately describing the key phenomena in semiconductor-based “spin valves” and “spin qubits.”

• To develop self-validating and benchmarking methodologies for modeling and simulation, in which a coarser-grained description (whether it is atomistic molecular dynamics or mesoscale modeling) is always validated against more detailed calculations, since appropriate validating experiments will often be difficult to perform.
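As one minimal illustration of this self-validation idea (entirely our own sketch: a Morse potential stands in for the “more detailed calculation” and a harmonic spring for the coarser-grained description), one can quantify where the coarse model is trustworthy by measuring its deviation from the reference over a chosen window:

```python
import math

def morse(r, D=1.0, a=1.5, r0=1.0):
    """'Detailed' reference: Morse pair potential (arbitrary units)."""
    x = math.exp(-a * (r - r0))
    return D * (1.0 - x) ** 2

def harmonic(r, k, r0=1.0):
    """Coarse-grained surrogate: harmonic spring about the same minimum."""
    return 0.5 * k * (r - r0) ** 2

def validation_error(k, lo, hi, n=200):
    """RMS deviation of the coarse model from the reference over [lo, hi]."""
    s = 0.0
    for i in range(n):
        r = lo + (hi - lo) * i / (n - 1)
        s += (harmonic(r, k) - morse(r)) ** 2
    return math.sqrt(s / n)
```

With the spring constant k = 4.5 chosen to match the curvature of the Morse well at its minimum (2Da² for these parameters), the deviation is negligible near the minimum and grows rapidly away from it, delimiting the coarse model's range of validity in place of a validating experiment.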

For each entry in this list, and the many that can be added to it, there is an established state of the art together with an array of specific technical issues that must be faced by researchers. For all of the challenges listed, there is also an array of fundamental questions to be addressed by researchers in applied mathematics.

This report is a brief synthesis of the effort of approximately 50 experts in the nanosciences, mathematics, and computer science, drawn from universities, industry, and the national laboratories. Following the scientific presentations and a panel discussion on the roles of applied mathematics and computer science, the principal recommendations of the workshop were developed in breakout sessions. Three of them focused on nanoscience directly:

• Well Characterized Nano Building Blocks

• Complex Nanostructures and Interfaces

• Dynamics, Assembly, and Growth of Nanostructures

Three others focused on the role of applied mathematics and computer science:

• Crossing Time and Length Scales

• Fast Algorithms

• Optimization and Predictability

There were, of course, other possible ways to organize the discussions, but the outcome would likely have been the same no matter what the organization of the topics. Nevertheless, the structure of this report reflects this particular selection of categories for the breakout sessions.


Giant Magnetoresistance in Magnetic Storage

The giant magnetoresistance (GMR) effect was discovered in 1988 and within a decade was in wide commercial use in computer hard disks (Figure 3) and magnetic sensors. The technology significantly boosted the amount of information that could be recorded on a magnetic surface (Figure 4). The unprecedented speed of application (less than 10 years from discovery to deployment) resulted largely from advances in theory and modeling that explained the microscopic quantum-mechanical processes responsible for the GMR effect.

Figure 3. GMR and MR head structures. (IBM)

Figure 4. Magnetic head evolution. (IBM)

The effect is observed in a pair of ferromagnetic layers separated by a (generally) nonmagnetic spacer. It occurs when the magnetic moment in one ferromagnetic layer is switched from being parallel to the magnetic moment of the other layer to being antiparallel (Figure 5). When a read head senses a magnetic “bit” on the storage medium, it switches the magnetic moment of one layer and measures the resistance, thus sensing the information on the disk.

Figure 5. Schematic of GMR indicating change in resistance accompanying magnetization reversal upon sensing an opposing bit. (IBM)

Soon after the discovery of GMR, the phenomenon was seen to be related to the different “resistances” of electrons having different spins (up or down). In fact, the magnetic moment itself results from an imbalance in the total number of electrons of each spin type. Modern quantum-mechanical computational methods based on density functional theory (for which Walter Kohn was awarded the 1998 Nobel Prize) then provided a detailed picture of the electronic structure and electron (spin) transport in these systems. Indeed, some unexpected effects, such as spin-dependent channeling, were first predicted on the basis of first-principles calculations and were only later observed experimentally.
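The spin-dependent picture sketched above is often reduced to a textbook “two-current” resistor model (a simplification we add for illustration, not a workshop result): each spin channel sees a small resistance r in a layer whose moment is aligned with the spin and a larger resistance R otherwise, and the two channels conduct in parallel.

```python
def gmr_ratio(r, R):
    """Two-current resistor model of a spin valve.

    r: per-layer resistance for the aligned spin channel
    R: per-layer resistance for the antialigned spin channel

    Parallel moments: one channel sees r twice, the other R twice.
    Antiparallel moments: both channels see r + R.
    """
    # two channels conduct in parallel: Rtot = ch1*ch2 / (ch1 + ch2)
    R_par = (2 * r) * (2 * R) / (2 * r + 2 * R)   # = 2rR / (r + R)
    R_antipar = (r + R) / 2.0                      # both channels r + R
    return (R_antipar - R_par) / R_par             # = (R - r)^2 / (4 r R)
```

With r = 1 and R = 3 in arbitrary units, the model gives a ratio of 1/3, i.e., a 33% resistance change on magnetization reversal; the change vanishes when the two channels are indistinguishable (r = R).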

In fact, GMR is only one aspect of the rich physics associated with the magnetic multilayers now used in read heads. Equally important are oscillatory exchange coupling (which is used to engineer the size of the magnetic field required to switch the device) and exchange bias (which is used to offset the zero of the switching field in order to reduce noise). For exchange coupling, a detailed understanding of the mechanisms responsible has been obtained on the basis of first-principles theory. Less is known about exchange bias, but significant progress has been made on the basis of simplified models and with first-principles calculations of the magnetic structure at the interface between a ferromagnet and an antiferromagnet.

Impressive as these advances in theory and modeling have been, their application to nanomagnetism has only just begun. New synthesis techniques have been discovered for magnetic nanowires, nanoparticles, and molecular magnets; magnetic semiconductors with high Curie temperatures have been fabricated; and spin-polarized currents have been found to drive magnetic-domain walls. When understood through theory and modeling, these findings also are likely to lead to technological advances and commercial applications.


II. Theory, Modeling, and Simulation in Nanoscience

The challenges presented by nanoscience and nanotechnology are not simply restricted to the description of nanoscale systems and objects themselves, but extend to their design, synthesis, interaction with the macroscopic world, and ultimately large-scale production. Production is particularly important if the technology is to become useful in society. Furthermore, the experience of the workshop participants from the engineering community strongly suggests that a commercial enterprise will not commit to large-scale manufacture of a product unless it can understand the material to be manufactured and can control the process to make products within well-defined tolerance limits.

For macroscopic systems—such as the products made daily by the chemical, materials, and pharmaceutical industries—that knowledge is often largely or exclusively empirical, founded on an experimental characterization over the ranges of state conditions encountered in a manufacturing process, since macroscopic systems are intrinsically reproducible. Increasingly, however, this characterization is based on molecular modeling, including all of the tools relevant to modeling nanoscale systems, such as electronic structure, molecular simulation, and mesoscale modeling methods. Indeed, the report Technology Vision 2020: The U.S. Chemical Industry1 has identified molecular modeling as one of the key technologies the chemical industry needs to revolutionize its ability to design and optimize chemicals and the processes to manufacture them.

Molecular modeling already plays a major role in the chemical, materials, and pharmaceutical industries in the design of new products, the design and optimization of manufacturing processes, and the troubleshooting of existing processes. However, the exquisite dependence on details of molecular composition and structures at the nanoscale means that attempting to understand nanoscale systems and control the processes to produce them based solely on experimental characterization is out of the question.

1 American Chemical Society et al., 1996, http://membership.acs.org/i/iec/docs/chemvision2020.pdf.

The field of nanoscience is quite broad and encompasses a wide range of yet-to-be-understood phenomena and structures. It is difficult to define nanoscience precisely, but a definition consistent with the National Nanotechnology Initiative is:

The study of structures, dynamics, and properties of systems in which one or more of the spatial dimensions is nanoscopic (1–100 nm), thus resulting in dynamics and properties that are distinctly different (often in extraordinary and unexpected ways that can be favorably exploited) from both small-molecule systems and systems macroscopic in all dimensions.

Rational fabrication and integration of nanoscale materials and devices offers the promise of revolutionizing science and technology, provided that the principles underlying their unique dynamics and properties can be discovered, understood, and fully exploited. However, functional nanoscale structures often involve quite dissimilar materials (for example, organic or biological in contact with inorganic), are frequently difficult to characterize experimentally, and must ultimately be assembled, controlled, and utilized by manipulating quantities (e.g., temperature, pressure, stress) at the macroscale.


This combination of features puts unprecedented demands on theory, modeling, and simulation: for example, due to nanoscale dimensionality, quantum effects are often important, and the usual theories valid either for bulk systems or for small molecules break down. Qualitatively new theories are needed for this state of matter, intermediate between the atomic scale and a large-enough scale for collective behavior to take over, as well as methods for connecting the nanoscale with finer and coarser scales.

Within the context of this definition of nanoscience and the need for significant advances in our understanding of the nanoscale, three breakout sessions on theory, modeling, and simulation in nanoscale science considered three broad classes of nanosystems:

• The first class consisted of nano building blocks (such as nanotubes, quantum dots, clusters, and nanoparticles), some of which can be synthesized quite reproducibly and well characterized experimentally.

• The second class consisted of complex nanostructures and nano-interfaces,2 reflecting the importance of nano-interfaces in a wide variety of nanoscale systems. This session was specifically asked to consider steady-state properties of complex nanostructures and nano-interfaces.

2 We can distinguish between interfaces in general and nano-interfaces as follows: Typically, an interface separates two bulk phases; consistent with the above definition, we define a nano-interface as one in which the extent of one or more of the phases being separated by the interface is nanoscopic. In nano-interfaces we include nano-interconnects, which join two or more structures at the nanoscale. For nano-interfaces, traditional surface science is generally not applicable.

• The focus of the third session was dynamics, assembly, and growth of nanostructures, and so it was concerned with the dynamical aspects of complex nanostructures. Hence, transport properties (such as electron and spin transport and molecular diffusion) of complex nanostructures, as well as the dynamic processes leading to their creation, particularly self-assembly, were central to this session.

The reports presented below are summaries drafted by the session chairs. Given the ostensibly different charges presented to each group, there is remarkable unanimity in their conclusions with respect to the outstanding theory, modeling, and simulation needs and opportunities. Among the common themes are:

• the need for fundamental theory of dynamical electron structure and transport

• algorithmic advances and conceptual understanding leading to efficient electronic structure calculations of large numbers of atoms

• new theoretical treatments to describe nano-interfaces and assembly of nanostructures

• fundamentally sound methods for bridging length and time scales and their integration into seamless modeling environments

• commitment to an infrastructure for community-based open-source codes.

These common themes have been incorporated into the recommendations and challenges outlined in Section I of this report.


A. Nano Building Blocks

Well characterized building blocks for nanoscience need to be created and quantitatively understood for a number of crucial reasons. A key aspect of construction, whether at the macroscopic or microscopic scale, is the notion of a “building block.” Office buildings are often made from bricks and girders, molecular components from atoms. Just as knowledge of the atom allows us to make and manipulate molecular species, knowledge of bricks and mortar allows us to construct high-rise buildings.

Well-characterized nano building blocks will be the centerpiece of new functional nanomechanical, nanoelectronic, and nanomagnetic devices. A quantitative understanding of their transport, dynamic, electronic, magnetic, thermodynamic, and mechanical properties is crucial to building at the nanoscale. As an example, current theoretical approaches to nanoelectronics often cannot accurately predict I-V curves for even the simplest molecular systems. Without a coherent and accurate theoretical description of this most fundamental aspect of devices, progress in this area will necessarily be limited.
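To make the I-V difficulty concrete: the common starting point for such predictions is the Landauer picture, in which the current is an integral of a transmission function T(E) weighted by the difference of the contact Fermi functions. The sketch below is our own illustration; the single-level Lorentzian transmission and its position and broadening are invented parameters, not a validated molecular model.

```python
import math

KT = 0.025     # thermal energy in eV (roughly room temperature)
G0 = 7.748e-5  # conductance quantum 2e^2/h in siemens

def fermi(E, mu):
    """Fermi-Dirac occupation at energy E (eV) for chemical potential mu."""
    return 1.0 / (1.0 + math.exp((E - mu) / KT))

def transmission(E, eps0=0.5, gamma=0.05):
    """Assumed Lorentzian transmission through one molecular level at
    eps0 (eV), broadened by coupling to the contacts (gamma, eV)."""
    return gamma ** 2 / ((E - eps0) ** 2 + gamma ** 2)

def current(V, n=4000):
    """Landauer current (amperes) at bias V (volts), with the bias
    dropped symmetrically across the two contacts:
    I = (2e/h) * integral of T(E) [f_L(E) - f_R(E)] dE."""
    muL, muR = +V / 2.0, -V / 2.0
    lo, hi = -2.0, 2.0              # integration window in eV
    dE = (hi - lo) / n
    s = 0.0
    for i in range(n):
        E = lo + (i + 0.5) * dE     # midpoint rule
        s += transmission(E) * (fermi(E, muL) - fermi(E, muR)) * dE
    return G0 * s   # (2e/h) * [integral in eV] = (2e^2/h) * volts
```

The hard physics is hidden in T(E); as noted above, computing it accurately for a real molecule bonded to real contacts is precisely where current approaches fall short.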

Theoretical studies of these systems will necessarily have to be based on approximate methods; the systems are simply too large and too complex to be handled purely by ab initio methods. Even where the problems are tractable, we will make much greater progress with well-validated approximate models. Thus a variety of benchmarking methodologies and validation tests are needed to establish the accuracy and effectiveness of new models and approximate theories.

At the nanoscale, what are the nano building blocks that would play a role analogous to that of macro building blocks? Several units come to mind, ranging from clusters less than 1 nm in size to nanoparticles, such as aerosols and aerogels, much larger than 100 nm. However, we believe the best-characterized and most appropriate building blocks are:

• clusters and molecular nanostructures

• nanotubes and related systems

• quantum wells, wires, films, and dots.

These blocks are well defined by experiment and frequently tractable using contemporary theory. Moreover, they have been demonstrated to hold promise in existing technology. We believe that building blocks would have an immediate impact in the five areas described below.

Transport in Nanostructures: Electronic Devices

Any electronic device needs to control and manipulate current. At the nanoscale, new phenomena come into play. Classical descriptions become inoperable in the face of quantum phenomena such as tunneling, fluctuations, confined dimensionality, and the discreteness of the electric charge. At these dimensions, the system will be dominated entirely by surfaces. This presents further complications for electronic materials, which often undergo strong and complex reconstructions. Defining an accurate structure at the interface will be difficult. However, without an accurate surface structure, it will be even more difficult to compute any transport properties. Moreover, current-induced changes within the structure may occur. This situation will require the complex, but not intractable, approach of a careful self-consistent solution in a nonequilibrium environment.
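The exponential sensitivity of tunneling is one reason accurate interface structure matters so much. For a one-dimensional rectangular barrier the transmission has a closed form, sketched below with arbitrary illustrative parameters:

```python
import math

HBAR = 1.0545718e-34  # Planck constant over 2*pi, J*s
ME = 9.10938e-31      # electron mass, kg
EV = 1.602177e-19     # joules per electron volt

def square_barrier_T(E_eV, V0_eV, width_nm):
    """Exact transmission through a 1D rectangular barrier for E < V0:
    T = [1 + V0^2 sinh^2(kappa*a) / (4 E (V0 - E))]^-1,
    with kappa = sqrt(2 m (V0 - E)) / hbar."""
    E, V0 = E_eV * EV, V0_eV * EV
    a = width_nm * 1e-9
    kappa = math.sqrt(2.0 * ME * (V0 - E)) / HBAR
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4.0 * E * (V0 - E)))
```

For a barrier of order 1 eV, the transmission falls by roughly an order of magnitude for each additional few ångströms of width, so small structural errors at an interface translate into large errors in predicted current.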


Optical Properties on the Nanoscale: Optoelectronic Devices

At confined dimensions, the optical properties of matter are often dramatically altered. For example, silicon is the dominant electronic material, but it has poor optical properties for optoelectronic devices such as solar cells or lasers. However, at small dimensions the properties of silicon can be dramatically altered; the band gap of silicon can be blue-shifted from the infrared to the optical region. One of the first manifestations of this effect occurs in porous silicon, which exhibits remarkable room-temperature luminescence. To properly capitalize on such phenomena, a deeper quantitative understanding of the optical properties of matter will be required.
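The blue shift can be estimated, crudely, with an effective-mass particle-in-a-box model: confinement adds the ground-state energies of the electron and hole to the bulk gap. The sketch below uses assumed effective masses and ignores many-body corrections, so it should be read as a trend, not a prediction:

```python
import math

HBAR2_2M = 3.81   # hbar^2 / (2 m_e) in eV * angstrom^2
E_GAP_SI = 1.12   # bulk silicon band gap, eV (infrared)
M_E, M_H = 0.26, 0.39  # assumed electron/hole effective masses (in m_e)

def confined_gap(L_nm):
    """Effective-mass particle-in-a-box estimate of the optical gap of a
    silicon crystallite of size L: the bulk gap plus the ground-state
    confinement energies of the electron and the hole."""
    L = L_nm * 10.0  # nanometers -> angstroms
    shift = HBAR2_2M * math.pi ** 2 / L ** 2 * (1.0 / M_E + 1.0 / M_H)
    return E_GAP_SI + shift
```

In this model a crystallite of roughly 2 nm already has a gap near 1.7 eV, shifted out of the infrared, in qualitative accord with the porous-silicon observations; the 1/L² scaling also makes plain why small size variations produce large optical changes.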

Optical excitations are especially challenging because most commonly used methods for structural energies, such as density functional theory, are not well suited for excited-state properties. The problem is exacerbated for nanoscale systems, where many-body effects are enhanced by the physical confinement of the excitation.

Coherence/Decoherence and Tunneling: Quantum Computing

While silicon dominates electronic devices, there is a fundamental limit to this technology. As the size of the transistor is reduced each year, one will eventually approach new size regimes where traditional electronics must be viewed in different terms. The approach to this regime will happen in the not-too-distant future. Current lithographic techniques can create semiconductor chips with characteristic lengths of only a few hundred nanometers; quantum effects are likely to dominate at tens of nanometers. At this latter length scale, transistor technology will be solidly in the regime of quantum phenomena.

Components will eventually function solely through quantum effects, which we will need the capability to simulate and predict accurately in order to control. It has been suggested that new computers can be devised at the nanoscale that will function through the creation of coherent quantum states. By creating and maintaining such coherent states, it should be possible to store enormous amounts of information with unprecedented security. In order to construct such new quantum computers, new fabrication methods will need to be found and guided by theory. Current technology is far removed from the routine generation of quantum logic gates and quantum networks. Without a better quantitative description of the science at the nanoscale, the emergence of quantum computing will not happen.
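The fragility of such coherent states can be illustrated with a toy pure-dephasing model (our sketch; the Gaussian phase noise and the T2 time are illustrative assumptions, not a physical device model): random phase accumulation washes out the off-diagonal element of the qubit density matrix, which decays as exp(-t/T2).

```python
import math
import random

def dephased_coherence(t, T2, samples=20000, seed=1):
    """Monte Carlo toy model of pure dephasing. Each realization of the
    qubit (|0> + |1>)/sqrt(2) accumulates a random phase whose variance
    grows linearly in time (variance = 2 t / T2). Averaging exp(i*phi)
    over realizations gives |rho01| ~ 0.5 * exp(-t / T2)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * t / T2)
    acc = complex(0.0, 0.0)
    for _ in range(samples):
        phi = rng.gauss(0.0, sigma)
        acc += complex(math.cos(phi), math.sin(phi))
    return 0.5 * abs(acc) / samples
```

The same averaging argument is why decoherence times, rather than gate speeds alone, set the practical limit on how many coherent operations a quantum device can perform.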

Soft/Hard Matter Interfaces: Biosensors

Nanotechnology has the potential to enable great advances in medical technology and biosecurity. The technologies for controlling and processing materials for semiconductor devices can be used to make new types of nano-interfaces. Nanostructure synthesis and the joining of biological or soft matter with traditional semiconductors like silicon would offer new pathways to the construction of novel devices. For example, one could gain the ability to develop new types of sensors directly integrated into current device technology. A number of clear applications, such as biosensors for national security, drug delivery and monitoring, and disease detection, are plausible.

However, numerous complex and difficult issues must be addressed. These include quantitative descriptions of nanofluids, reaction kinetics, and selectivity to particular biological agents. Perhaps no systems are as complex at the nanoscale as these, because they require understanding the details of interactions between quite diverse systems, i.e., soft, biological materials interfaced with semiconductors or other inorganic solids.

Spintronics: Information Technology

Spintronics centers on controlling the spin of electrons in addition to controlling other transport properties. For example, dilute magnetic semiconductors have attracted considerable attention because they hold the promise of using electron spin, in addition to charge, for creating a new class of spintronic semiconductor devices with unprecedented functionality. Suggested applications include spin field-effect transistors, which could allow for software reprogramming of the microprocessor hardware during run time; semiconductor-based spin valves, which would result in high-density, non-volatile semiconductor memory chips; and even spin qubits, to be used as the basic building block for quantum computing.

At the nanoscale, spintronic devices offer special challenges owing to the enhanced exchange terms arising from quantum confinement. Few quantitative studies exist on the magnetic properties of quantum dots and the evolution of ferromagnetic properties with size.

Implications for Theory and Modeling

These five areas of research offer great promise but will require new theories (approximate models) and computationally intensive studies. Current algorithms must be made more efficient, and sometimes more accurate, or new algorithms must emerge in order to address the challenges highlighted above. For example, ab initio methods can address systems with hundreds of atoms, and empirical methods can handle tens of thousands of atoms. To fully exploit these approaches, algorithms need to be made scalable and take full advantage of current computer technology, i.e., highly parallel environments. Fully ab initio methods are especially useful in providing benchmarks for these building blocks, where experimental data is lacking or is unreliable due to unknown variability or uncontrolled critical function characteristics. Embedding methods, whereby one can take the best features of a particular approach, will be crucial.

Algorithms for particular aspects of nanoscale systems need to be developed. For example, ab initio methods for optical properties, such as the GW and Bethe-Salpeter methods, are known to be accurate and robust, yet they are limited in their applicability to rather small systems, e.g., systems of fewer than 50 atoms. The essential physical features of these methods need to be adapted to address much larger systems.

Nonequilibrium and quantum dynamics present special problems. These systems often lack clear prescriptions for obtaining reliable results. Some recent attempts to match experiments on electron transport through molecular systems often get the correct transport trends, but not quantitative numbers. A number of entanglements exist: Is the experimental data unreliable owing to problems with contacts? Or is there some fundamental component missing from our understanding? New experiments will need to be designed to ensure reproducibility and the validity of the measurements. New theories will need to be constructed and cross-checked by resorting to the fundamental building blocks.

A chronic and continuing challenge arising in most nanosystems concerns a lack of understanding of structural relationships. Often nanosystems have numerous degrees of freedom that are not conducive to simple structural minimizations. Simulated annealing, genetic algorithms, and related techniques can be used for some systems, but in general new optimization techniques will be required in order to obtain a quantitative description of the structure of nanoscale systems.
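A minimal version of the simulated-annealing approach mentioned above, applied to a small Lennard-Jones cluster, is sketched below (a toy with invented parameters, making no claim to find the true global minimum):

```python
import math
import random

def lj_energy(pts):
    """Total Lennard-Jones energy of a cluster (epsilon = sigma = 1)."""
    e = 0.0
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def anneal(n=7, steps=20000, T0=0.5, seed=3):
    """Metropolis simulated annealing of an n-atom cluster from a random
    start, with slow geometric cooling. Returns (E_initial, E_final)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-1.5, 1.5) for _ in range(3)] for _ in range(n)]
    e = e0 = lj_energy(pts)
    T = T0
    for _ in range(steps):
        i = rng.randrange(n)
        old = pts[i][:]
        pts[i] = [c + rng.gauss(0.0, 0.1) for c in old]  # trial move
        e_new = lj_energy(pts)
        # accept downhill moves always, uphill with Boltzmann probability
        if e_new < e or rng.random() < math.exp(-(e_new - e) / T):
            e = e_new
        else:
            pts[i] = old
        T *= 0.9997  # geometric cooling schedule
    return e0, e
```

Even this naive scheme reliably lowers the energy from a random start; the difficulty the text points to is that realistic nanosystems have vastly more degrees of freedom and far more expensive energy evaluations, so such generic schemes stall far from the relevant structures.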

While great strides have been made in simulation methods, a number of fundamental issues remain. The diversity of time and length scales remains a great challenge at the nanoscale. Rare-event simulation methods will be essential in describing a number of phenomena such as crystal growth, surface morphologies, and diffusion. Of course, the transcription of quantum force fields to classical interatomic potentials is a difficult task. Intrinsic quantum attributes like hybridization and charge transfer remain a challenge to incorporate into classical descriptions. At best, current classical-like descriptions can be used for targeted applications.

In order to facilitate dissemination of new algorithms and numerical methods, open-source software should be encouraged and made widely available. Toolkits and efficient codes should be offered without restrictions for distribution to as wide an audience as possible.

B. Complex Nanostructures and Interfaces

Central to nanoscience is the assembly and manipulation of fundamental building blocks on the nanoscale to create functional structures, materials, and devices (Figure 6). Unlike traditional materials, the nanostructured materials that will enable the nanotechnology revolution will be highly complex and heterogeneous in shape and substance, composed of dissimilar materials, and dominated by nano-interfaces (Figure 7). These characteristics require that new theoretical approaches and computational tools be developed to provide a fundamental understanding of complexity at the nanoscale, the physics and chemistry of nano-interfaces, and how interfaces and complexity at the nanoscale control properties and behavior at larger scales.

Nano-interfaces and complexity at the nanoscale play a central role in nearly all aspects of nanoscience and nanotechnology. For example, in molecular electronic devices, DNA and other biomolecules such as proteins are being explored both as assemblers of molecular components like carbon nanotubes and semiconductor quantum dots (Figure 8), and as active components themselves (Figures 9 and 10). In such devices, soft biomolecular matter and hard matter will come together at complex nano-interfaces across which transport of charge may occur, or which may provide structural stability. In new spintronic devices, the nano-interfaces between ferromagnetic and antiferromagnetic domains may control the speed of switching needed for faster computers. In polymer nanocomposites, where nanoscopic particles are added to a polymer, interactions at the polymer/particle nano-interface can have profound effects on bulk properties, leading to stronger yet lighter-weight structural and other materials. The manipulation of interfaces in soft matter systems at the nanoscale provides a wealth of opportunities to design functional, nanostructured materials for templating, scaffolds for catalysis or tissue growth, photonic devices, and structural materials (Figure 11).

The high surface-to-volume ratio resulting from the prevalence of nano-interfaces, combined with the complexity of these nano-interfaces and of nanostructures, provides much of the challenge in developing predictive theories in nanoscience. In this respect, materials designed at the nanoscale are distinctly different from traditional bulk materials. One of the largest driving forces behind the need for new theory and simulation in nanoscience is the fact that there are few experimental techniques capable of directly imaging or probing nano-interfaces. Typically, those techniques having atomic-scale resolution require extensive theory to produce more than qualitative information. Indeed, it is likely that in many cases the fundamental interfacial and spatial information required to predict the behavior and performance of nanostructures will come from simulation and theory.

Figure 6. Three-dimensional view of Charles Lieber's concept for a suspended crossbar array shows four nanotube junctions (device elements), with two of them in the “on” (contacting) state and two in the “off” (separated) state. The bottom nanotubes lie on a thin dielectric layer (for example, SiO2) on top of a conducting layer (for example, highly doped silicon). The top nanotubes are suspended on inorganic or organic supports (gray blocks). Each nanotube is contacted by a metal electrode (yellow blocks). (IBM)

Figure 7. Schematic of an assembled structure of nanoscopic building blocks tethered by organic “linkers.” The structure is dominated by interfacial connections. (S. C. Glotzer, 2002)

Figure 8. Top: Schematic of nanoscale building blocks tethered by DNA. The “soft/hard” interface in this complex nanostructure controls the transport of information through the structure. Bottom: Schematic of nanoscopic gold nanoparticles assembled onto a surface with DNA. Such guided assembly can be used, e.g., to position the nanoparticles for fabrication into a nanowire.

Figure 9. Atomistic simulation of a nanostructure composed of polyhedral oligomeric silsesquioxane cages. (J. Kieffer, 2002)

Figure 10. Simulation of organic oligomers attracted to the surface of a nanoscopic quantum dot. (F. W. Starr and S. C. Glotzer, 2001)

Figure 11. Simulation of nanostructured domains in a polymer blend where the interfaces are controlled at the nanoscale. (S. C. Glotzer, 1995)

Focused theory and simulation at a wide range of scales are needed to develop a quantitative understanding of nano-interfaces and their consequent influence on the properties and emergent behavior of nanoscale materials and devices. Breakthroughs in theory and simulation will also be necessary to describe and predict the complexity of nanostructures and the relationship between structural complexity and emergent properties and behavior.

A major challenge for nanoscience in which theory, modeling, and simulation will play a pivotal role lies in the definition of the nano-interface, which is nontrivial because the definition may depend on the properties or behavior of interest. For example, the nano-interface relevant to a particular mechanical behavior may differ from the one relevant to transport properties, such as electrical transport. Strategies for addressing nano-interfaces will also be needed. With an appropriate definition of nano-interfaces, new approaches can be developed to characterize the relevant nano-interfaces and to construct quantitatively predictive theories that relate the nano-interfaces to nanoscale properties and behavior of interest.

To develop theories and a quantitative understanding of nano-interfaces, it will be necessary to obtain information on the organization of matter at nano-interfaces, that is, the arrangement, composition, conformation, and orientation of matter at hard-hard, hard-soft, and soft-soft nano-interfaces. Much of this information may be accessible only by simulation, at least for the foreseeable future. The geometry and topology of nano-interfaces, bonding at nano-interfaces, structure and dynamics at or near a nano-interface, transport across nano-interfaces, confinement at nano-interfaces, deformation of nano-interfaces, and activity and reactivity at nano-interfaces must all be investigated by simulation. Such information is needed before very accurate theories relating all of these to material and device operation and behavior can be fully formulated and validated.

For example, to properly design, optimize, and fabricate molecular electronic devices and components, we must quantitatively understand transport across soft/hard contacts, which in turn will likely depend on the arrangement of components on the surface and the complexity of structures that can be assembled in one, two, and three dimensions. Comprehending transport will likely require a quantitative understanding of the nature of nano-interfaces between biological matter and hard matter, between synthetic organic matter and hard matter, and between biological matter and synthetic organic matter. It will also require optimization of structures and tailoring of nano-interfaces, which in turn will require a quantitative understanding of the stability of nano-interfaces.

As we move towards the goal of first-principles-based, fully theoretical prediction of the properties of nanoscale systems, there is an urgent need to critically examine the application of bulk thermodynamic and statistical mechanical theories to nanostructures and nano-interfaces, where violations of the second law of thermodynamics have been predicted and observed, and where mechanical behavior is not described by continuum theories.


Theories are required that abstract from behavior at a specific nano-interface to predict the behavior of extended structures or materials composed primarily of nano-interfaces. Additionally, when describing nanostructures that are collections of a countable number of fundamental building blocks, one must question whether the concept of temperature has the same meaning as in a bulk system. An important task for theoreticians and simulationists will be to identify what can be brought to nanoscience from traditional methods of investigation and description, and then to determine what new theories or approaches must be invoked at the nanoscale.

When materials or devices are dominated by nano-interfaces, new physics is likely to result. Examples of nano-interface-dominated physics include the electric field at surfaces, luminescence of rare-earth orthosilicates, surface-enhanced Raman scattering, and deformation of nanocrystalline materials. Nanocomposites and nanoassemblies are just two examples of nanostructures where nano-interfaces are prevalent and may dominate behavior at larger scales. The high surface-to-volume ratio of complex nanostructures may also lead to novel collective or emergent phenomena, which would require new theory and simulation efforts to address and understand.

Because of the present lack of theories to fully describe nano-interfaces and their ramifications at larger scales, and the lack of experimental data on nano-interfaces, simulations will play a pivotal role in advancing knowledge in this area. Ab initio calculations in particular will be required, but the present state of the art limits the number of atoms to numbers too small to obtain meaningful data in many instances. Consequently, breakthroughs in methodology will be required to push ab initio simulations to larger scales. Ab initio simulations containing on the order of 10,000 atoms would enable, for example, the study of spin structure at antiferromagnetic-ferromagnetic nano-interfaces.

New or improved computational methods are also needed to simulate the dynamics of nano-interfaces from first principles. Such simulations will be needed to predict, for example, electronic transport across nano-interfaces, as well as to provide validation or benchmarks for coarser-grained theoretical descriptions of nano-interfaces in which electronic structure is not explicitly considered. For molecular dynamics and other classical, particle-based methods, new (and, for many purposes, reactive) force fields capable of describing nano-interfaces between dissimilar materials are urgently needed. For some problems, further coarse-graining will be required to capture complexity on all relevant length and time scales, and thus new mesoscale theories and computational methods will be needed. To connect the properties and behavior of complex nanostructures and/or nano-interfaces to properties and behavior at the macroscale, it may be necessary to develop new constitutive laws (and to ask whether it is even appropriate to speak of constitutive laws in systems whose properties and behavior are dominated by nano-interfaces) and then to develop and integrate theories at different levels.


C. Dynamics, Assembly, and Growth of Nanostructures

The focus of this section is the time-dependent properties of nanostructures (such as transport properties) and the time-dependent processes used to produce nanostructures and nanostructured materials (such as directed self-assembly, nucleation and growth, and vapor deposition methods).

The primary route to manufacturable functional nanoscale systems is universally conceded to be self-assembly, which may or may not be directed. Self-assembly of nanostructured materials is scalable to industrial levels of production because it does not require control at the nanoscale (such as manipulation via an AFM tip). There is a wide variety of transport mechanisms relevant to nanoscience, including electron transport (relevant to molecular electronics, nanotubes, and nanowires), spin transport (relevant to spintronics-based devices; see Figure 12), and molecule transport (relevant to chemical and biological sensors, molecular separations/membranes, and nanofluidics).

Examples of nanostructures and nanostructured systems formed by (or capable of formation by) self-assembly include quantum dot arrays, nano- and microelectromechanical systems (NEMS and MEMS), nanoporous adsorbents and catalytic materials (produced by templated self-assembly; see Figure 13), nanocrystals, and biomimetic materials. This list is by no means exhaustive.

The role that mathematics and computer science researchers can play is to apply mathematical and computational tools developed in other contexts to the understanding, control, design, and optimization of nanostructured materials and functional systems based on nanoscale structures. Each of these elements—understanding, control, design, and optimization—is essential to the manufacturability of systems composed of nanoscale objects, as noted by Stan Williams of Hewlett-Packard in the opening talk of the workshop.

Figure 12. Spintronics devices utilize an electron’s spin rather than its charge. Shown here is an artist’s depiction of a proposal by Bruce Kane of the University of Maryland for a quantum computer based on the nuclear spin of phosphorus atoms (green), which interact by means of spin-polarized electrons (red). The quantum properties of superposition and entanglement may someday permit quantum computers to perform certain types of computations much more quickly, using less power, than is possible with conventional charge-based devices. (From S. Das Sarma, “Spintronics,” American Scientist 89, 516 (2001))

Figure 13. Templated self-assembly of nanoporous material—in this case, MCM-41. (From X. S. Zhao, “Synthesis, modification, characterization and application of MCM-41 for VOC control,” Ph.D. thesis, University of Queensland, 1998)

Among the major challenges facing theory, modeling, and simulation (TMS) in the dynamics, assembly, and growth of nanostructures are the simulation of self-assembly (which typically involves many temporal and spatial scales, and many more species than the final product); methods for calculating electron and spin transport properties in nanoscale systems; and the processing and control (particularly to achieve more uniform size distributions) of the self-assembly of heterostructures composed of hard materials (such as group IV semiconductors, obtained by combined ion and molecular beam epitaxial growth).

The TMS methods needed to faithfully model the dynamics, assembly, and growth of nanostructures include the usual methods we associate with TMS (electronic structure methods, atomistic simulation, mesoscale and macroscale methods), but with additional variations specific to these problems. These variations include:

• Hybrid electronic structure/molecular dynamics methods scalable to very large systems are needed to model combined reaction and structure evolution.

• Coarse-graining/time-spanning methods are crucial, owing to the long times associated with self-assembly and the large microscopic and mesoscopic domains that exist in many systems.

• Since many self-assembly processes take place in solution, electronic structure methods incorporating solvation are necessary.

• Molecular and nanoscale methods that incorporate phase equilibria and nonequilibrium multiphase systems (including interfaces) are required.

Participants also saw the necessity of traditional finite element elastic/plastic codes for stress/strain description at the macroscale, since ultimately nanostructured materials must connect with the macroscopic world. It was noted that upscaling (going to coarser length and time scales) typically results in the introduction of stochastic terms to reflect high-frequency motion at smaller scales; thus, more rigorous methods for including stochasticity, and more effective methods for characterizing and solving stochastic differential equations, are required.
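As a concrete illustration of the kind of stochastic term that upscaling introduces, the sketch below integrates a one-dimensional overdamped Langevin (Ornstein-Uhlenbeck) equation with the Euler-Maruyama method. This is a generic textbook example, not a method from the workshop; the drift and noise parameters are arbitrary.

```python
import numpy as np

def euler_maruyama(gamma, sigma, x0, dt, n_steps, rng):
    """Integrate the overdamped Langevin SDE dX = -gamma*X dt + sigma dW.

    The deterministic drift stands for the resolved coarse-scale dynamics;
    the noise term stands in for unresolved high-frequency motion at
    smaller scales, as described in the text.
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] - gamma * x[i] * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
path = euler_maruyama(gamma=1.0, sigma=0.5, x0=1.0, dt=1e-3, n_steps=10_000, rng=rng)
```

Note that the Wiener increments scale as the square root of the time step, which is what distinguishes SDE integration from ordinary ODE time stepping and is one source of the rigor issues mentioned above.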

New developments in nonequilibrium statistical mechanics, of both classical and quantum systems, are needed. For example, the recent discovery by nonequilibrium molecular dynamics, confirmed by nonlinear response theory, of violations of the second law of thermodynamics in nanoscale systems indicates the importance of new theoretical developments focused on nanoscale systems. Quantifying errors in calculated properties was identified as a significant problem with two parts: characterization of the uncertainty due to inaccuracies inherent in the calculation (such as limitations of a force field in molecular dynamics or of a basis set in electronic structure calculations), and the performance of useful sensitivity analyses.

The important aspect of nanoscale systems that makes them such a TMS challenge is that the characterization of uncertainty will often have to be done in the absence of experimental data, since many of the property-measurement experiments one would like to perform on nanoscale systems are impossible, or are unlikely to be done even where they are possible. Hence, the concept of self-validating TMS methods arises as a significant challenge in nanoscience. By self-validating TMS methods we mean that a coarser-grained description (whether atomistic molecular dynamics or mesoscale modeling, for example) is always validated against more detailed calculations (e.g., electronic structure calculations in the case of atomistic molecular dynamics, and atomistic molecular dynamics in the case of mesoscale modeling).
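Such a validation loop can be expressed generically. In the sketch below, `fine_model` and `coarse_model` are hypothetical stand-ins for a detailed calculation and its coarse-grained surrogate (say, an electronic structure energy and a force-field energy) evaluated on the same sampled configurations; nothing about the specific functions comes from the report.

```python
import numpy as np

def self_validate(coarse_model, fine_model, configs, tol):
    """Validate a coarse-grained model against a finer-grained reference.

    Returns the worst absolute discrepancy over the sampled configurations
    and whether it falls within the requested tolerance.
    """
    errors = [abs(coarse_model(c) - fine_model(c)) for c in configs]
    worst = max(errors)
    return worst, worst <= tol

# Hypothetical example: a harmonic coarse model vs. a slightly anharmonic
# "fine" reference, compared on a handful of sampled configurations.
fine = lambda x: 0.5 * x**2 + 0.05 * x**4
coarse = lambda x: 0.5 * x**2
worst, ok = self_validate(coarse, fine, np.linspace(-1.0, 1.0, 11), tol=0.1)
```

The same pattern applies one level up, with the atomistic model playing the role of the fine reference for a mesoscale description.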

Various computational issues were identified as important for computational nanoscience. In particular, an open-source software development framework for multiscale modeling was identified as a real need. The OCTA project in Japan (http://octa.jp) may point the way towards development of such a framework, or may indeed be the appropriate framework on which to expand simulation capabilities. Visualization of nanoscale systems is challenging, since the spatiotemporal scales cover so many orders of magnitude. Truly effective parallelization of Monte Carlo was identified as an important issue; specifically, efficient domain-decomposition parallelization of Monte Carlo on very large systems with long-range forces was seen as an unsolved problem with ramifications for computational nanoscience. Finally, parallel codes for any TMS method that scale efficiently to tens of thousands of processors were targeted, because the highest-performance computers are headed towards this number of processors, yet it is not clear that any scalable codes yet exist for such large machines.

We identified three grand-challenge-scale problems within self-assembly that would require theoretical, algorithmic, and computational advances, and which might be specifically targeted in any future program in computational nanoscience. These are the simulation of the self-assembly of templated nanoporous materials; the self-assembly of nanostructures directed by DNA and DNA-like synthetic organic macromolecules; and the directed self-assembly of quantum dot arrays and other nanoscale building blocks using physical synthesis and assembly methods. Each of these problems has clear relevance to the DOE mission, creating possible paradigm shifts in low-energy separations, catalysis, sensors, electronic devices, structural and functional hybrid materials, and computing. Each involves significantly different chemical and physical systems, so the range of TMS methods needed varies with each problem: the first involves self-assembly from solution; the second, the use of biological or bio-inspired directed self-assembly, again in solution; while the third generally does not involve solutions.

In a related example, Figure 14 shows a new computational method for producing random nanoporous materials by mimicking the near-critical spinodal decomposition process used to produce controlled-pore glasses (CPGs). While it produces model CPGs with pore distributions and sizes similar to those of actual CPGs, the method is not a fully detailed simulation of CPG synthesis. Refinement of such techniques to the point where new nanoporous materials can be discovered and synthesized computationally would be a paradigm-altering step forward in nanostructured materials synthesis.

Figure 14. Schematic of controlled-pore glass model preparation. The phase separation process used to prepare real materials is imitated via molecular dynamics simulation, resulting in molecular models with realistic pore networks and structures. (After Gelb et al., Reports on Progress in Physics 62, 1573–1659 (1999))


III. The Role of Applied Mathematics and Computer Science in Nanoscience

“I am never content until I have constructed a mechanical model of what I am studying. If I succeed in making one, I understand; otherwise I do not. . . . When you measure what you are speaking about and express it in numbers, you know something about it, but when you cannot express it in numbers your knowledge about it is of a meagre and unsatisfactory kind.” —William Thomson (Lord Kelvin), 1824–1907

Kelvin’s mathematical triumphalism is from a pre-quantum-mechanical era and must be discounted as chauvinistic in view of the immense understanding of complex phenomena that can be accumulated prior to satisfactory mathematization. Such hard-won understanding of phenomena, in fact, leads eventually to their mathematization. Nevertheless, Kelvin’s agenda of quantitative understanding remains a prerequisite for simulation and predictive extrapolation of a system, if not for substantial understanding.

Given the enormous power of computational simulation to transform any science or technology supported by a mathematical model, the mathematization of any new field is a compelling goal. The very complexity and breadth of possibilities in nanoscience and technology cry out for an increasing role for computational simulation today. This section, organized around three themes, indicates some of the directions this increasing role might take: in bridging time and length scales, in fast algorithms, and in optimization and predictability. In each of these representative areas, there are instances of “low-hanging fruit,” of problems that are reasonably well defined but whose solutions present significant technical challenges, and of phenomena that are not yet at all well defined by Kelvinistic standards.

In a complementary way, mathematics and simulation would be enormously stimulated by the challenges of nanoscale modeling. While some rapidly developing areas of mathematics, such as fast multipole and multigrid algorithms, are ready to apply (in fact, are already being used) in nanoscale modeling, in other areas there is much work to do just to get to the frontier of the science.

It is important to keep in mind that no one can predict where the next mathematical breakthrough of consequence to nanoscience will come from: it could be from computational geometry or from graph theory; it might come from changing the mathematical structure of existing models. Changing the mathematical structure creates a new language, which can often expose an emergent simplicity in a seemingly complex landscape. For instance, no one could have predicted the significance of space-filling curves (a topological fancy) to the optimal layout of data in computer memories.

However, it is predictable that the forward progress of mathematical modeling will carry first one, then another area of nanoscience forward with it—provided that the minds of nanoscientists and mathematicians alike are prepared to experiment, to look for the points of connection, and to recognize opportunities as they arise. The many obvious connections described below should not overshadow the potential for the emergence of perhaps more important connections between nanoscience and other areas of mathematics in the near future.

Hence, it will be important at this early stage not to define too narrowly what forms of applied mathematics will be important to nanoscience applications. Furthermore, the collaborative infrastructure and cross-fertilization between projects that can be facilitated in a DOE initiative foster creativity while making the recognition of otherwise non-obvious connections and opportunities more likely.

One of the simplest initial objectives of a collaborative dialog between scientists of the nanoscale regime and mathematical modelers is to be sure that the low-hanging fruit is regularly plucked. Mathematical results, and software libraries to implement them—many sponsored by the Department of Energy—arrive in a steady stream and readily find application in fields that traditionally engage applied mathematicians, such as engineering mechanics, semiconductor device physics, signal processing, and geophysical modeling. The differential and integral operator equations underlying mathematical physics have been placed on a firm foundation. Issues of existence and uniqueness, convergence and stability of numerical approximation, accuracy and algorithmic complexity, and efficiency and scalability of implementation on the agency’s most powerful computers are well understood in many scientific areas.

The variety of simulation tools available is enormous and includes Eulerian-type and Lagrangian-type adaptive methods for the evolution of fields, level sets, and particles. Complex multiscale algorithms have been developed for the optimal solution of equilibrium and potential problems. Wonderful tools exist for creating low-dimensional representations of apparently high-information-content phenomena. Stochastic modeling techniques pull averages and higher moments from intrinsically nondeterministic systems. Moreover, the limits of applicability of these tools and techniques are well understood in many cases, permitting their use in a cost-effective way, given a requirement for accuracy. Sensitivity and stability analyses can accompany solutions.

The interaction of scientists and engineers with applied mathematicians leads to continual improvement in scientific and mathematical understanding, in computational tools, and in the interdisciplinary training of the next generation of each. Owing to the novelty of nanoscale phenomena, such interaction with applied mathematicians is often missing. These interactions must be systematically improved in existing and new areas.

At the next level of objectives for a nanoscience-mathematics collaboration lie the problems that are apparently well defined by nanoscientists but for which there are no routinely practical mathematical solutions today. The number of particles that interact, the number of dimensions in which the problem is set, the range of scales to be resolved, the density of local minima in which the solution algorithm may be trapped, the size of the ensemble that needs to be examined for reliable statistics, or some other feature puts the existing model beyond the range of today’s analysts and the capacities of today’s computers. In these areas, some fundamentally new mathematics may be required.

Finally, there exists a set of problems for which mathematicians must come alongside nanoscientists to help define the models themselves. A critical issue for revolutionary nanotechnologies is that the components and systems are in a size regime about whose fundamental behavior we have little understanding. The systems are too small for easily interpreted direct measurements, but too large to be described by current first-principles approaches. They exhibit too many fluctuations to be treated monolithically in time and space, but are too few to be described by a statistical ensemble. In many cases, they fall between the regimes of applicability of trusted models. Perhaps existing theories can be stretched; more likely, new theories, and hence new mathematical models, will be required before robust fundamental understanding emerges.

Once a theoretical model becomes a trusted companion to experiment, it can multiply the effectiveness of experiments that are difficult to instrument or interpret, take too long to set up, or are simply too expensive to perform. Powerful computational engines of simulation can be called upon to produce, to twist the words of R. W. Hamming, not just insight, but numbers as well.

Those numbers are the essence of control and predictability.

A. Bridging Time and Length Scales

Most phenomena of interest in nanoscience and nanotechnology, like many phenomena throughout science and engineering, demand that multiple scales be represented for meaningful modeling. However, phenomena of interest at the nanoscale may be extreme in this regard because of the lower end of the scale range. Many phenomena of importance interact with their environment at macroscopic space and time scales, but must be modeled in any first-principles sense at atomistic scales.

The time scale required to resolve quantum mechanical oscillations may be as small as a femtosecond (10⁻¹⁵ s), whereas protein folding requires microseconds (a range of 10⁹), and engineered nanosystems may require still longer simulation periods; for example, creep of materials relevant to reliability considerations may be measured in years. Counting trials in probabilistic analyses contributes another time-like factor to the computational complexity, although this latter factor comes with welcome, nearly complete algorithmic concurrency, unlike evolutionary time. Meanwhile, spatial scales may span from angstroms to centimeters (a range of 10⁸). The cube of this would be characteristic of a three-dimensional continuum-based analysis. Counting atoms or molecules in a single physical ensemble easily leads to the same range; Avogadro’s number is about 10²⁴.

Exploiting the concurrency available with multiple independent ensembles multiplies storage requirements. Space-time resource requirements are staggering for many simulation scenarios. Therefore, nanoscience poses unprecedented challenges to the mathematics of multiscale representation and analysis, and to the mathematics of synthesizing models that operate at different scales.

Existing techniques may certainly yield some fruit, but we are undoubtedly in need of fundamental breakthroughs in the form of new techniques. Faced with similar types of challenges in macroscopic continuum problems, applied mathematicians have devised a wide variety of adaptive schemes (adaptation in mesh, adaptation in discretization order, adaptation in model fidelity, etc.). Bringing these schemes to a “science” has required decades, and is not complete in the sense of predictable error control for a given investment, e.g., for some hyperbolic problems. Probably the greatest range of scales resolved purely by mesh adaptivity to date is about 10 decades, in cosmological gravitation problems. Of course, this range is practical only when the portion of the domain demanding refinement is small compared to the domain size required to include all of the objects of interest or, what is often the more demanding requirement, to graft onto reliably posed boundary conditions in the far field. Migrating such adaptive schemes into production software (typically through community models, sometimes in commercial analysis packages) lags their scientific development by at least an academic generation, and usually more. The challenge for nanoscience is therefore not only great, but also urgent. Applied mathematicians being trained today can acquire a doctoral degree without ever having been exposed to nanoscale phenomena.

The mathematical models and algorithms of nanoscience cover the gamut of continuous and discrete, deterministic and random. Ultimately, all simulations are executed by deterministic programs (at least, to within the limits of race conditions on parallel processors) on discrete data. The most capable computer platforms of 2002 (costing in excess of $100M) execute at most tens of teraflop/s (10¹³ floating point operations per second) and store in fast memory at most tens of terabytes, and in disk farms at most hundreds of terabytes. Moreover, these machines do not perform efficiently for models at these capability extremes. A typical extreme problem in computational science today encompasses only billions of discrete degrees of freedom (filling storage larger than this with auxiliary data, including geometry, physical constitutive data, and linear algebra workspaces) and runs for days at tens to hundreds of gigaflop/s. Routine problems encompass only millions of degrees of freedom and run for hours at around 1 gigaflop/s.

The wildest extrapolations of Moore’s Law for silicon cannot sate the appetite for cycles and memory in nanoscience modeling. It is clear that a brute-force, first-principles approach to nanoscale phenomena will not succeed in bridging scales—or perhaps, given the promise of nanotechnology, we should say that it can succeed only through some “spiral” process, whereby nanotechnology yields superior computers, which yield superior nanoscale models, leading to improved nanotechnology and better computers, and so forth.

We begin with a brief inventory of successful scale-bridging models in applied mathematics today.

Mathematical homogenization (or “upscaling”)—whereby reliable coarse-scale results are derived from systems with unresolvable scales, without the need to generate the (out-of-reach) fine-scale results as intermediates—is a success story with extensive theory (dating at least to the 1930s) and applications. The trade-off between accuracy and efficiency is well understood. It is particularly fruitful, for instance, in simulating flows through inhomogeneous porous media or wave propagation through inhomogeneous transmissive media with multiple scatterers. One begins by considering just two scales. Given a 2 × 2 matrix of interactions: “coarse-coarse,” “coarse-fine,” “fine-coarse,” and “fine-fine,” where the first member denotes the scale of an output behavior and the second member the scale of the input that drives the behavior, one can formally solve to eliminate the “fine-fine” block. (If the interactions are all linear, for instance, this can be done with block Gaussian elimination.) This leaves a Schur complement system for the “coarse-coarse” block, where the interaction matrix is the original “coarse-coarse” block and a triple-product term with the “fine-fine” inverse in the middle. The effect of the triple-product term can often be reliably approximated, yielding a model for coarse scales that incorporates the dynamics of both scales.
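For a linear two-scale system, the elimination just described can be written down directly. The sketch below (an illustrative toy with randomly generated, diagonally shifted blocks, not an example from the report) forms the Schur complement of the “fine-fine” block and verifies that solving the reduced coarse-scale system reproduces the coarse part of the full solution:

```python
import numpy as np

# Toy two-scale linear system: [A_cc A_cf; A_fc A_ff] [x_c; x_f] = [b_c; b_f].
# Eliminating the fine unknowns yields the Schur complement system
#   (A_cc - A_cf A_ff^{-1} A_fc) x_c = b_c - A_cf A_ff^{-1} b_f,
# a coarse-scale model that retains the influence of the fine scale
# through the triple-product term.
rng = np.random.default_rng(1)
n_c, n_f = 3, 5
A_cc = rng.standard_normal((n_c, n_c)) + 4 * np.eye(n_c)
A_cf = rng.standard_normal((n_c, n_f))
A_fc = rng.standard_normal((n_f, n_c))
A_ff = rng.standard_normal((n_f, n_f)) + 4 * np.eye(n_f)   # well conditioned
b_c = rng.standard_normal(n_c)
b_f = rng.standard_normal(n_f)

S = A_cc - A_cf @ np.linalg.solve(A_ff, A_fc)    # Schur complement
rhs = b_c - A_cf @ np.linalg.solve(A_ff, b_f)
x_c = np.linalg.solve(S, rhs)                    # coarse unknowns only

# Check against the monolithic fine+coarse solve.
A = np.block([[A_cc, A_cf], [A_fc, A_ff]])
x_full = np.linalg.solve(A, np.concatenate([b_c, b_f]))
assert np.allclose(x_c, x_full[:n_c])
```

In practical homogenization one never forms the “fine-fine” inverse explicitly; the point of the method is that the triple-product term is approximated cheaply, as the text notes.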

In principle, this process can be repeated recursively. The standard theory succeeds when there is good separation of scales between the inhomogeneous structure to be "homogenized" and the scales of interest in the solution. When there is significant interaction across length scales (when the "coarse-fine" and "fine-coarse" interactions are large), the computation of effective properties at the coarse scales becomes more difficult. Work continues on such concepts as "multilevel renormalization," and homogenization should certainly be studied for fruitful approaches to nanoscale science.

Another successful and now quotidian means of bridging scales in modeling is the concept of "space sharing" of different models with different resolution requirements. For instance, crack propagation in a solid can be effectively modeled with adaptive finite elements for linear elasticity in the mid and far fields, grafted onto molecular dynamics models at and immediately surrounding the crack tip and the newly exposed surface. As another example, the Richtmyer-Meshkov instability in fluid mechanics has been studied with a conventional computational fluid dynamics analysis for Navier-Stokes in the mid and far fields and direct simulation Monte Carlo on "lumped" fluid particles at the mixing interface. Atmospheric reentry problems have similarly been successfully modeled with sparse continuum models away from the body and Boltzmann models in the boundary layer at the body surface.

Very significant scientific and engineering progress can be made on today's high-end computers with such two-physics scale bridging. Less dramatically, one can also cite a multitude of successful dual-continuum approaches with different asymptotic forms of the governing equations in different regimes, e.g., classical boundary layer theory, in which viscosity terms are computed only in regions of steep gradients. In all of these examples, it is important that the dual models have a regime of overlap, where their solutions can be simultaneously represented and joined. In this sense, these approaches are somewhat complementary to that of homogenization. A still less dramatic example is the scenario in which the physical model remains the same throughout the domain, but its mathematical representation changes for computational efficiency. The classic example is a combination of finite elements to resolve the geometry around a complex object (e.g., a scatterer) in a differential equation approach and boundary elements to resolve the far field in an integral equation approach. These space-sharing approaches sometimes (depending upon the difficulty of the application) lack the mathematical rigor of adaptive approaches within a single physical and mathematical formulation, but with care to monitor and enforce conservation properties at the interface, they are practical and may well be worth emulating for nanoscale phenomena.

A somewhat more trivial instance of bridging scales is what we might call "time sharing" of models, in analogy to "space sharing." In such cases, different dynamically relevant physics "turns on" at different points in the simulation, so that it is acceptable to step over fast scales during one period but necessary to resolve them in another. Modeling ignition in a combustion system provides one industrially important example. Cosmology enters again here, with vastly different physics relevant in the very early universe (e.g., the first three minutes) than in its long-term evolution.

It is conceivable, with good software engineering, to marry the concepts of space sharing and time sharing of multiple physics models in a large and complex system, so that each region of space-time adaptively chooses the most cost-effective model and advancement algorithm. One concept in the continuum modeling of multirate systems whose applicability to nanoscale multirate systems should be explored is that of implicit integration methods to "step over" fast time scales that are not dynamically relevant to the result, even though they are timestep-limiting, due to stability considerations, in explicit integration. Examples abound, including acoustic waves in aerodynamics, gravity waves in climate modeling, Alfvén waves in magnetohydrodynamics, and fast reactions between intermediate species in the detailed kinetics of combustion.

In these and many other cases, acceptably accurate results on slower (dynamically relevant) time scales can be achieved by making some equilibrium assumption about the fine dynamics in order to filter out the fast scales, enabling the integration to take place in coarse timesteps. The price for this transformation of the equations to be integrated is usually an implicit solve, in which a mathematical balance is enforced on each timestep. The algorithmic cost of an implicit integrator is much higher per step than that of an explicit integrator, but one can take many fewer steps to reach a desired point in simulation time. Whether this trade-off is a winning one for the overall simulation depends upon the cost of the implicit solve. As a rule of thumb in large-scale partial differential equations (PDEs), if the fast scales are two orders of magnitude or more faster than the dynamical scales of interest, implicit methods usually pay off; if the ratio of scales is 10 or less, they likely do not. It may be that many ranges of time scales required by nanoscientists (e.g., integration intervals lasting billions of quantum mechanical oscillation periods) can be condensed by implicit algorithmics without loss of any relevant physics, following some statistical or equilibrium assumptions.
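The stability contrast that motivates implicit integration can be seen on the simplest possible model problem. The sketch below is illustrative only (the decay rate and step size are invented): a single fast mode y' = -λy is integrated at a timestep chosen for slow dynamics, so that Δt·λ = 10.

```python
# Explicit Euler is unstable once dt > 2/lam; implicit Euler is stable
# for any step size and damps the fast mode, as the true solution does.
lam = 1000.0     # fast time scale (illustrative value)
dt = 0.01        # coarse step resolving only slow dynamics: dt * lam = 10
n_steps = 100

y_explicit = 1.0
y_implicit = 1.0
for _ in range(n_steps):
    y_explicit = y_explicit + dt * (-lam * y_explicit)  # growth factor |1 - dt*lam| = 9
    y_implicit = y_implicit / (1.0 + dt * lam)          # decay factor 1/11 < 1

assert abs(y_explicit) > 1e6    # explicit iterate blows up
assert 0.0 < y_implicit < 1e-3  # implicit iterate decays to near zero
```

The implicit update here costs a trivial division; in a large PDE system it becomes a nonlinear or linear solve per step, which is exactly the trade-off the rule of thumb above weighs.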

Retreating further into simple, but often adequate, scale-bridging scenarios, there are many examples of one-way cascades of models, whereby a fine-scale problem is solved off-line to develop constitutive laws or material properties for a coarser-scale problem.

It is increasingly practical, for instance, to simulate properties of novel alloys or explosives—materials with simple structure compared to fiber composites or nanotube networks—and then to employ these properties as proxies for experimental data in continuum simulations. Sometimes the properties are quite ad hoc, for instance, eddy models in turbulence, which account for mixing at scales larger than molecular by an enhancement of the molecular viscosity in turbulent regimes. Of course, the turbulence simulation community, a time-honored specialty with a modeling history far too complex to summarize in the scope of this section, copes with multiple scales through as many different means as there are modeling groups. Large-eddy simulation, a fairly rigorous approximation, is emerging as very popular, and its practitioners may have fresh ideas when faced with nanoscale phenomena. Source terms in turbulent reacting flows attempt to account for additional reaction intensity along fractally wrinkled flame fronts by parameterizing a smoother, resolvable flame front area. Both deterministic and probabilistic tools have proved fruitful in tuning to experiments and linking up with the fundamental conservation principles. This is a very application-specific technology, as will be, undoubtedly, many of those appropriate for bridging nanoscales.

Although it is not a modeling technique but a solution technique, one should also mention multigrid as a paradigm. Multigrid has already found success in molecular dynamics and in electronic structure algorithms and is likely to be embraced across a wider range of nanoscale applications. Multigrid has the beautiful property of conquering numerical ill-conditioning (essentially caused by the presence of too many scales in view of the numerics all at once) by treating each scale on its own terms. This improves the linear conditioning and nonlinear robustness of the solution method, and often leads to physical insight as well. For instance, problems in geometric design are often best approached by resolving large-wavelength geometric parameters on coarse grids and small-wavelength geometric parameters on finer ones. Multigrid depends on a (usually purely mathematical) decomposition into ranges of scales, and on the ability to construct intergrid transfer operators and coarsened versions of the fundamental fine-scale operator on coarser scales.
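The ingredients just listed—a smoother that treats fine scales, intergrid transfer operators, and a coarsened operator—can be assembled into a minimal two-grid cycle. The following is an illustrative sketch for the 1-D Poisson problem, with grid sizes, sweep counts, and the damping parameter all chosen for the example, not taken from the report.

```python
import numpy as np

def poisson_1d(n):
    """Second-difference operator on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing: damps the fine-scale error components."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, n):
    """One cycle: smooth, correct on the coarse grid, smooth again."""
    nc = (n - 1) // 2
    P = np.zeros((n, nc))               # linear interpolation (prolongation)
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T                       # full-weighting restriction
    x = jacobi(A, b, x)
    r = b - A @ x
    Ac = R @ A @ P                      # coarsened version of the operator
    x = x + P @ np.linalg.solve(Ac, R @ r)
    return jacobi(A, b, x)

n = 63
A, b = poisson_1d(n), np.ones(n)
x = np.zeros(n)
for _ in range(12):
    x = two_grid(A, b, x, n)
err = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
assert err < 1e-6   # residual shrinks by a roughly constant factor per cycle
```

Recursing on the coarse solve (instead of solving it directly) gives the full multigrid V-cycle and its characteristic optimal, resolution-independent convergence.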

Another mathematical paradigm that may prove to have a resonance with the nanoscale community is the method known variously as proper orthogonal decomposition or principal component analysis. In both linear and nonlinear problems, it is often possible to represent the underlying dynamics of interest with a relatively small number of dynamical coefficients representing the weights of complex, system-dependent modes. The challenge is to discover the modes of interest, which are almost never as numerous as the characteristic basis functions of the finest grid. The finely resolved setting may be required to determine the principal modes, but once they are found, production computing can be done in a reduced basis.
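The mode-discovery step above is, computationally, a singular value decomposition of a matrix of solution snapshots. A hedged sketch with synthetic data (the dimensions, snapshot count, and hidden-mode construction are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snap, k = 500, 60, 3   # fine resolution, snapshots, dominant modes

# Synthetic snapshots: dynamics governed by k hidden modes plus tiny noise.
modes = np.linalg.qr(rng.standard_normal((n, k)))[0]
weights = rng.standard_normal((k, n_snap))
X = modes @ weights + 1e-6 * rng.standard_normal((n, n_snap))

# Proper orthogonal decomposition = SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
basis = U[:, :k]            # reduced basis for production computing

# Fraction of snapshot "energy" captured by the leading k modes.
captured = (s[:k] ** 2).sum() / (s ** 2).sum()
assert captured > 0.999     # k coefficients per state suffice

# Any snapshot is reproduced by k coefficients instead of n values.
coeffs = basis.T @ X[:, 0]
assert np.linalg.norm(basis @ coeffs - X[:, 0]) < 1e-3 * np.linalg.norm(X[:, 0])
```

In a real application the snapshots come from a few expensive fine-grid runs, and the reduced coefficients become the unknowns of subsequent cheap simulations.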

Some degree of stochasticity is intrinsic at atomistic scales; hence, a Monte Carlo simulation will yield fluctuations with statistics that must be carried to larger length scales. The numerical analysis of stochastic partial differential equations could enter here, the goal being a coarse solution with the right statistics. The theory of large deviations for stochastic PDEs allows the design of sampling methods for so-called "rare events." The long time integrations that are burdensome to the accuracy of deterministic models may actually be a benefit to the accuracy of statistical models.
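The rare-event sampling idea can be illustrated on the simplest possible case: estimating a far-tail probability of a Gaussian by drawing from a shifted distribution and reweighting. This sketch is illustrative (threshold and sample count are invented); the large-deviation theory mentioned above is what tells one, in harder problems, where to shift.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
n = 100_000
a = 4.0                                   # rare-event threshold for N(0,1)
exact = 0.5 * (1 - erf(a / sqrt(2)))      # P(Z > 4), about 3.2e-5

# Naive Monte Carlo: almost no samples land in the tail.
naive = (rng.standard_normal(n) > a).mean()

# Importance sampling: draw from N(a, 1) and reweight each sample by the
# likelihood ratio p(x)/q(x) = exp(-a*x + a^2/2).
x = rng.standard_normal(n) + a
weights = np.exp(-a * x + a * a / 2.0)
est = (weights * (x > a)).mean()

assert abs(est - exact) / exact < 0.05    # few-percent accuracy
assert naive * n < 20                     # naive MC saw only a handful of hits
```

The same reweighting logic underlies rare-event methods for stochastic dynamics, where "shifting the distribution" means biasing entire trajectories toward the event of interest.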

It is axiomatic to a mathematician that there are better ways than brute-force propagation of fine scales to attack a multiscale problem. A mathematician would argue that anyone requiring a billion degrees of freedom for a week of wall-clock execution time is running the wrong algorithm. Indeed, the nanoscale community must learn to compute "smarter" and not just "harder." Part of this will come from optimal algorithmics (e.g., linear scaling in DFT), but a larger and as yet unknown portion of it will come from intrinsically multiscale models, created collaboratively with mathematical scientists. It is easy to construct scenarios in which such modeling is attractive, and probably necessary. We mention here an example that appears to demand vast bridging of scales for unified scientific understanding: the operation of a biosensor to detect trace quantities of a biologically active molecule.

• At the smallest scale, a molecule can be sensed by the luminescence characteristic of an optical energy gap. Quantum Monte Carlo could be used for a high-accuracy comparison with experiment.

• At the next scale, where one needs to gain a quantitative understanding of surface structure and interface bonding, DFT and first-principles molecular dynamics could be employed.

• At the next higher scale, one would need to gain a quantitative understanding of the atomistic coupling of the molecule to be detected with the solvent that transports it to the sensor. This could be modeled with classical molecular dynamics at finite temperature and classical Monte Carlo.

• Finally, at the macroscale, the solvent itself could be characterized with a continuum model, based on finite elements.

Each phenomenon depends upon the phenomenon at the scale above and the scale below, which means that there is two-way coupling between mathematically different models at different scales.

The Holy Grail of nanoscale modeling is a seamless bridging of scales with tight bidirectional coupling between every scale-adjacent pair of a sequence of models, from nano through micro and meso to macro. This bridging must necessarily involve scale-appropriate algorithmics at each scale, with well-understood error dependence on the computational resolution in each model, and with error control for information transfer between scales. For optimality, algorithms should be prudent in their use of fine-scale information—using the coarsest scale suitable for obtaining the requisite error in each result.

B. Fast Algorithms

Due to the inherent complexity of nanostructures, theoretical models will generally need to be instantiated as computational simulations. Many such simulations will involve intensive computation, requiring sophisticated algorithms and software running on state-of-the-art parallel computers. Algorithmic advances can be a critical enabling component for progress in nanoscience modeling. New and more efficient algorithms advance scientific frontiers in several ways:

• Larger simulations can be performed. For example, more atoms or basis functions can be included in a model, increasing fidelity and accuracy; or dynamical models of the same size can be run for longer physical times. Enabling larger simulations is particularly important in the nanosciences, allowing methods at one time or length scale to extend into the range of applicability of other, more coarse-grained methodologies in order to validate them or pass information to them.

• Physical models with more detailed science can be simulated, increasing accuracy and reliability, and hence confidence that simulation results represent reality.

• With faster kernel computations, new types of analysis become tractable. For instance, fast linear solvers enable the solution of nonlinear systems, and efficient modeling of forward problems allows for the solution of inverse problems. Faster kernel computations also enable enough simulations to be performed to improve statistical sampling.

The potential benefits of new modeling capabilities in the nanosciences require continuous advances in the key algorithms that underlie nanoscale computations. This progress may come about in any number of ways. Many of the key algorithms (fast transforms, eigensolvers, etc.) have applications in other settings, but their impact in the nanosciences alone could justify continued development. Entirely new approaches for some of these problems are difficult to anticipate but could have tremendous impact not only on the feasibility of the computations, but also on what will be learned from them. For example, wavelets or other new bases might be able to reduce the complexity of some nanoscience calculations and may expose features of the science in new ways. An alternative approach to making progress is to specialize a general algorithm to the precise needs of an application. This is the most common way to advance the application of algorithms and often allows for dramatic improvements in performance. Recent progress in linear scaling methods for electronic structure provides one such example.

Developments in electronic structure calculations provide a particularly telling illustration of the enabling power of algorithmic advances. First-principles DFT methods for this problem traditionally scale as the cube of the system size, N. This rapid growth with system size limits applications of these powerful techniques to systems containing a few hundred atoms on traditional computers. More efficient cubic algorithms and parallel implementations have allowed larger systems to be simulated, but substantial progress is still limited by the N^3 complexity. Recent algorithmic progress is removing this barrier. For system sizes of up to a thousand electrons, ab initio plane-wave electronic structure codes using fast Fourier transforms now effectively scale as N^2 log N. Developments using local representations are close to achieving linear scaling for very large systems, at least in some application domains.

A second important example comes from molecular dynamics simulations, where advances in fast multipole methods have reduced the time for computing Coulombic or other long-range forces from N^2 to N log N or N. Recent innovations in time integration for molecular dynamics are enabling simulations for much longer physical times. Continued developments and advances will be needed to enable the necessary simulations for complex nanoscale dynamics.

These types of advances generate new challenges. The asymptotically more efficient algorithms are generally more complex than their antecedents, and in order to maintain high fidelity and sufficient accuracy, some may only be more efficient for very large problems. High-quality implementations are a significant undertaking, and parallel versions will likely require additional algorithmic advances. Modern software practices and open-source parallel libraries can have an enormous impact in easing the development of new functionality and then delivering that new capability to scientists in a timely and efficient manner. More generally, the asymptotic complexity of an algorithm is only one of several aspects relevant to its potential impact in nanoscience. Algorithmic researchers must also consider ease of implementation, parallelizability, memory locality, and robustness.

In an area as diverse as nanoscience, an enormous range of computational techniques and algorithms have a role to play, and the workshop was able to touch on only a subset of them. The following areas were identified as particularly important. Many of them are generally applicable to a wide range of applications; however, they will be particularly important in enabling discoveries and quantitative understanding in the nanosciences.

Linear Algebra in Electronic Structure Calculations
Simulation methods based on ab initio electronic structure will unquestionably be critical in future investigations of the properties of matter at the nanoscale. The capability of such methods to predict structural and electronic properties without prior empirical knowledge or experimental input makes them very attractive. It is therefore an important priority to accelerate the development and deployment of the fundamental algorithms used in large-scale electronic structure calculations.

Electronic structure calculations based on DFT and its extension to first-principles molecular dynamics (FPMD) have traditionally benefited greatly from advances in fast algorithms. An example is the fast Fourier transform, which essentially lies at the root of the efficiency of the plane-wave electronic structure method, and thus of the feasibility of plane-wave-based FPMD. Depending on the numerical approach taken, the solution of the Kohn-Sham equations is either cast as an eigenvalue problem—in which case the nonlinearity due to the Coulomb and exchange-correlation interactions is included as an outer loop—or as a constrained optimization problem—in which case the nonlinear Kohn-Sham energy functional is directly minimized with respect to all degrees of freedom describing the electronic wavefunctions, with the constraint that the one-particle wavefunctions must remain orthogonal.

Thus the development of efficient eigenvalue solvers on one hand, and of constrained optimization algorithms on the other, directly impacts the efficiency of electronic structure codes. Optimization is discussed elsewhere in this report, but the need for fast eigensolvers is pervasive. Some applications require the k lowest eigenpairs, where k is large—perhaps in the thousands. But there is also a need for codes that compute "interior" eigenvalues (eigenvalues inside the spectrum), typically near the Fermi level. Also, in time-dependent DFT (TDDFT), a large dense matrix is sometimes diagonalized (in which case all eigenvalues and eigenvectors are computed).

Electronic structure calculations will also likely benefit from any advances that enhance our capability to solve PDEs numerically. In particular, the development of grid-based methods, adaptive mesh refinement approaches, grid partitioning methods, nonlinear multigrid solvers, domain decomposition methods, etc. may play an important role in future scalable Kohn-Sham solvers. Specifically, the version of TDDFT that uses ordinary differential equation (ODE) schemes requires the solution of hundreds or thousands (one for each ground state) of systems of ODEs (actually the time-dependent Schrödinger equation), which are coupled by the potential. Potentially enormous gains are possible if specialized methods are developed for solving such systems.

A problem shared by all electronic structure methods is that of efficiently solving the Poisson equation. Algorithms for this problem are widespread and well developed, although further progress in the complex geometries relevant to nanoscience applications would be welcome. The community of theorists and modelers involved in nanoscience would benefit from better (and publicly available) software for this problem.
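In the simplest (periodic) setting, the Poisson solve is a spectral one-liner, which is why FFT-based plane-wave codes get it nearly for free; the hard cases are the complex geometries mentioned above. A hedged 1-D sketch (the domain, density, and grid size are invented for illustration):

```python
import numpy as np

# Periodic 1-D Poisson solve via FFT: -phi'' = rho on [0, 2*pi),
# an O(N log N) operation on a uniform grid.
n = 256
x = np.arange(n) * (2 * np.pi / n)
rho = np.cos(3 * x)                    # smooth periodic charge density

k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
rho_hat = np.fft.fft(rho)
phi_hat = np.zeros_like(rho_hat)
nonzero = k != 0                       # zero mode fixed by neutrality/gauge
phi_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2  # divide by k^2
phi = np.fft.ifft(phi_hat).real

# For rho = cos(3x), the exact periodic solution is phi = cos(3x) / 9.
assert np.allclose(phi, np.cos(3 * x) / 9, atol=1e-10)
```

Real-space methods (finite differences or elements with multigrid) trade this spectral convenience for the ability to handle non-periodic boundaries and adaptive resolution.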

Linear scaling methods and related techniques avoid the eigenvalue problem altogether. Although this allows for a great reduction in asymptotic complexity, current methods can lack accuracy and/or be narrowly applicable. A wealth of algorithms have been proposed and explored, and the opportunities for fruitful collaborative interactions are abundant.

Monte Carlo Techniques
Monte Carlo methods are a class of stochastic optimization techniques that are particularly faithful to the transition probabilities inherent in statistical mechanics or quantum physics. This faithfulness to the underlying physics allows them to be applied to many problems, not just those commonly considered as optimization problems. Improvements in the applicability and (especially) the efficiency of Monte Carlo techniques could have a big impact on several aspects of nanoscience modeling.
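The "faithfulness to transition probabilities" is concrete in the Metropolis acceptance rule, which makes a random walk sample the Boltzmann distribution. A minimal illustrative sketch (the double-well energy, temperature, and step size are invented for the example):

```python
import numpy as np

# Metropolis sampling of p(x) ~ exp(-E(x)/kT) for a double-well energy.
rng = np.random.default_rng(4)
kT, step = 0.5, 0.8

def energy(x):
    return (x * x - 1.0) ** 2          # minima at x = -1 and x = +1

x, chain = 0.0, []
for _ in range(200_000):
    trial = x + step * rng.uniform(-1, 1)
    # Accept with probability min(1, exp(-dE/kT)): detailed balance.
    if rng.random() < np.exp(-(energy(trial) - energy(x)) / kT):
        x = trial
    chain.append(x)

samples = np.asarray(chain[10_000:])   # discard burn-in
# The chain spends its time symmetrically in the two wells near x = +-1.
assert abs(np.mean(samples)) < 0.15
assert 0.7 < np.mean(np.abs(samples)) < 1.2
```

Note that acceptance depends only on energy differences, so no normalizing constant (partition function) is ever needed—the property that makes Metropolis sampling feasible for huge statistical-mechanical systems.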

Quantum Monte Carlo is currently the most accurate method for electronic structure that can be extended to systems in the nanoscience range. It serves as a benchmark and a source of insight, and it is capable of highly accurate numerical solution of the Schrödinger equation. This progress has required resolution of the "fermion sign problem," a challenge to computational mathematics. That resolution has recently been made in principle, but substantially more development is needed to make it practical.

Kinetic Monte Carlo (KMC) methods are an important class of techniques that treat a range of atomistic and aggregate particle phenomena above that of electronic structure. Essentially, they use Metropolis-Hastings stochastic dynamics, ascribing physical time intervals to the moves. A fundamental question that needs mathematical attention is the relationship of the time scale to the form of the moves used. If this can be solved, then KMC will be both more reliable and much more widely applicable.
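The "physical time interval per move" idea can be shown on the smallest possible example: a two-state system with invented rates, where each executed event advances the clock by an exponentially distributed waiting time. This is an illustrative sketch, not a method from the report.

```python
import numpy as np

# Minimal kinetic Monte Carlo: states A <-> B with rates k_ab, k_ba.
rng = np.random.default_rng(3)
k_ab, k_ba = 2.0, 1.0                  # invented transition rates
state, t = 0, 0.0                      # start in state A (index 0)
time_in = np.zeros(2)                  # physical time spent in each state

for _ in range(200_000):
    rate = k_ab if state == 0 else k_ba
    dt = -np.log(rng.random()) / rate  # exponential waiting time for the event
    time_in[state] += dt
    t += dt
    state = 1 - state                  # execute the move

# Occupancies converge to the stationary values k_ba/(k_ab + k_ba) etc.
p_a = time_in[0] / t
assert abs(p_a - 1.0 / 3.0) < 0.01
```

With many competing event channels, the same recipe applies with the total rate in the waiting time and a rate-weighted choice of which event fires; the open mathematical question noted above is how the assigned time scale depends on the chosen move set.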

Additional important technical issues connected with KMC include the need for efficient, scalably parallel approaches, multiscale methods, and efficient quantitative methods for estimating the probabilities and rates of rare events. In many of the applications, the a priori probability of a rate-controlling event is very small; appropriate importance sampling or other techniques need to be developed to make such studies practical.

Monte Carlo methods are also widely used in statistical mechanical studies of nanosystems. All of these variants of Monte Carlo can be made more powerful and useful by incorporating developments in computational geometry and by a theory of optimal moves—that is, by mathematical investigation of the dependence of the spectrum of the Markov transition matrix upon the form of the trial moves in Metropolis-Hastings dynamics. In addition, since many nanosystems are driven more by entropy than by energy, methods that are entropy- or free-energy-friendly are badly needed: current methodologies for free energy are usually rather awkward.

Equally important is the promise of a recent development in an O(N) Monte Carlo method, which merits substantial additional development. Other deep technical problems include the need for developments in perturbation methods and techniques for treating relativistic effects.

Finally, continued advances and development in embedding—that is, in using a hierarchy of methods of varying accuracy and computational complexity to treat larger systems—are urgently needed.

Data Exploration and Visualization
Large simulations can generate copious quantities of data. A physical scientist is often interested in exploring these large datasets for interesting features. This process is difficult to automate, since often the most important features are unexpected, and they tend to be very application specific. Such explorations are valuable because they generally reveal deficiencies in the simulation itself or unanticipated and interesting aspects of the physics. This kind of data-driven exploration requires rich and easy-to-use environments for visualization and data interactivity. Good tools will hide the complexity of the data management from the user and will respond in real time to user interactivity, even though the data analysis may be occurring on a distant parallel machine.

Although there is considerable research activity in this general area, there will be significant benefit from customization to effectively impact the research of specific nanoscientists. At the other extreme, strategies for deriving maximal information from a limited data environment are needed, since experiments may be difficult or expensive to conduct. This will be critical, for example, in getting a handle on the management of uncertainty. In both of these contexts, strategies that include experiments for the purpose of supporting the theoretical objective are needed, in contrast, say, to modeling for the purpose of explaining experiments.

Computational Geometry
Considerable geometric complexity is an inherent aspect of many objects from the atomistic to the nanoscale. Examples include the potential energy isosurfaces of a polymer and the structure of a self-assembled nanocomposite. Efficient methodologies for representing and manipulating such geometries can have a significant impact on the overall performance of some computations. Computational geometry is a mature area of algorithmic computer science, but the specific geometries arising in nanoscience would benefit from specialized algorithms.

C. Optimization and Predictability

In virtually every branch of science, there is a need for calculations to estimate the parameters of a system, to find the extremum of an objective function such as the minimum energy or maximum entropy states of a system, and to design and control systems. Since mathematical models are often used to guide experimentation and to predict or adapt to behaviors, there is an equally strong need to understand the errors in these models and to be able to make mathematically sound and precise statements regarding the accuracy and precision of the predictions.

Since all of these needs pervade science, it is natural to ask in what way nanoscience is special. Why, for example, can existing techniques not be used for the models in nanoscience? The short answer to this question is implicit in the earlier sections of this report: the degree of detail and dynamic complexity, including wide-ranging, non-separable scales in both space and time, make the formulation and solution of problems in this area fundamentally more challenging than most—and arguably all—problems successfully tackled to date. The payoff of developing a new set of computational models and tools would be better understanding, prediction, design, optimization, and control of complex nanosystems, with confidence in the reliability of the results. Successes in this field would almost certainly have impact on other fields that share the complexity of a large range of complex behavior on non-separable temporal and spatial scales.

Optimization
Many of the problems mentioned above rely on formulating and solving optimization problems. For example, in order to gain a quantitative understanding of nanosystems at a fundamental level, it is essential to have the capability to calculate the ground state of the system. This can be formulated as a minimum energy problem whose solution gives the configuration of the particles of the system at the lowest energy, given an energy functional. Complicated nanosystems of interest can have millions to billions of particles, resulting in huge optimization problems characterized by an equally huge number of local minima with energy levels close to the ground state. There is no hope of solving this problem with brute-force methods. To make any headway on such problems requires careful formulation of the objective function and the constraints. For example, progress on the protein-folding problem has been achieved by understanding that certain sequences of amino acids in the unfolded molecule almost always end up in a standard form, e.g., an alpha helix. By inducing the optimization path towards this conformation, better results have been attained. Constraints, based on additional knowledge, can also be formulated to further reduce the search space.
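The trapping of local optimizers in nearby basins, and the value of restarts informed by extra knowledge, can be illustrated on a toy one-dimensional energy landscape. Everything here is invented for the example (the energy function, learning rate, and restart grid); real ground-state searches operate on energy functionals over millions of coordinates.

```python
import numpy as np

# Toy landscape with many local minima: E(x) = x^2/10 + sin(5x).
def E(x):
    return x * x / 10.0 + np.sin(5.0 * x)

def dE(x):
    return x / 5.0 + 5.0 * np.cos(5.0 * x)

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent: converges to the nearest local minimum."""
    for _ in range(steps):
        x -= lr * dE(x)
    return x

# Restarts spread across the domain (a crude stand-in for domain knowledge).
starts = np.linspace(-4, 4, 17)
minima = np.array([descend(x0) for x0 in starts])
energies = E(minima)

assert np.ptp(minima) > 1.0       # different starts land in different basins
assert energies.min() < -0.97     # only the best restart nears the ground state
```

The combinatorial explosion of basins with system size is exactly why brute force fails and why physically informed steering, as in the protein-folding example above, is so valuable.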


Optimization methods that exploit physical insight could have a dramatic impact on efficiency. Monte Carlo methods are discussed elsewhere in this report, but other stochastic or deterministic optimization methods can also be accelerated by domain knowledge. For example, when trying to simulate a novel material, knowledge about the structural properties and energetics of related materials could be used to preferentially steer the computation towards promising regions of the state space. Such methods are likely to be much more efficient than general optimization strategies, but quite limited in their applicability.

Self-assembly is a central feature of nanoscience. Understanding, predicting, and controlling this process is crucial for the design and manufacture of viable nanosystems. Clearly, the subsystems involved in this process are assembling themselves according to some minimum energy principle. Once an understanding of the underlying physics is attained, optimization problems can be formulated to predict the final configurations. Since these systems are also huge and likely to have many local minima, careful development of the models, the constraints, and the algorithms will also be required here.

Given the necessity of creating models that incorporate many scales, it has been suggested that a hierarchy of models be created to span this range, i.e., a connected set of models that capture everything from quantum effects through micromorphic effects. In this case, the parameters at a given level should be attainable from the next lower level using parameter-determining techniques. Optimization formulations for such parameter estimation problems are ubiquitous, but constructing the correct error models and the associated optimization problem is not obvious. Simply using least squares may not be appropriate; more general maximum-likelihood estimators will likely be required.
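The distinction between least squares and a more general maximum-likelihood estimator shows up whenever the error model is not Gaussian. A hedged sketch with synthetic data (the model, noise law, and grid search are invented for illustration): for Laplace (heavy-tailed) noise, the maximum-likelihood fit minimizes absolute deviations rather than squared ones.

```python
import numpy as np

rng = np.random.default_rng(5)
true_a = 3.0
x = np.linspace(0, 1, 200)
# Heavy-tailed Laplace measurement noise: least squares is no longer
# the maximum-likelihood estimator for the slope.
y = true_a * x + rng.laplace(scale=0.5, size=x.size)

a_ls = (x @ y) / (x @ x)               # least-squares slope (Gaussian ML)

# ML under Laplace noise = minimize sum |y - a*x|; crude grid search.
grid = np.linspace(2.0, 4.0, 2001)
loss = [np.abs(y - a * x).sum() for a in grid]
a_ml = grid[int(np.argmin(loss))]

# Both estimators recover the slope here, but they answer different
# error models; with outliers the absolute-deviation fit is more robust.
assert abs(a_ls - true_a) < 0.5
assert abs(a_ml - true_a) < 0.5
```

Choosing between such estimators is precisely the "constructing the correct error model" problem: the statistics of the fine-scale simulation errors dictate which objective function is appropriate at the next level up.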

As alluded to above, the design and control of nanosystems is vital to the eventual manufacture of products. The problem here is to optimize the design to achieve a desired characteristic, or to control the process so that a given state is reached. In these cases, the constraints of the optimization problem contain the hierarchy of models describing these systems; in particular, they contain coupled systems of partial differential equations (PDEs). There is considerable interest in such problems in other fields, e.g., aerodynamics, and some efficient algorithms have been created, but the number of levels involved in nanoscience is far higher than in other fields.

Progress has been made by recognizing that these coupled systems of PDEs do not have to be satisfied at each step of the optimization process, but only in the limit. This has yielded optimized solutions in a modest multiple of the time required for a single nonlinear solve. Such approaches, however, require that the formulation of the optimization problem incorporate some or all of the state variables in addition to the control variables. Effective optimization methods in these cases must have more control over the PDE software than is typically the case; e.g., gradients and/or adjoints must be computed along with other quantities. Thus, formulating the models, the algorithms, and the software with the ultimate goal of design and optimization in mind is crucial.
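A minimal sketch of the gradient/adjoint machinery mentioned above: for a discretized 1-D Poisson problem, the gradient of a least-squares misfit with respect to the source term comes from one extra (adjoint) solve per iteration, rather than from finite-differencing every control variable. The grid size, step length, and target profile are illustrative choices, not a recipe from the workshop.

```python
# Control problem: choose f so that the solution of -u'' = f on (0,1),
# u(0) = u(1) = 0, matches a target state. n interior points, h = 1/(n+1).
n = 19
h = 1.0 / (n + 1)

def solve_tridiag(rhs):
    """Thomas algorithm for the symmetric 1-D discrete Laplacian (2,-1)/h^2."""
    a = -1.0 / h**2          # sub- and super-diagonal entries
    b = 2.0 / h**2           # diagonal entries
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = a / b
    dp[0] = rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Target state u*(x) = x(1-x), whose exact discrete source is f = 2.
xs = [(i + 1) * h for i in range(n)]
u_target = [xi * (1.0 - xi) for xi in xs]

f = [0.0] * n                           # initial guess for the control
for _ in range(200):
    u = solve_tridiag(f)                # state solve
    resid = [u[i] - u_target[i] for i in range(n)]
    lam = solve_tridiag(resid)          # adjoint solve (matrix is symmetric)
    f = [f[i] - 150.0 * lam[i] for i in range(n)]   # gradient step: dJ/df = lam

u = solve_tridiag(f)
err = max(abs(u[i] - u_target[i]) for i in range(n))
print(round(err, 5))
```

Note that the optimizer needs access to the solver internals (here, the ability to reuse the same tridiagonal solve for the adjoint), which is exactly the "more control over the PDE software" the text calls for.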

One can easily envision optimization problems arising in many other aspects of nanoscience. For example, methods will be needed for shape optimization to determine the optimal geometry of both tools and products, for non-smooth optimization to handle derivative discontinuities between the models at adjoining levels, and for discrete optimization when particles are restricted to discrete locations. All of the developments in these areas, as with those mentioned above, should be pursued collaboratively between optimizers and nanoscientists.

Predictability

Although part of the predictability problem can be addressed using optimization techniques (e.g., maximum likelihood), other statistical methodologies will be necessary to develop a final statement of confidence in the answers. First, however, it is important to recognize all of the possible sources of error in the process. This starts with the recognition that none of our models is exact. And, since most of the models are based on PDEs, these equations cannot be solved exactly: there will be a combination of discretization errors in forming the algebraic systems that the computer actually sees, and roundoff errors in the computation itself. There are errors associated with the physical models, and errors related to the parameters and initial conditions that define those models. Some of these errors are observational, related to field or laboratory data, and some are numerical or modeling errors, related to simulation data. These two types of errors are actually of a similar form, as laboratory errors are often the result of a simulation needed to interpret the experiment. Furthermore, errors at one scale will propagate to other scales in ways that will be difficult to track and manage.

It is important to develop methods for determining comprehensive and inclusive bounds on the final errors. Such methods include sampling technologies for developing sensitivities to parameters, analytic techniques for the propagation of errors, and probability models for numerical or modeling errors in simulations. The predictions of science are not a simple forward map of an input to an output in a simulation. Parameters and input values needed in the forward simulation are determined as an inverse problem through statistical inference based on measurements of outputs. Thus propagation of errors and uncertainty is mapped both ways: from input to output in direct simulation, and from output to input in parameter determination. Statistical inference is the tool that connects the two. This inference can be conveniently formulated in a Bayesian framework, so that the inverse problem acquires a probabilistic expression, and disparate data collected from different sources can then be used to reduce uncertainty and to aid in the predictions. The ultimate goal of such an approach is the determination of error bars or confidence intervals associated with a prediction, based on all sources (data and simulation) on which the prediction rests. A necessary intermediate goal is the association of error bars or other measures of quantified uncertainty with simulations. In analogy to the error bars that indicate the precision of an experimental measurement, error bars should be used to quantify the precision and accuracy of a simulation.
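A toy version of the Bayesian formulation described above: infer a single model parameter from noisy synthetic observations by evaluating the posterior on a grid, then report a posterior mean and a central 95% credible interval as the "error bars" on the prediction. The linear forward model, the noise level, and the prior range are all hypothetical.

```python
import math
import random

random.seed(3)

# Hypothetical forward model y = theta * x; we observe noisy outputs and
# recover theta by Bayesian inversion on a parameter grid.
theta_true, sigma = 2.0, 0.5
xs = [0.5 * i for i in range(1, 11)]
ys = [theta_true * x + random.gauss(0.0, sigma) for x in xs]

grid = [1.0 + 0.002 * i for i in range(1001)]     # uniform prior on [1, 3]

def log_likelihood(theta):
    # Gaussian measurement-error model with known sigma.
    return sum(-0.5 * ((y - theta * x) / sigma) ** 2 for x, y in zip(xs, ys))

logp = [log_likelihood(t) for t in grid]
m = max(logp)                                     # subtract max for stability
w = [math.exp(lp - m) for lp in logp]
z = sum(w)
post = [wi / z for wi in w]                       # normalized posterior

# Posterior mean and a central 95% credible interval ("error bars").
mean = sum(t * p for t, p in zip(grid, post))
cdf, lo, hi = 0.0, None, None
for t, p in zip(grid, post):
    cdf += p
    if lo is None and cdf >= 0.025:
        lo = t
    if hi is None and cdf >= 0.975:
        hi = t
print(round(mean, 2), round(lo, 2), round(hi, 2))
```

The same framework extends to many parameters and to mixed data sources; the grid evaluation used here would then be replaced by sampling (e.g., Markov chain Monte Carlo), but the product — a quantified interval attached to the prediction — is the same.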

Software

Elsewhere in this report, the issue of software has been raised. Well-designed, object-oriented, open-source codes that can be used on a variety of platforms are essential to the overall success of the nanoscale initiative. There are simply more modules necessary for simulation and analysis than can be developed in any one place. Modular optimization and statistical algorithms and software need to be developed in such a framework to provide tools for scientific understanding, leading to advanced engineering and analysis tools with guaranteed accuracy for design and manufacture.


Appendix A: Workshop Agenda

Friday, May 10, 2002

Speaker Title/Subject

8:00 a.m. Walt Stevens, DOE, Basic Energy Sciences, and Walt Polansky, DOE, Advanced Scientific Computing Research

“The DOE Perspective on Theory and Modeling in the Nanoscience Initiative”

8:30 C. William McCurdy, LBNL Goals and Purpose of This Workshop

Session Chair: Bill McCurdy

9:00 Stanley Williams, Hewlett-Packard Laboratories

“Big Theory as the Engine of Invention for Nanotechnology: Losing the Born-Oppenheimer Approximation”

9:45 Uzi Landman, Georgia Institute of Technology

“Small Is Different: Computational Microscopy of Nanosystems”

10:30 Break

Session Chair: Ellen Stechel

11:00 Steven Louie, University of California, Berkeley

“Theory and Computation of Electronic, Transport and Optical Properties on the Nanoscale”

11:45 Dion Vlachos, University of Delaware

“Bridging Length and Time Scales in Materials Modeling”

12:30 p.m. Lunch – Working, Catered in Conference Room

Session Chair: David Keyes

1:45 Phil Colella, LBNL

“Computational Mathematics and Computational Science: Challenges, Successes, and Opportunities”

2:30 Jerry Bernholc, North Carolina State University

“Quantum Mechanics on the Nanoscale: from Electronic Structure to Virtual Materials”

3:15 Break

Session Chair: Peter Cummings

3:45 Sharon Glotzer, University of Michigan

“Computational Nanoscience and Soft Matter”

4:30 Alex Zunger, National Renewable Energy Laboratory

“Progress and Challenges in Theoretical Understanding of Semiconductor Quantum Dots”

5:15 Adjourn

7:00 Conference Dinner – Dinner Speaker: Bernd Hamann, University of California, Davis

“Massive Scientific Data Sets: Issues and Approaches Concerning Their Representation and Exploration”


Saturday, May 11, 2002

Panel Subject

8:30 a.m. Paul Boggs, SNL/Livermore
Jim Glimm, SUNY Stony Brook
Malvin Kalos, LLNL
George Papanicolaou, Stanford University
Amos Ron, University of Wisconsin-Madison
Yousef Saad, University of Minnesota

The Role of Applied Mathematics and Computer Science in the Nanoscience Initiative

Panel Moderator: Paul Messina, ANL

Breakout Sessions Chair

10:30 Well Characterized Nano Building Blocks – James Chelikowsky, University of Minnesota
Complex Nanostructures and Interfaces – Sharon Glotzer, University of Michigan
Dynamics, Assembly and Growth of Nanostructures – Peter Cummings, University of Tennessee

12:00 p.m. Lunch – Working, Catered in Conference Room

1:00 Reports from Breakout Sessions

Breakout Sessions Chair

1:30 Crossing Time and Length Scales – George Papanicolaou, Stanford University
Fast Algorithms – Malvin Kalos, LLNL
Optimization and Predictability – Paul Boggs, SNL/Livermore

3:00 Reports from Breakout Sessions

3:15 Closing Discussion of the Report of the Workshop

4:30 Workshop Adjourns


Appendix B: Workshop Participants

Jerry Bernholc
Department of Physics, 406A Cox Hall
North Carolina State University
Raleigh, NC
[email protected]

John Board
Duke University
Electrical and Computer Engineering
178 Engineering Building, Box 90291
Durham, North Carolina
[email protected]

Paul Boggs
Computational Sciences and Mathematics Research Department
Sandia National Laboratories, MS 9217
Livermore, CA
[email protected]

Russel Caflisch
University of California, Los Angeles
Math, Box 7619D MSD, 155505
Los Angeles, CA
[email protected]

James Chelikowsky
University of Minnesota
Chemical Engineering and Material Science
Amundson Hall, Room 283
421 Washington Avenue SE
Minneapolis, MN
[email protected]

Daryl Chrzan
University of California, Berkeley
Computational Materials Science
465 Evans Hall
Berkeley, CA
[email protected]

Phil Colella
Lawrence Berkeley National Laboratory
One Cyclotron Road, MS-50A-1148
Berkeley, CA
[email protected]

Larry Curtiss
Argonne National Laboratory
9700 S. Cass Avenue
Argonne, IL
[email protected]

James Davenport
Brookhaven National Laboratory
P.O. Box 5000, Building 463B
Upton, NY
[email protected]

Michel Dupuis
Battelle, Pacific Northwest National Laboratory
P.O. Box 999/K8-91
Richland, WA
[email protected]

Herbert Edelsbrunner
Duke University
Computer Sciences Department
Box 90129
Durham, NC
[email protected]


Anter El-azab
Battelle, Pacific Northwest National Laboratory
P.O. Box 999/K8-91
Richland, WA
[email protected]

Stephen Foiles
Sandia National Laboratories
P.O. Box 5800, Mail Stop 1411
Albuquerque, NM
[email protected]

Giulia Galli
Lawrence Livermore National Laboratory
7000 East Ave.
P.O. Box 808, MS-L415
Livermore, CA
[email protected]

Jim Glimm
Dept. of Applied Mathematics and Statistics
P-138A Math Tower
State University of New York at Stony Brook
Stony Brook, NY
[email protected]

Sharon Glotzer
Departments of Chemical Engineering, Materials Science and Engineering, and Macromolecular Science and Engineering
University of Michigan
Room 3014, H. H. Dow Building
2300 Hayward Street
Ann Arbor, MI
[email protected]

Francois Gygi
Lawrence Livermore National Laboratory
7000 East Avenue
P.O. Box 808
Livermore, CA
[email protected]

Bernd Hamann
Department of Computer Science
3035 Engineering II Building
University of California, Davis
1 Shields Avenue
Davis, CA
[email protected]

Steve Hammond
National Renewable Energy Laboratory
1617 Cole Blvd., SERF/C5300
Golden, CO
[email protected]

Bruce Harmon
Ames Laboratory, USDOE
Iowa State University
311 TASF
Ames, Iowa
[email protected]

Grant Heffelfinger
Sandia National Laboratories
1515 Eubank SE
Albuquerque, NM
[email protected]

John Hules
Lawrence Berkeley National Laboratory
One Cyclotron Road, MS-50C
Berkeley, CA
[email protected]

Malvin Kalos
Lawrence Livermore National Laboratory
7000 East Ave.
P.O. Box 808
Livermore, CA
[email protected]


John Kieffer
Materials Science and Engineering
University of Michigan
Room 2122, H. H. Dow Building
2300 Hayward Street
Ann Arbor, MI
[email protected]

Dale Koelling
U.S. Department of Energy
Materials Science and Engineering
19901 Germantown Road, SC-13
Germantown, MD
[email protected]

Joel Kress
Los Alamos National Laboratory
P.O. Box 1663, MS B268
Los Alamos, New Mexico
[email protected]

Uzi Landman
Physics 0430
Georgia Institute of Technology
Atlanta, GA
[email protected]

William A. Lester, Jr.
Department of Chemistry
University of California, Berkeley
Berkeley, CA
[email protected]

Steven Louie
University of California, Berkeley
Physics, 545 Birge
Berkeley, CA
[email protected]

Paul Messina
Argonne National Laboratory
9700 S. Cass Avenue
Argonne, IL
[email protected]

Juan Meza
Lawrence Berkeley National Laboratory
One Cyclotron Road, MS 50B2239
Berkeley, CA
[email protected]

Raul Miranda
U.S. Department of Energy
Basic Energy Sciences
19901 Germantown Road, SC-142
Germantown, MD
[email protected]

Fred O’Hara
P.O. Box 4273
Oak Ridge, TN
[email protected]

George Papanicolaou
Mathematics Department
Building 380, 383V
Stanford University
Stanford, CA
[email protected]

Walt Polansky
U.S. Department of Energy
Mathematical, Information, and Computational Sciences
19901 Germantown Road, SC-32
Germantown, MD
[email protected]

Art Ramirez
Los Alamos National Laboratory
P.O. Box 1663, MS K764
Los Alamos, New Mexico
[email protected]


Chuck Romine
U.S. Department of Energy
Mathematical, Information, and Computational Sciences
19901 Germantown Road, SC-31
Germantown, MD
[email protected]

Amos Ron
Department of Computer Sciences
University of Wisconsin-Madison
1210 West Dayton Street
Madison, Wisconsin
[email protected]

Yousef Saad
University of Minnesota
Dept. of Computer Science and Engineering
4-192 EE/CSci Building
200 Union Street S.E.
Minneapolis, MN
[email protected]

Tamar Schlick
Courant Institute
New York University
509 Warren Weaver Hall
New York, NY
[email protected]

Thomas Schulthess
Oak Ridge National Laboratory
P.O. Box 2008, MS 6114
Oak Ridge, TN
[email protected]

Walt Stevens
U.S. Department of Energy
Basic Energy Sciences
19901 Germantown Road, SC-14
Germantown, MD
[email protected]

Malcolm Stocks
Oak Ridge National Laboratory
P.O. Box 2008, MS 6114
Oak Ridge, TN
[email protected]

Anna Tsao, President
AlgoTek, Inc.
3601 Wilson Boulevard, Suite 500
Arlington, VA
[email protected]

Dion Vlachos
Department of Chemical Engineering and
Center for Catalytic Science and Technology
University of Delaware
Newark, DE
[email protected]

Lin-Wang Wang
Lawrence Berkeley National Laboratory
One Cyclotron Road, MS-50F
Berkeley, CA
[email protected]

Stanley Williams
HP Fellow and Director, Quantum Science Research
Hewlett-Packard Laboratories
1501 Page Mill Road, MS 1123
Palo Alto, CA
[email protected]

Andrew Williamson
Lawrence Livermore National Laboratory
7000 East Avenue
P.O. Box 808
Livermore, CA
[email protected]


Hua-Gen Yu
Brookhaven National Laboratory
P.O. Box 5000, Building 463B
Upton, N.Y.
[email protected]

Alex Zunger
National Renewable Energy Laboratory
1617 Cole Blvd., SERF/C109
Golden, CO
[email protected]

Organizers

Peter Cummings
Department of Chemical Engineering
University of Tennessee
419 Dougherty Hall
Knoxville, TN
[email protected]

Bruce Hendrickson
Parallel Computing Sciences Department
Sandia National Laboratories
Albuquerque, NM
[email protected]

David Keyes
Department of Mathematics and Statistics
Old Dominion University
500 Batten Arts & Letters
Norfolk, VA
[email protected]

C. William McCurdy
Lawrence Berkeley National Laboratory
One Cyclotron Road, MS-50B-4230
Berkeley, CA
[email protected]

Ellen Stechel
Ford Motor Co.
Powertrain Operations and Engine Engineering
21500 Oakwood Boulevard, MD#3
Dearborn, MI
[email protected]


DISCLAIMER

This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or The Regents of the University of California.