TACC 2011 Summer Features


DESCRIPTION

A collection of feature stories produced by the Texas Advanced Computing Center during the summer of 2011.

TRANSCRIPT


Quantum effects in “III-V” materials allow for faster, lower-energy nanotransistors

This race, inside the R&D centers of multinational corporations like Intel, IBM, GlobalFoundries, Advanced Micro Devices, Samsung, and others, and also in academia, has led to several promising ideas. Nanotransistors made of graphene and quantum computers [featured in Parts 1 and 2 of this series] are leading contenders for future devices, but both involve unproven materials and processes.

One promising design being explored at the Midwest Institute for Nanoelectronics Discovery (MIND) is the “tunneling” transistor, which uses “III-V” materials composed of elements from the third and fifth columns of the periodic table. These materials consume less energy and can be made smaller than silicon without degrading.

“III-V materials have been studied extensively,” said Gerhard Klimeck, director of the Network for Computational Nanotechnology (which hosts nanohub.org) and a professor of electrical and computer engineering at Purdue University. “But they have not reached Intel or IBM because industry has been able to build transistors with silicon and it’s expensive to completely retool.”

The III-V materials have made inroads in certain niche applications like optical and high-speed communications. However, they have not cracked the CPU market, where estimates for building fabrication plants based on new materials or technologies run to several billion dollars. Because of the size of the investment, a great deal of preliminary research needs to be done before any manufacturer will make the leap.

What’s wrong with silicon, you ask? First, silicon chips use unsustainable amounts of power; second, with so many transistors packed onto a chip, they can reach temperatures high enough to melt metal; and third, at small length scales, an odd quantum characteristic called tunneling allows electrons to burrow through a barrier and leak charge.

Tunneling is considered a major problem in CMOS semiconductor design. "It's a leakage path that we don't want," Klimeck said. "But maybe tunneling can turn from an obstacle into a virtue in these devices."

A transistor's actions are two-fold. Not only does the device have to switch on and off, it must also be able to distinguish between the two states. Since the off state is always a little leaky, the goal is to increase the ratio of "on" current to "off" current to at least 10,000.

Tunneling Transistors


Imagine if the rapid technological progress we’ve become accustomed to suddenly leveled off. Many experts believe this could occur if silicon transistors — the basis for nearly all electronics — reach their miniaturization limit, which is believed to be less than a decade away.

This scenario may come as a relief to some — no need to buy the latest gadget. But economically it would be a disaster for the United States. Not only has the semiconductor industry been the U.S.’s biggest export over the last five years, it is widely recognized as a key driver for economic growth globally.

According to the Semiconductor Industry Association, in 2004, from a worldwide base of $213 billion, semiconductors enabled the generation of some $12 trillion in electronic systems business and $5 trillion in services, representing close to 10% of world gross domestic product.

Economic progress like this cannot be slowed without a fight. Consequently, a massive scientific effort is underway to find new materials, new methods, or even new paradigms that can replace silicon transistors in a fast, cost-effective way.

Carbon nanotubes have novel properties that make them potentially useful in many nanotechnology applications, including electronics, optics and other fields of materials science. Simulations of new nanoscale materials help advance research and assist industry in the transition from silicon to alternative transistor materials.

Animations and visualizations are generated with various nanoHUB.org tools to enable insight into nanotechnology and nanoscience. Above, a simulation shows a graphene nanoribbon that can be either a zig-zag (left image) or arm-chair (right image) type. Both zig-zag and arm-chair GNRs are shown with varying widths. Additional animations are available at http://nanohub.org/resources/8882



“We try to understand on the simulation side what can be done and provide the experimentalists with ideas,” Klimeck said.

“The tunnel FETs look fairly similar to the CMOS transistor that we see today, though they use very different materials and actually turn off and on by a quantum effect called tunneling,” said Jeff Welser, director of the Nanoelectronics Research Initiative, which funds the studies at the MIND center.

“It turns out that by using tunneling, you can get transistors to turn on much more quickly.”

Though esoteric, the search for new nanotransistors is incredibly important for national competitiveness and economic security. Semiconductors are not only the U.S.’s largest export, they are the foundation for the last four decades of incredible growth in wealth, health and scientific advancement.

“Making sure that the nation continues to be on the leading edge of this export is of utmost importance, and it’s timely to do that because we know that the industry does not have a solution at the 8 nanometer level,” Klimeck said. “If we do not find a solution to continue to improve computers, the technical advancement that we’ve seen in the last 40 years will stop.”

June 29, 2011

With the scaling down of metal oxide semiconductor field-effect transistors (better known as MOSFETs), researchers are looking at new transistor designs. Among them is the gate-all-around nanowire MOSFET. Due to quantum mechanical confinement in both transverse directions, an inversion channel is formed at the center of the device. This phenomenon is called volume inversion. The threshold voltage for the simulated nanowire device in the accompanying image is ~0.45 V.

There are fundamental limits in this regard for today’s CMOS technology, but III-V materials, and specifically the tunnel FET (TFET) transistors that Klimeck is exploring, can perform better. They are often called “steep sub-threshold swing devices,” because they swing from almost no current to full current with a very steep slope. As a consequence, they would require less power while still performing the same number of operations.
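To put that in numbers: a conventional MOSFET's subthreshold current changes by at most one decade per roughly 60 millivolts of gate voltage at room temperature, so the 10,000:1 on/off ratio mentioned earlier costs at least about 240 millivolts of swing before any margin is added. The sketch below works through that arithmetic; the 30 mV/decade figure used for a steep-slope tunnel FET is purely illustrative, not a value from the article.

```python
# Back-of-the-envelope arithmetic, not from the article: how much gate-voltage
# swing a device needs to achieve a given on/off current ratio, given its
# subthreshold swing in mV per decade of current.
import math

K_B_OVER_Q = 8.617e-5  # Boltzmann constant divided by electron charge, in V/K

def thermionic_limit_mv_per_decade(temperature_k=300.0):
    """Minimum subthreshold swing of a conventional MOSFET at temperature T."""
    return 1000.0 * K_B_OVER_Q * temperature_k * math.log(10.0)

def required_gate_swing_mv(on_off_ratio, swing_mv_per_decade):
    """Gate-voltage swing needed to change the current by the given ratio."""
    return swing_mv_per_decade * math.log10(on_off_ratio)

limit = thermionic_limit_mv_per_decade()  # about 59.6 mV/decade at 300 K
print(f"Room-temperature MOSFET limit: {limit:.1f} mV/decade")
for label, swing in [("conventional MOSFET", limit),
                     ("illustrative steep-slope TFET", 30.0)]:
    needed = required_gate_swing_mv(1e4, swing)  # 10,000:1 on/off ratio
    print(f"{label}: ~{needed:.0f} mV of swing for a 10,000:1 on/off ratio")
```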

Recent simulations on the Ranger supercomputer at the Texas Advanced Computing Center (TACC) and the Jaguar supercomputer at the National Center for Computational Sciences led to a greater understanding of the quantum and atomic-level dynamics at play in the nanoscale device. Determining the energetics and electron pathways of these new nanoscale forms of III-V materials required more than 15 million processor hours on Ranger and 23 million hours on Jaguar between 2008 and 2011.

The research group, led by Alan Seabaugh of the University of Notre Dame, found that the sub-threshold conduction problem is related to the way electrons gather in the device. The group started out with a design developed at the University of California, Berkeley that was released to much excitement in Nature magazine in 2010. Using the computational tools they developed, the researchers found that the off current for the transistor was extremely high — a big problem for the device design.

To explain the physics of the problem, Klimeck likened the electrons involved in computing to water molecules in a bucket. The bucket has a hard bottom, but it has a fuzzy upper layer where electrons act like water vapor. The vapor cannot be controlled or "gated," resulting in a large voltage range to turn the switch on and off.

Band-to-band tunneling transistors have (figuratively speaking) a top on the bucket. Therefore the flow of the electrons can be tightly controlled without any temperature dependent "vapor" and the devices can turn on and off with a smaller voltage swing.

Klimeck and his colleagues filed a patent, sponsored by the Nanoelectronics Research Initiative (NRI), for their improved tunneling design, and published several papers on the subject in the Journal of Applied Physics and IEEE Electron Device Letters in 2010 and 2011.

"If you can switch from on to off in a smaller swing, you can reduce the whole swing from .9 volts, which we have today, to .5 or .4, volts, which is what we're aiming for," Klimeck said. That factor of two reduction in voltage results in four times less power required. "That's a huge improvement if you can maintain the same current flowing through your valve."

Computer modeling and simulation help the researchers explore the design space and physical properties of the materials, showing how one constructs a real device, atom by atom, in terms of geometries and growth.


Gerhard Klimeck, director of the Network for Computational Nanotechnology and a professor of electrical and computer engineering at Purdue University.


TACC’s Ranger supercomputer helps the Southern California Earthquake Center simulate most realistic earthquake to date

At 10 a.m. on Nov. 13, 5.3 million people in Southern California participated in the largest earthquake preparedness activity in U.S. history.

The Great Southern California ShakeOut — a collaboration between the United States Geological Survey, the National Science Foundation, and the Southern California Earthquake Center (SCEC), among others — provided detailed information to the public about what would happen in the event of a magnitude 7.8 earthquake (approximately 5,000 times larger than the magnitude 5.4 earthquake that shook southern California on July 29), and asked Californians to “drop, cover, and hold on,” practicing the behavior that could save their lives during an actual event.
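The size gap between those two magnitudes can be checked with the standard scaling of radiated seismic energy, which grows by a factor of about 32 per unit of magnitude; the 2.4-magnitude difference works out to several thousand times more energy, the same order as the figure quoted above. A quick sketch of that calculation (a generic textbook relation, not part of the ShakeOut modeling):

```python
# Rough check of the magnitude comparison above, using the standard relation
# that radiated seismic energy scales as 10^(1.5 * magnitude).
def energy_ratio(m_large, m_small):
    """Approximate ratio of radiated energy between two earthquake magnitudes."""
    return 10 ** (1.5 * (m_large - m_small))

print(f"{energy_ratio(7.8, 5.4):,.0f}x")  # ~4,000x, i.e. several thousand times larger
```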

Because, according to Thomas Jordan, director of SCEC, “the big one” is coming.

“We know that strain is accumulating on the San Andreas Fault, the stresses are increasing, and we expect that sucker to go at any time,” Jordan said.

If history is any indication, in the next thirty years there will be an earthquake in the region on a scale that hasn’t been experienced in 150 years — one that is expected to cause as many as 2,000 deaths, 50,000 injuries, $200 billion in damage, and long-lasting disruption. The ShakeOut, both the precise predictions and the readiness exercise, was designed to prepare Southern California for this eventuality.

From an emergency preparation perspective, the activity was an extraordinary success. But perhaps equally impressive was the work that went into creating the most detailed simulation of an earthquake ever accomplished, in order to predict how a massive earthquake would shake the 500-square-mile Los Angeles basin.

The effort drew on the expertise of seismologists, thousands of years of collective research, and the combined computational capability of some of the world’s most powerful supercomputers, including Ranger at the Texas Advanced Computing Center (TACC), Kraken at the National Institute for Computational Sciences (NICS) at the University of Tennessee, and Datastar and Intimidata at the San Diego Supercomputer Center at the University of California, San Diego. All three supercomputer sites are part of TeraGrid, the world’s largest distributed infrastructure for open scientific research.

In computing terms, the simulation was unprecedented. But according to Jordan, it was not nearly comprehensive enough, making the exercise a dry run for even larger predictions in the future, with the ability to determine exactly how each meter of earth will shake when the San Andreas Fault gives.

When the Earth Moves

Traditionally, earthquake predictions are based on empirical data from past tremors, which scientists analyze.


Anticipating “The Big One”

This movie shows a view of southern California with the seismic waves radiating outward from the fault as the rupture propagates towards the northwest along the San Andreas Fault. Simulations developed by the Southern California Earthquake Center ShakeOut Simulation workgroup. Animations courtesy of the U.S. Geological Survey and the Southern California Earthquake Center.

TACC partners with local smart grid demonstration project to pave the way to a more sustainable future

If you had the power to improve your life, your community, and to make a significant contribution to future generations, would you? One hundred Austin residents in the Mueller development declared a resounding “yes” and have joined forces with Pecan Street Project to learn about smart grid technology and how to use energy more effectively in their homes.

As global energy prices continue to soar and with power generation accounting for 40 percent of the U.S. carbon footprint, energy efficiency is an increasingly important consideration. Now, more than ever, there is significant momentum from both the general public and government to make “smart grids” a high priority.

According to Michael Webber, associate director of the Center for International Energy and Environmental Policy at The University of Texas at Austin, utilities and energy companies are expected to spend $1-2 trillion over the next few decades on building, updating, and upgrading their grids nationwide. At the same time, energy consumers are expected to spend tens of billions of dollars on energy-related appliances in the home.

Being "Smart" at Home

“Austin is a great test bed because we have energy-conscious, savvy residents who are willing to be partners in the process,” Webber said. “In addition, we have an energy mix with similar diversity to the nation as a whole (nuclear, coal, natural gas, wind, etc.). And, we have very high peak loads in the summer because of the need for air conditioning. These peak loads create problems for the grid; therefore, we have more to gain by finding innovative ways to manage energy consumption.”

A smart grid is a system that delivers electric power to consumers in a more intelligent manner than is now possible, and has enhanced controls that protect equipment and foster the safe integration of distributed energy sources throughout a neighborhood, a city, a region, and even a continent. By adding monitoring, analysis, control, and communication capabilities to the electrical delivery system, smart grids hold the potential to maximize throughput while reducing energy consumption.

“Before smart grid advocates and companies ask customers to invest in new products and services, we all need a better understanding of what they want, what they’ll use and what they’ll get excited about,” said Pecan Street Project Executive Director Brewster McCracken.

“Our work at Mueller is the most comprehensive energy consumer research being conducted anywhere in the country. It’s the perfect place for real-world energy research, and we’re thrilled that the Mueller residents have invited us into their community.”

Bert Haskell, technology director for Pecan Street Project, is responsible for reviewing different technologies involved in smart grid research, selecting the best architecture, and developing the optimal solution for consumer smart grid usage.

“Our objective for the smart grid demonstration project at Mueller is to understand how the grid is going to benefit the consumer, and that makes us very unique compared to other smart grid projects,” Haskell said. “Most of them are planned and run by a utility, and the utility is trying to benefit itself. We have the full cooperation and support of Austin Energy, which is very interested in discovering how it can best serve its customers.”

Mueller residents Garreth Wilcock and Kathy Sokolic serve on the Pecan Street Project’s executive committee and have an Incenergy monitoring system installed in their homes.

In a smart grid world, the consumer is given real-time and accurate information about their energy use, and can make decisions on how much to use, what time of day to use it, and how much to pay for the energy. For example, you may want to keep your house set at 75 degrees Fahrenheit when prices are low, but you may decide to raise your thermostat to 78 degrees when prices are high. Similarly, you may want to dry your clothes for $0.05 per kilowatt-hour at 9:00 p.m. instead of $0.15 per kilowatt-hour at 2:00 p.m.
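The laundry example makes the arithmetic easy to check. Assuming a dryer cycle of about 3 kilowatt-hours (an illustrative figure, not one from the article), shifting the load from the 2:00 p.m. price to the 9:00 p.m. price cuts its cost by roughly two-thirds:

```python
# Illustrative time-of-use cost comparison for the laundry example above.
# The 3 kWh dryer cycle is an assumed figure for the sake of arithmetic.
def load_cost(energy_kwh, price_per_kwh):
    """Cost of running one appliance load at a given electricity price."""
    return energy_kwh * price_per_kwh

DRYER_CYCLE_KWH = 3.0                            # assumed typical dryer load
peak = load_cost(DRYER_CYCLE_KWH, 0.15)          # 2:00 p.m. price from the article
off_peak = load_cost(DRYER_CYCLE_KWH, 0.05)      # 9:00 p.m. price from the article
print(f"Peak: ${peak:.2f}, off-peak: ${off_peak:.2f}, savings: ${peak - off_peak:.2f}")
```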

Real estate agent Garreth Wilcock moved to the Mueller development from a sprawling 1960s ranch house and quickly realized the benefits of living in a planned community that promotes energy efficiency. “If we can use what we’re learning here to impact the way homes are built and the way people can take advantage of changes in energy rates…that’s exciting.”

Communities like Mueller are also positioned to take advantage of new technologies, including plug-in hybrid electric vehicles, various forms of distributed generation, solar energy, smart metering, lighting management systems, and distribution automation.

As early adopters, Pecan Street Project participants Wilcock and Kathy Sokolic serve on an advisory council to review new ideas and products and help decide what goes into the houses. Sokolic recently had her home evaluated for an electric car charger, which could have implications for the rest of the houses in the Mueller development.

“It’s really important for me to practice what I preach,” Sokolic said. “I drive my car a lot so I don’t feel I live the lifestyle that I should. But, being able to move here, I was able to get solar panels, and I can move forward with all kinds of green initiatives. Living in an energy-efficient house and having the ability to participate in this program is fantastic.”

How Supercomputing Plays a Role

As you can imagine, the smart grid demonstration project at Mueller is generating complex and large datasets that require powerful supercomputers to capture, integrate, and verify the information, and to make sure that it is properly synchronized and analyzed.

Enter the Texas Advanced Computing Center (TACC), one of the leading supercomputer centers in the nation, located in Austin at The University of Texas.

“TACC has some of the world’s fastest computers, so we’re confident they can do any kind of crunching, rendering, or data manipulation,” Haskell said. “They have the technical expertise to look at different database structures and know how to organize the data so it’s more efficiently managed. We’re very excited to work with TACC to come up with new paradigms on how to intuitively portray what’s going on with the grid and energy systems.”

With the sensor installations in place at each of the 100 homes, new data is generated every 15 seconds showing precisely how much energy is being used on an individual circuit. Initially, TACC developed a special data transfer format to pull all of the data into a database on TACC’s “Corral” storage system. To date, the database contains approximately 400 million individual power readings and continues to grow.
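Those figures imply a brisk ingest rate. A back-of-the-envelope estimate, assuming a handful of monitored circuits per home (the article does not say how many), puts the stream at millions of readings per day, which is consistent with a database of roughly 400 million readings after a few months of collection:

```python
# Rough estimate of the Mueller data volume. The circuit count per home is an
# assumption; the article only states that readings arrive every 15 seconds.
HOMES = 100
CIRCUITS_PER_HOME = 6            # assumed, not specified in the article
SECONDS_PER_READING = 15

readings_per_day = HOMES * CIRCUITS_PER_HOME * (24 * 3600 // SECONDS_PER_READING)
days_to_400_million = 400_000_000 / readings_per_day
print(f"~{readings_per_day:,} readings per day")           # ~3.5 million/day
print(f"~{days_to_400_million:.0f} days to reach 400 million readings")
```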

Currently, TACC is collaborating with Austin Energy to compare their readings with the instantaneous usage readings from the participating homes at the Mueller development. Together, TACC and Austin Energy are calibrating the data to develop an accurate baseline about energy usage in the entire city of Austin.

“We’re trying to create very rich resources for people to use in analyzing patterns of energy usage,” said Chris Jordan, a member of TACC’s Advanced Computing Systems group. “We’re helping to enable forms of research that we can’t even foresee right now, and over time as the resources grow and become more varied, we expect whole new forms of research to be conducted. We’re really interested to see what people can do with it, such as how the data stream can transfer itself into a decision-making device for city planners and individual consumers.”

One of the weaknesses in smart grid systems is the way they visualize data, which is often not intuitive. Since TACC is a leader in providing visualization resources and services to the national science community, and even conducts its own research into visualization software and algorithms, they were a perfect partner for the Pecan Street Project’s research at Mueller.

“Everything can provide information, but to give the information context, it needs to be meaningful,” said Paul Navratil, research associate and manager of TACC’s Visualization Software group. “In most scientific contexts, you’re simulating a real phenomenon. With this project, we have abstract information, so we have to work to make the information meaningful to the researchers, consumers, and industry partners who want to demonstrate the value of this project.”

Installation of the monitoring system is simple and quick. A licensed electrician needs access to the resident's circuit panels (breaker boxes) inside and outside the home.

Navratil says it is a massive data mining problem, but this is something he and the other experts at TACC work with on a daily basis.

“For us to maintain the data at this rate and give a good response time for someone who wants to query the site is certainly a challenge,” Navratil said. “We not only have to show the information in an understandable format, we have to show it quickly, and that’s the soup-to-nuts challenge of having an interactive data and visualization site.”

To date, TACC staff members have developed an interface that graphs energy usage characteristics of the homes spread out across the city.

Phase Two of Pecan Street Project’s Demonstration Project at Mueller

While Phase One of the project is a blind study focusing on data collection alone, in 2012, Pecan Street Project will focus on behavior change and integrate more customer-centric technologies.

“Customer-centric to me is really customer value – what do they want and need?” Haskell said. “I guess there’s a certain ‘wow’ factor around the idea that you can control a power system in your house from your iPhone, but to me that’s not customer-centric. To me, customer-centric is that it all takes care of itself, and the customer doesn’t have to think about it, but has the lowest possible power bill each month.” And, even new services they can elect to pay for.

Overall, Pecan Street Project is trying to understand how energy management systems can be integrated into our lifestyle. Haskell continued: “That’s what we want to figure out―how that future automated home environment will interface to the smart grid to provide the peak energy demand characteristics that the utility needs to run their network without creating a burden on the consumer.”

A deep curiosity has been awakened within Wilcock, Sokolic, and the other project participants; they are a group of people who are on a learning journey. They share articles with one another, meet regularly to discuss the smart grid effort, and often help each other with projects at each other’s homes.

“It’s about finding other ways of looking at things,” said Sokolic, “and we definitely have to work together.”

Learn More

Pecan Street Project is interested in getting more people to make use of this data. The availability of clean, affordable, reliable energy is central to our economic and societal objectives. For more information on how to get involved, please visit: http://www.pecanstreetproject.org/

June 15, 2011, by Faith Singer-Villalobos



If your house ever caught fire, it’s a safe bet you would want the firefighters who show up to have met Craig Weinschenk. A master’s-turned-doctoral student in mechanical engineering, he has been studying fire science since he arrived at the University in 2006.

After applying to graduate programs across the country, the New Jersey native began making the rounds to visit the universities. Almost immediately after he set foot on UT’s campus, Weinschenk met mechanical engineering professor Ofodike Ezekoye, an expert in combustion and heat transfer, and realized what he was meant to do — even if it came as a surprise to him at the time.

“We want to improve science and technology, but we’re trying to adapt everything we do to assist firefighters,” says Craig Weinschenk.

Student Applies Engineering Science to Help Firefighters


“I met Dr. Ezekoye and we started talking about some research he was doing,” says Weinschenk. “I saw a melted fire helmet and asked him about it. We talked about my friends, many of whom are firefighters. Dr. Ezekoye said, ‘We have a grant coming through. Do you want to work on it?’ And now I’m in Texas.”

Weinschenk, a recipient of the Meason/Klaerner Endowed Graduate Fellowship in Engineering and other endowed funds, is using engineering science to better understand firefighting tactics and to bridge the gap between science and real-life firefighting situations.

Though there is a dedicated “burn building” on UT’s Pickle Research Campus, he uses supercomputers in the Texas Advanced Computing Center (TACC) to simulate fire situations in order to study how flames grow and how air impacts fires. He also gets hands-on training from the Austin Fire Department to better understand how fires work and how best to handle them.

House fire using gasoline as accelerant.

“I attack the fire problems from both the experimental and computational side,” he says. “We want to improve science and technology, but we’re trying to adapt everything we do to assist firefighters.”

Early on in his time at UT, Weinschenk had a profound moment while attending a firefighting cadet class. “They give fire science lectures, and instructors were quizzing students and asking questions,” says Weinschenk.

“The fact that they asked for my input made me think, ‘Wow, there’s really a chance to do something good here.’”

And do something he has. Weinschenk, whom Ezekoye calls “a rising star” in fire research, has co-authored a paper published in Fire Technology and served as president of the UT chapter of the Society of Fire Protection Engineers. He says it has been “amazing” to be at UT for the past four-plus years. He has greatly enjoyed working with Ezekoye, whom he calls “one of the smartest people I have ever met.”

“There are so many opportunities” for graduate students to grow at the University, says Weinschenk, who earned a master’s degree in 2007 and is on track to receive his PhD in May 2011. “In addition to conferences, there are discussions with other academics and professors. There is so much work going on. If you talk to anyone around the country and say that you do research at UT, it has a credibility that is rare.”

June 8, 2011, by Lauren Edwards for the UT Giving site.

Weinschenk, a doctoral student in mechanical engineering, has been studying fire science since he arrived at the University of Texas in 2006. [Photo courtesy of Marsha Miller.]



The Texas Advanced Computing Center celebrates 10 years of enabling discoveries through the use of advanced computing technologies

On June 1, 2001, the newly reorganized Texas Advanced Computing Center (TACC) officially began supporting computational researchers at The University of Texas at Austin (UT Austin) and throughout the national academic community.

Now home to some of the most powerful and recognized supercomputers in the open science community, TACC began 10 years ago by building from a predecessor organization, and by inheriting a dozen employees, a space on the J.J. Pickle Research Campus, and a small 88-processor, liquid-cooled Cray T3E, the original Lonestar system.

From these humble beginnings, TACC began a rapid ascent to become one of the leading supercomputing centers in the world. Born from the shared vision of leadership at UT Austin and TACC’s director, Jay Boisseau, TACC has become an epicenter for research that advances science and society through the application of advanced computing technologies.

"The University of Texas, situated in Austin, presented a tremendous opportunity to build a world-class advanced computing center that supported outstanding science not just at UT, but across the nation," Boisseau said. "The quality of the university, the depth of the talent pool, the high profile of the university and the city, and the small, but dedicated staff that were already on hand, presented the elements for a new plan, a new center, and laid the foundation for what we've accomplished thus far."

A Decade of Discovery

Over the past decade, TACC's expert staff and systems have supported important scientific work, from emergency simulations of the Gulf oil spill, which helped the Coast Guard protect property and wildlife, to the first models of the H1N1 virus, which enabled scientists to understand the virus’s potential resistance to antiviral medication, to the clearest picture yet of how the early universe formed. In addition, TACC helped predict the storm surge from Hurricane Ike, delivered geospatial support during the Haiti disaster, and is currently providing emergency computing resources to Japanese researchers who are unable to access their own systems in the wake of the earthquake and tsunami.

The center has deployed increasingly powerful computing systems, which have enabled important scientific accomplishments. These include three systems that debuted in the top 30 “most powerful in the world” on the Top 500 list for open science: Lonestar 2 (#26 in 2003); Lonestar 3 (#12 in 2006); and Ranger (#4 in 2008). At $59 million, the Ranger award also represented the largest single grant to The University of Texas from the National Science Foundation (NSF).

Additionally, TACC currently operates the world’s highest-resolution tiled display (2008: Stallion), and the largest remote and collaborative interactive visualization cluster (2010: Longhorn).

TACC did not emerge in a vacuum. UT Austin had operated supercomputers through a variety of institutes and centers since 1986, including the UT System Center for High Performance Computing, the UT Austin High Performance Computing Facility, and the UT Austin Advanced Computing Center for Engineering and Science (ACCES).

Prior to the formation of TACC, the staffing and systems for advanced computing were at an all-time low on campus. An external review board had reported to UT Austin leadership in 1999 that if it wanted to sustain and extend leadership in research in the 21st century, the University needed to develop its computational capacity. As a first step, the Vice President for Research, Juan Sanchez, hired Jay Boisseau, who got his PhD from UT Austin and who had previously worked at the San Diego Supercomputer Center and the Arctic Region Supercomputing Center, to lead the effort. Boisseau rapidly set about expanding the core team inherited from ACCES, and recruiting additional talented staff to broaden TACC’s technology scope and to help realize his vision.

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is one of the leading centers of computational excellence in the United States. Located on the J.J. Pickle Research Campus, the center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

Said Sanchez: “TACC grew from a vision to the reality it is today thanks to the strong commitment of The University of Texas at Austin to become a leading player in advanced computing, and the dedication, focus and expertise of its director, Dr. Boisseau, and his outstanding staff.”

Leveraging top-tier research faculty at the University, local technology partners like Dell Inc., and funding from the NSF, TACC developed rapidly from a small center to a leading provider of computational resources nationwide. TACC currently has nearly 100 employees and continues to expand.

As TACC resources grew in capability and the center hired additional staff, bringing great expertise, the center’s position in the high performance computing community grew as well. In 2002, the High Performance Computing Across Texas (HiPCAT) consortium was formally established by researchers at Rice University, Texas A&M, Texas Tech, University of Houston, and UT Austin, with Boisseau as the first director. In 2004, TACC was selected to join the NSF TeraGrid, the world’s largest distributed infrastructure for open scientific research.

In 2007, TACC began providing resources on Lonestar 3 to other UT System institutions, a role that has now grown in scale with Lonestar 4 and with the UT Research Cyberinfrastructure project. In 2009, the NSF awarded a $7 million grant to TACC to provide a new compute resource (Longhorn) and the largest, most comprehensive suite of visualization and data analysis services to the open science community. And in 2010, TACC was selected as one of four U.S. advanced computing centers to receive an $8.9 million eXtreme Digital (XD) Technology Insertion Service (TIS) award to evaluate and recommend new technologies as part of the NSF TeraGrid and its follow-on initiative.

Deployed in February 2011, Lonestar 4 is TACC’s newest supercomputer and the third largest system on the NSF TeraGrid. It ranks among the most powerful academic supercomputers in the world with 302 teraflops peak performance, 44.3 terabytes total memory, and 1.2 petabytes raw disk.

In February 2011, TACC deployed a powerful new supercomputer, Lonestar 4, for the national scientific community. The center also received word in May that the National Science Board had approved $121 million for the follow-on to the NSF TeraGrid, known as Extreme Science and Engineering Discovery Environment (XSEDE), in which TACC will play a leading role.

The emergence of TACC as a world-class supercomputing center has come in the context of computational science becoming the third method of investigation, which, in conjunction with theory and experimentation, is driving advances in all fields of research. The resources that TACC deploys enable scientists to explore phenomena too large (e.g., black holes), too small (quarks), too dangerous (explosions), or too expensive (drug discovery) to investigate in the laboratory.

High performance computing is also used to predict the outcome of complex natural phenomena. This is the case for Clint Dawson, one of the leaders in forecasting storm surges associated with tropical storms.

“Ranger” Principal Investigator (PI), Jay Boisseau, and Co-PIs, Karl Schulz, Tommy Minyard and Omar Ghattas (not pictured) brought the 579.4 teraflop supercomputer to The University of Texas at Austin where it helps the nation’s top scientists address some of the world’s most challenging problems.

“We rely on our partnership with TACC because, without them, we wouldn’t be able to do real-time forecasting of extreme weather events,” said Dawson, head of the Computational Hydraulics Group housed in the Institute for Computational Engineering and Sciences (ICES) at UT Austin, and a longtime user of the center’s systems.

This sentiment is shared by nearly all of the scientists and engineers who use TACC’s systems. The majority of computational cycles are allocated by the NSF to the most promising computational science research; some cycles are reserved for researchers at Texas institutions of higher learning, including community colleges and minority-serving institutions. As much as a new telescope or electron microscope drives discoveries in astronomy or biology, advanced computing systems allow for new kinds of investigations that push knowledge forward across all scientific disciplines.

The particles in the visualization represent portions of the oil spill, and their position is either hypothetical or reflects the observed position of the oil on the surface. The data is visualized using Longhorn and MINERVA, an open-source geospatial software package. [Credits: Univ. North Carolina at Chapel Hill, Institute of Marine Sciences; Univ. Notre Dame, Computational Hydraulics Laboratory; Univ. Texas, Computational Hydraulics Group, ICES; Univ. Texas, Center for Space Research; Univ. Texas, Texas Advanced Computing Center; Seahorse Coastal Consulting]

“We wouldn’t be able to do anything without TACC,” said Mikhail Matz, a professor of integrative biology at UT Austin who combines the power of supercomputers with next-generation gene sequencers.

“We can generate massive amounts of genetic sequences, but then what? The main challenge here is to figure out the most appropriate and effective way of dealing with this huge amount of data, and extracting the information you want. To do that, we need very powerful computers.”

But TACC is more than the host of powerful computing systems. It is also home to an inimitable group of technologists who are instrumental in accelerating science, often by working directly with researchers to make sure their codes run quickly and effectively.

“In order to do these large-scale science runs, it’s a big team effort,” said Philip Maechling, information architect for the Southern California Earthquake Center, who uses Ranger to simulate earthquakes and predict their impact on structures in the Los Angeles basin. “You need the help of a lot of people on our end, but also the help of the staff at TACC in order to get all the pieces to come together.”

Working with Maechling’s team, TACC has helped advance earthquake science and contributed to the development of updated seismic hazard estimates and improved building codes for California.

For users like Dawson, Matz, and Maechling, access to TACC’s Ranger supercomputer and other systems means faster time-to-solution, higher-resolution models, more accurate predictions, and the ability to do transformative science with the potential for social impact.

“We’ve made our systems reliable, high performance and scalable, and we’ve provided great user support,” said Boisseau. “Our systems are constantly in demand — often far in excess of what we can even provide — because we’ve established a reputation for making TACC a great environment for scientific research.”

TACC supports more than 1,000 projects and several thousand researchers each year on its diverse systems.

On Friday, June 24, TACC will commemorate its 10th Anniversary with a half-day celebration and colloquium event on the J.J. Pickle Research Campus. The event will bring together experts in the high performance computing community, top scientific researchers who use TACC’s resources, and leadership from the center to discuss the past, present and future of advanced computing, and the ways in which high performance computing is advancing science and society.

[A full description and calendar of events is available online at: http://www.tacc.utexas.edu/10-year-celebration/.]

June 1, 2011

Galaxy Formation in the Early Universe: This is a visualization of a galaxy formation dataset, about 5 million particles simulated for 631 timesteps on Ranger. This simulation and corresponding visualizations help answer questions about the formation of the early Universe, about 100 million years after the Big Bang. This research also helps guide the observations of the James Webb Space Telescope, the replacement for the Hubble Space Telescope, scheduled for launch in 2013. [Image credit: Christopher Burns, Thomas Greif, Volker Bromm, and Ralf Klessen]



International research team uses TACC supercomputers to push the frontiers of quantum computing simulations

Have you noticed that no one seems to talk about clock rate anymore? Instead of boosting speed at the microprocessor level, companies are packing more cores on a chip and more blades in a server, adding density to make up for the fact that, in the current paradigm, transistors cannot operate much faster.

This leveling of the speed curve has led many to pursue alternative models of computation, foremost among them quantum computing. Practically speaking, the field is in its infancy, with a wide range of possible implementations. Theoretically, however, the solutions share the same goal: using the quantum nature of matter — its ability to be in more than one state at once, called superposition — to speed up computations a million-fold or more.

Where typical computers represent information as 0s and 1s, quantum computers use 0, 1, and all possible superpositions to perform computations.

“Instead of basing your calculation on classical bits, now you can use all kinds of quantum mechanical phenomena, such as superposition and entanglement to perform computations,” said Helmut Katzgraber, professor of physics at Texas A&M University. “By exploiting these quantum effects, you can, in principle, massively outperform standard computers.”
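For readers who want to see what "all possible superpositions" means concretely, the state-vector picture can be sketched in a few lines: a qubit is a pair of complex amplitudes, an equal superposition weights 0 and 1 equally, and n qubits require 2^n amplitudes, which is one reason classical simulation of quantum systems is so expensive. This is a generic textbook illustration, not the research codes described in this story.

```python
# Minimal state-vector illustration of a qubit and superposition.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)   # the classical bit 0
ket1 = np.array([0.0, 1.0], dtype=complex)   # the classical bit 1
plus = (ket0 + ket1) / np.sqrt(2)            # equal superposition of 0 and 1

print(np.abs(plus) ** 2)                     # Born rule: measurement gives [0.5, 0.5]

# Classical simulation cost grows exponentially: n qubits need 2**n amplitudes.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2 ** n:,} complex amplitudes")
```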

Problems that are currently out of reach, such as simulating blood flow through the entire body; designing patient-specific drugs; decrypting data; or simulating complex materials and advanced quantum systems, would suddenly be graspable. This tantalizing possibility drives researchers to explore quantum computers.

Overcoming Quantum Error

The behavior of quantum particles is difficult to see or test in the traditional sense, so the primary way scientists probe quantum systems is through computer simulations.

Studying the theoretical stability of a quantum computer via computer simulations is exceptionally difficult. However, an international team of researchers coming from diverse fields of physics found common ground on a new way of determining the error tolerance of so-called topological quantum computers using high-performance computing.

The approach involves modeling the interactions of a class of exotic, but well-known materials known as spin glasses. These materials have been shown to correlate to the stability of a topologically-protected quantum computer. By simulating these spin-glass-like systems on the Ranger and Lonestar4 supercomputers at the Texas Advanced Computing Center (TACC), the team has been able to explore the error threshold for topological quantum computers – a practically important aspect of these systems.

Lattice for the topological color codes with 3-colored vertices. Physical qubits of the error-correcting code correspond to triangles (stars mark the ‘boundaries’ of the sets of triangles displayed). (a) Boundary of a vertex v. The stabilizer operators Xv , Zv have support on the corresponding qubits. (b) Error pattern in the form of a string net. The three vertices that form its boundary are all the information we have to correct the error. (c) Two error patterns with the same boundary. Because together they form the boundary of the three vertices marked with a circle, they are equivalent.

In topological quantum error correction information is stored in a nonlocal property of the system. This can, for example, be visualized as a string that lives on the surface of a torus: It either wraps around the torus (blue) or it can be contracted to a point (red), thus denoting two distinct topological sectors. The closeup shows that loops are formed by connecting different stabilizers Z with a closed line.

“It’s an inherent property of quantum mechanics that if you prepare a state, it is not going to stay put,” Katzgraber said.

“Building a device that is precise enough to do the computations with little error, and then stays put long enough to do enough computations before losing the information, is very difficult.”


Katzgraber's work focuses on understanding how many physical bits can ‘break’ before the system stops working. "We calculate the error threshold, an important figure of merit to tell you how good and how stable your quantum computer could be," he explained. Katzgraber and his colleagues have shown that the topological design can sustain a 10 percent error rate and even higher, a value considerably larger than for traditional implementations.

So what is a topological quantum computer? Katzgraber likens the topological design to "a donut with a string through it." The string can shift around, but it requires a good deal of energy to break through the donut or to cut the string. A 0 state can then be encoded into the system when the string passes through the donut, and a 1 when it is outside the donut. As long as neither the donut nor the string breaks, the encoded information is protected against any external influences, i.e., wiggling of the string.

In the topological model, one uses many physical bits to build a logical bit. In exchange, the logical bit is protected against outside influences like decoherence. In this way, the system acts much like a DVD reader, encoding information at multiple points within the topology to prevent errors.
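The idea of building one protected logical bit out of many unreliable physical bits can be illustrated, in classical miniature, with a repetition code and a majority vote. The sketch below is only an analogy: the topological codes studied by the group store information in global properties of the system rather than in literal copies, but the payoff is the same, a logical error rate far below the physical one.

```python
# Classical miniature of "many physical bits encode one logical bit":
# a 5-bit repetition code decoded by majority vote. An analogy only; the
# topological codes discussed here protect information nonlocally.
import random

def encode(bit, n=5):
    """Encode one logical bit as n identical physical bits."""
    return [bit] * n

def add_noise(bits, flip_probability):
    """Flip each physical bit independently with the given probability."""
    return [b ^ (random.random() < flip_probability) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if fewer than half the bits flipped."""
    return int(sum(bits) > len(bits) / 2)

random.seed(1)
trials = 10_000
failures = sum(decode(add_noise(encode(1), 0.10)) != 1 for _ in range(trials))
print(f"Logical error rate at a 10% physical error rate: {failures / trials:.4f}")  # ~0.009
```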

Helmut Katzgraber, professor of physics at Texas A&M University.

“It’s a quantum mechanical version of a 0 and 1 that is protected from external influences,” Katzgraber said.

The amount of error a topological quantum computing system can sustain corresponds to how many interactions in the underlying spin glass can be frustrated before the material stops being ferromagnetic. This insight informs the scientists about the stability potential of topological quantum computing systems.
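Schematically, the mapping treats the error rate of the code as the fraction of "flipped" (antiferromagnetic) bonds in a disordered Ising-like model, and the error threshold as the disorder strength at which ferromagnetic order disappears. The toy Metropolis sketch below shows that qualitative behavior on a small two-dimensional random-bond Ising lattice; the lattice size, temperature, and sweep counts are illustrative choices, and this is not the team's model or code.

```python
# Toy random-bond Ising model: each bond is antiferromagnetic with probability p.
# As p grows, ferromagnetic order (measured by |magnetization|) is destroyed,
# the qualitative analogue of crossing an error threshold in the mapping above.
import numpy as np

rng = np.random.default_rng(0)

def average_magnetization(p, size=16, beta=0.7, sweeps=400):
    """Metropolis sweeps on a size x size lattice; returns mean |m| per spin."""
    spins = np.ones((size, size), dtype=int)               # ordered start
    # Couplings to the right and downward neighbors: +1 ferro, -1 antiferro.
    j_right = np.where(rng.random((size, size)) < p, -1, 1)
    j_down = np.where(rng.random((size, size)) < p, -1, 1)
    samples = []
    for sweep in range(sweeps):
        for _ in range(size * size):
            i, j = rng.integers(size), rng.integers(size)
            # Local field from the four neighbors, with periodic boundaries.
            field = (j_right[i, j] * spins[i, (j + 1) % size]
                     + j_right[i, (j - 1) % size] * spins[i, (j - 1) % size]
                     + j_down[i, j] * spins[(i + 1) % size, j]
                     + j_down[(i - 1) % size, j] * spins[(i - 1) % size, j])
            delta_energy = 2 * spins[i, j] * field
            if delta_energy <= 0 or rng.random() < np.exp(-beta * delta_energy):
                spins[i, j] *= -1
        if sweep >= sweeps // 2:                            # discard burn-in
            samples.append(abs(spins.mean()))
    return float(np.mean(samples))

for p in (0.0, 0.05, 0.10, 0.15, 0.20):
    print(f"antiferromagnetic bond fraction {p:.2f}: <|m|> ~ {average_magnetization(p):.2f}")
```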

“Topological codes are of great interest because they only require local operations for error correction and still provide a threshold,” states Hector Bombin, a research collaborator at the Perimeter Institute in Canada. “In addition, they have demonstrated an extraordinary flexibility. In particular, their topological content allows one to introduce naturally several ways of computing and correcting errors that are specific to this class of codes.”

According to Katzgraber, the non-intuitive approach to quantum simulations was driven by the unique nature of the collaboration between his group at Texas A&M University and ETH Zurich (Switzerland), as well as collaborators at the Perimeter Institute and the Universidad Complutense de Madrid in Spain.

“You have so many disparate fields of physics coming together on this one problem,” Katzgraber said. “You have the statistical mechanics of disordered systems; quantum computation; and lattice gauge theories from high-energy physics. All three are completely perpendicular to each other, but the glue that brings them together is high performance computing.”

On TACC's high-performance computing systems, Katzgraber simulated systems of more than 1,000 particles, and then repeated the calculations tens of thousands of times to statistically validate the virtual experiment. Each set of simulations required hundreds of thousands of processor hours, and all together, Katzgraber used more than two million processor hours for his intensive calculations. Results of the simulations were published in Physical Review Letters, Physical Review A and Physical Review E, and are inspiring further study.

Even as trailblazing companies fabricate the first quantum computing systems, many basic questions remain—most notably how to read in and read out information without error. The majority of people in the field believe a prototype that significantly outperforms silicon at room temperature is decades away. Nonetheless, the insights drawn from computational simulations like Katzgraber's are playing a key role in the conceptual development of what may be the next leap forward.

"We're going to make sure that one day we'll build computers that do not have the limits of Moore's law," said Katzgraber. "With that, we'll be able to do things that right now we're dreaming about."

May 27, 2011



Researchers at The University of Texas at Austin simulate graphene transistors for tomorrow’s electronic devices

To reach Dr. Bhagawan Sahu’s offices at the Microelectronic Research Center in Austin, Texas, one passes glass-faced clean rooms filled with wafer processors and stainless-steel cauldrons. Scientists in white jumpsuits peer through electron microscopes at novel materials, hoping to find ways to improve the electrical characteristics of tomorrow’s technologies.

In contrast, Dr. Sahu’s office consists of a pair of cubicles behind a heavy door. A few of them together in the hallway make up the offices of the Southwest Academy of Nanoelectronics, or SWAN, a research center exploring next-generation nanotransistors and other future applications.

SWAN is one of four nanoelectronics centers funded through the Nanoelectronics Research Initiative (NRI), a program of the Semiconductor Research Corporation (SRC), the world’s leading technology research consortium. SRC is comprised of global semiconductor companies with a vested interest in safeguarding and going beyond Moore’s law. The NRI member companies — Texas Instruments, Intel, Global Foundries, IBM and Micron — partner with the National Institute of Standards and Technology and the National Science Foundation to fund research projects throughout the country.

The Frontiers of Future Nanotransistors

Silicon has long been the workhorse of our digital world, but as transistors made of the material shrink to the nanoscale, they cease to improve at the same rate. This is due to excessive power consumption in the device and the consequent degradation of its performance.

Schematic of the four-terminal BiSFET device from the SWAN center at UT Austin. Two independent graphene layers are connected on the left and right by metal contacts for injection of electrons (e) and holes (h) for charge transport. This device has the potential for ultra-low power computing and is one of the devices industry is hoping to launch in 2020, if it can be realized. [Image courtesy: Banerjee et al., Electron Device Letters, 30, 158 (2009)]

"The scaling of silicon transistors has driven the economy around the world for the past half century," said Dr. Jeff Welser, director of the Nanoelectronics Research Initiative at the SRC. "The U.S. is the leader in microelectronics, and to maintain that leadership and to continue to drive the economy, we need to find a way to keep the device scaling going."

One of the solutions being pursued to continue to improve performance is the adoption of new device architectures or new materials. Dr. Sahu, a research physicist at The University of Texas at Austin, is part of a nationwide effort involving hundreds of scientists and engineers at universities, research centers, and technology companies. Their goal: to find new nanoscale materials and effects that can be used to replace silicon transistors by the year 2020.

Today's smallest semiconductor transistors are about 32 nanometers (nm) long. Dr. Sahu and the SWAN team aim to make 10 nm transistors, with a thickness of less than one nanometer, using graphene. Since it was discovered in the mid-2000s, graphene has been lauded as the savior of the semiconductor industry. In 2010, Andre Geim and Konstantin Novoselov, of the University of Manchester, UK, were awarded the Nobel Prize in Physics “for groundbreaking experiments regarding the two-dimensional material graphene.”

Made up of a single layer of graphite, graphene is the thinnest material in the world and possesses electron mobilities (a measure of how fast electrons in a material can move in response to external voltages) higher than silicon’s. These characteristics have generated tremendous interest from the semiconductor industry. However, as scientists learned more about graphene and showed that it could serve as a transistor material, the initial excitement gave way to a greater appreciation of the design and fabrication challenges ahead.
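Graphene’s celebrated band structure can be seen even in a textbook model. The sketch below, a minimal nearest-neighbor tight-binding calculation in Python (not one of the SWAN center’s atomistic codes), computes the two pi-bands and shows the linear, cone-shaped dispersion near the K point that underlies graphene’s unusually fast carriers; the hopping energy and bond length are standard literature values, not parameters taken from the article.

```python
"""Minimal nearest-neighbor tight-binding sketch of graphene's pi-bands.

Illustrative only: textbook parameters, not the SWAN center's production codes.
"""
import numpy as np

t = 2.7          # nearest-neighbor hopping energy, eV (typical literature value)
a_cc = 1.42e-10  # carbon-carbon bond length, meters

# Vectors from a carbon atom to its three nearest neighbors
deltas = a_cc * np.array([[1.0, 0.0],
                          [-0.5,  np.sqrt(3) / 2],
                          [-0.5, -np.sqrt(3) / 2]])

def pi_bands(kx, ky):
    """Energies (eV) of the valence and conduction pi-bands at wavevector (kx, ky) in 1/m."""
    f = np.sum(np.exp(1j * (deltas @ np.array([kx, ky]))))
    return -t * abs(f), t * abs(f)

# The K point of the Brillouin zone, where the two bands touch (the "Dirac point")
K = np.array([2 * np.pi / (3 * a_cc), 2 * np.pi / (3 * np.sqrt(3) * a_cc)])

for frac in (0.0, 0.001, 0.002, 0.004):
    kx, ky = K * (1 + frac)              # step away from K along the Gamma-K direction
    lo, hi = pi_bands(kx, ky)
    dk = frac * np.linalg.norm(K)
    print(f"|k - K| = {dk:9.3e} 1/m   bands: {lo:+.4f} / {hi:+.4f} eV")

# The band energies grow linearly with distance from K, i.e. a Dirac cone. The slope gives
# a Fermi velocity of roughly 1e6 m/s, the origin of graphene's very high carrier mobility.
hbar, eV = 1.0545718e-34, 1.602176634e-19
print(f"Estimated Fermi velocity: {1.5 * t * eV * a_cc / hbar:.2e} m/s")
```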

After five years of dedicated study, the SWAN center’s novel, graphene-based design was selected by the SRC as one of only a few device ideas to be studied further. Each of these design possibilities provides a different challenge at the atomic level.

“Understanding the device components atomistically through simulations has become inevitable in these nanoscale devices,” Dr. Sahu said. “Our efforts at SWAN provide the community with the simulation results, which are obtained by virtual experiments before any real experiments are performed.”


The initial philosophy of the NRI was to “let a thousand flowers bloom,” funding researchers to explore many ideas, including different devices based on graphene and other novel materials. But as the timeline for launching a new processor approaches, the research competition is shifting into overdrive. In January 2011, the project moved into “Phase 1.5.”

“We asked each of the centers to take the devices they thought were most promising and focus all of their efforts on those,” said Dr. Welser.

SWAN selected a graphene-based collective charge system. The device structure, which they call the bilayer pseudospin field-effect transistor (or BiSFET), is based on two layers of graphene separated by a super-thin insulator, air, or a vacuum [see figure, above]. In the device, pseudospin refers to the presence of charge on either the top layer or the bottom layer, much like electron “spin” in quantum mechanics, which can take two possible values.

The physics of the device is based on collective charge motion, which forms a superfluid state at room temperature under certain conditions.

“In this structure, all of the electrons want to be in one layer or the other,” Dr. Welser explained. “By applying a very small voltage — on the order of 25 millivolts — you can get all of the charge to jump from one side to the other. It acts like a switch, which is exactly how we want our transistors to act.”

Prof. Allan MacDonald at The University of Texas at Austin proposed the theory behind the BiSFET device, and Prof. Sanjay K. Banerjee (the director of the SWAN center), Prof. Leonard F. Register, and Dr. Emanuel Tutuc, also from UT-Austin, explored the device design and metrics in depth. The simulations necessary to understand the formation of the superfluid phase in graphene bilayers are now carried out by Prof. Register’s group.

Dr. Sahu’s simulations have been crucial to understanding the internal and external variables that can affect the device’s performance.

“Atomistic simulations are necessary to understand the nanoscale effects arising from metal-graphene and metal-dielectric contacts,” Dr. Sahu said.

Dr. Bhagawan Sahu, research scientist at the Southwest Academy of Nanoelectronics (SWAN).

The collective motion of the charge carriers in BiSFET devices has an advantage over current silicon systems: more charges can be collected using less voltage, increasing the performance and decreasing the overall power consumption of the device. If the SWAN researchers can overcome the challenges involved in fabricating and demonstrating BiSFET devices, this particular transistor may be the game changer that the semiconductor industry is betting on.
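To see why switching at roughly 25 millivolts is so attractive, consider the Boltzmann limit: a conventional transistor’s current can change by at most about one decade per 60 mV of gate swing at room temperature, while the BiSFET’s collective, many-body switching is not bound by that limit. The back-of-the-envelope Python sketch below is an illustration of that contrast, not a model of the actual device.

```python
"""Back-of-the-envelope: what a 25 mV swing buys a Boltzmann-limited switch.

Illustrative numbers only; this is not a model of the actual BiSFET device.
"""
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19    # elementary charge, C
T = 300.0              # room temperature, K
V = 0.025              # 25 mV, the voltage scale quoted for BiSFET switching

thermal_voltage = k_B * T / q                        # ~26 mV
swing = 1000 * thermal_voltage * math.log(10)        # >= ~60 mV per decade of current
boltzmann_ratio = math.exp(q * V / (k_B * T))        # best-case on/off ratio at 25 mV

print(f"Thermal voltage kT/q at 300 K:         {1000 * thermal_voltage:.1f} mV")
print(f"Boltzmann limit on subthreshold swing: {swing:.1f} mV/decade")
print(f"Best-case on/off ratio at {1000 * V:.0f} mV:       {boltzmann_ratio:.1f}x")

# A useful logic switch needs orders of magnitude of on/off contrast, so a conventional
# transistor cannot operate usefully at 25 mV. A collective (superfluid-like) switching
# mechanism that is not Boltzmann-limited is what makes the BiSFET proposal interesting.
```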

The Next Phase of Discovery

In 2013, the SRC hopes that one or two nanotransistor designs will emerge as promising enough to justify expanded work on proof-of-concept demonstrations. This will require extensive research, initially in the university centers and eventually in industry labs. Even if all goes well, it will be challenging to introduce these devices into products by 2020, since it often takes 10 years or more for a new technology to move from initial discovery to commercial implementation.

If the devices use very different materials or structures, this will be no small endeavor. Converting fabrication facilities from silicon to graphene, for example, is expected to cost billions of U.S. dollars. That is, if it is even possible to produce graphene in large enough quantities to realize carbon-age electronics. (Another SWAN member, Prof. Rodney S. Ruoff of The University of Texas at Austin, is working with Texas Instruments’ SWAN assignee, Dr. Luigi Colombo, to grow large-area graphene films on metal substrates by chemical vapor deposition, which is critical to the success of the center’s BiSFET device.)

Over the course of the last four years, Dr. Sahu has developed much of the underlying knowledge of graphene behavior at the nanoscale through numerical simulations on the Ranger supercomputer at the Texas Advanced Computing Center (TACC).

Using one of the largest, most advanced supercomputers available to the scientific community has allowed Dr. Sahu to investigate single-layer, bilayer, and multilayer forms of graphene. It has also let him experiment, virtually, with different widths, lengths, layer orientations, layer stackings and external voltages for graphene ribbons and flakes, to see how these variables influence electronic properties including the electron band gap, magnetism and other related factors.

“The simulations are playing a major role in elucidating the interplay of the structure and the electronic properties of graphene,” Dr. Sahu said. “We’re building component by component, so we have an integrated view of what each part does and how it affects the whole device.”

The flurry of research into graphene has led to other niche applications that may have equally wide-reaching effects. Graphene is believed to be a potential material for memory systems, and for clean energy technologies such as lithium-ion batteries and solar photovoltaic cells. The common denominator in all of these applications is a need for smaller, faster, more energy-efficient devices that make use of novel materials.

“Energy is one of the pressing problems for society, and a lot of energy is consumed in present digital devices,” said Dr. Sahu. “If we can design a device that uses a billion times less power than a silicon transistor, as BiSFET seems to promise, we can build on that kind of technology for the next few generations after 2020.”

Graphene is an atomic-scale honeycomb lattice made of carbon atoms.

May 18, 2011



A Scatter-Free Surface

Researchers perform simulations of “topological insulators” as another route to low-power computing

Graphene’s “heavy” cousins are also moving into the spotlight. Known as three-dimensional topological insulators, these new, atomically weighty materials are composed of combinations of bismuth, selenium and tellurium, and, like graphene, can be peeled off using Scotch tape.

Unlike graphene, however, these materials appear to have an intriguing ability to allow electrons to move along their surface without scattering. Scattering is the bane of transistor designers. When electrons don’t go where they’re supposed to go, information can’t be processed properly or efficiently.

The scattering-free properties arise from the symmetry of the electronic states, which is guaranteed by a particular type of quantum mechanical interaction and by the crystal structure of the topological insulators. The electronic states behave more or less like those in graphene, making these insulators promising candidates in the Semiconductor Research Corporation's device search. However, other factors, such as the presence of magnetic impurities, might make them less suitable, since these can disturb the very symmetries that protect the electronic states from scattering in the first place.
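These graphene-like surface states are often described with a simple two-band effective Hamiltonian, H(k) = ħ v_F (k_y σ_x − k_x σ_y), whose energies ±ħ v_F |k| are linear in momentum. The short Python sketch below diagonalizes that model; the Fermi velocity is an illustrative, literature-scale value rather than a number reported by the SWAN simulations.

```python
"""Two-band effective model of a topological-insulator surface state (e.g., Bi2Se3).

H(k) = hbar * v_F * (ky * sigma_x - kx * sigma_y); v_F is an illustrative value,
not a result taken from the simulations described in the article.
"""
import numpy as np

hbar = 1.0545718e-34   # J*s
eV = 1.602176634e-19   # J per electron-volt
v_F = 5.0e5            # m/s, order of magnitude typical for Bi2Se3 surface states

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def surface_bands(kx, ky):
    """Eigenvalues (eV) of the surface Dirac Hamiltonian at momentum (kx, ky) in 1/m."""
    H = hbar * v_F * (ky * sigma_x - kx * sigma_y)
    return np.linalg.eigvalsh(H) / eV    # ascending order: lower band, upper band

for k in (0.0, 2e8, 4e8, 8e8):           # momenta measured from the Gamma point, 1/m
    lower, upper = surface_bands(k, 0.0)
    print(f"|k| = {k:.1e} 1/m  ->  E = {lower:+.3f} / {upper:+.3f} eV")

# The two branches touch at Gamma and disperse linearly with |k|: a gapless, graphene-like
# spectrum living on the surface of a bulk insulator, protected against back-scattering.
```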

Dr. Sahu and his team have performed interesting simulations of these material systems as another route to low-power computing.

"If you're doing simulations on new materials with new physics, you need to do them on the atomistic level where you literally build the material up atom by atom with all of the base physics that go along with that to see what happens," Dr. Welser said, "and that can only be done at an HPC center like TACC."

The behavior of surface electron states in momentum space for a topological insulator Bi2Se3 film of thickness 6 nm. The linear dispersion at zero energy at the Gamma point is clearly seen, much like the dispersion in graphene.

A visualization of the gapless surface states in the three-dimensional topological insulator Bi1-xSbx. [Courtesy of Yazdani Lab, Princeton University.]

May 18, 2011



The ‘Gold’ Standard - UT’s Freshman Research Initiative

UT professor advocates for early undergraduate research with advanced computing technologies

Platinum-based fuel cells offer an attractive alternative to internal combustion engines as a future means of utilizing chemical energy. There are, however, shortcomings of such technologies that must be resolved if they are to become practical and widespread. Some of these difficulties include the short lifetime of electrodes in acidic environments, an energy loss due to slow oxygen reduction, and the high cost and limited supply of platinum itself, according to Graeme Henkelman, professor of chemistry at The University of Texas at Austin.

The interesting case of gold suggests a possible alternative. When gold forms clusters on the scale of a few nanometers, it becomes a very good catalyst for the oxidation of carbon monoxide. But even small amounts of gold are expensive, and gold is not as effective for oxygen reduction, so students in the Freshman Research Initiative (FRI) are seeking more efficient alternatives by looking at other, cheaper metals such as silver, nickel and copper.

FRI is a program in the College of Natural Sciences at The University of Texas that allows freshmen to take part in cutting-edge research in chemistry, biochemistry, nanotechnology, molecular biology, physics, astronomy and computer sciences. Students who enjoy and make progress in their research have the opportunity to continue in the summer and throughout their undergraduate program.

Kelly Tran and Matt Welborn both participated in the program in their freshman year and now investigate nanoparticles under Henkelman’s guidance. They are among an elite group of undergraduates in the U.S. who have access to the supercomputers of the National Science Foundation’s (NSF) TeraGrid for research.

When one thinks of chemical reactions, one usually imagines beakers and test tubes. However, Henkelman's researchers use computational chemistry, where the reactions occur "virtually" on the silicon chips of powerful computers. The Ranger and Lonestar supercomputers at the Texas Advanced Computing Center have been critical in helping to find potential candidates to replace platinum. Using these massive computing systems, calculations of nanoparticle properties can be completed much faster, with many jobs processed at the same time.



The Henkelman Research Group is trying to understand the relationships between the structure of nanoparticles and their catalytic activity in order to design new materials tailored to catalyze a particular reaction. The discovery of catalytic gold is inspiring because it suggests that there could be new classes of nanoscale catalysts. The group is focusing on the development of non-platinum metal nanoparticles designed to catalyze the oxygen reduction reaction (ORR). Cheap ORR catalysts would be instrumental for next-generation fuel cells and the development of efficient alternative energy sources.

Kelly Tran looking at the binding of p-nitrophenol to a copper/platinum slab.

The issue, however, is somewhat of a Goldilocks dilemma. The search for a cheaper, yet still effective, catalyst depends on the binding energy of molecules, such as oxygen, to the nanoparticles. The binding energy can be neither too weak nor too strong: in both limits, the reaction is slowed by large energy barriers in the reaction path.
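This trade-off is the Sabatier principle in action: the overall rate is limited by whichever step is harder, binding the reactant or releasing the product. The toy Python sketch below illustrates the resulting “volcano” shape with made-up barrier parameters; it is not the Henkelman group’s actual DFT workflow.

```python
"""Toy Sabatier 'volcano': why a catalyst's binding energy must be 'just right'.

Purely illustrative barrier parameters; not the Henkelman group's DFT workflow.
"""
import math

kT = 0.0257  # eV at room temperature

def relative_rate(binding_energy, optimum=-1.0, slope=0.5):
    """Crude two-branch model: weak binding limits adsorption, strong binding limits release.

    binding_energy (eV) is negative; more negative means the molecule sticks more strongly.
    The assumed optimum and slope are hypothetical numbers chosen for illustration.
    """
    too_weak = slope * max(0.0, binding_energy - optimum)    # hard to adsorb the reactant
    too_strong = slope * max(0.0, optimum - binding_energy)  # hard to release the product
    limiting_barrier = max(too_weak, too_strong)
    return math.exp(-limiting_barrier / kT)

for e_bind in (-2.0, -1.5, -1.0, -0.5, 0.0):
    print(f"O binding energy {e_bind:+.1f} eV  ->  relative rate {relative_rate(e_bind):.2e}")

# The rate peaks at the assumed optimum and collapses on either side: the 'volcano' curve
# behind the search for alloys that bind oxygen neither too weakly nor too strongly.
```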

Tran, a senior chemistry major and undergraduate research assistant for FRI, is currently exploring methods of separating ethylene from ethane and propylene from propane through nanocomposite membranes. Working with Zachary Pozun, a graduate research assistant in the Henkelman Group, Tran has also been investigating how ethylene binds to different metals across a range of particle sizes and compositions.

“The computer simulations allow us tight control of the factors, so we can ask questions like, ‘How do the properties of a metal on the surface of a core/shell particle change by replacing the core?’” said Pozun.

FRI researchers in the computational nanoparticles stream are able to work anywhere as long as there is internet access. “What makes this stream special is that we can work on campus or at home as long as we’re able to log onto the supercomputer,” said Tran. “More than 75 percent of our research is done remotely on Ranger and Lonestar.”

Running calculations on Ranger and Lonestar introduces another valuable aspect of FRI. During the three-semester program, students are taught how to conduct independent research while receiving mentorship from graduate students and professors. After learning the computational skills needed, Tran began researching her own project. Currently, she is working on understanding nitrophenol reduction at nanoparticles and mentoring freshmen in a lower-division lab.

Welborn joined the computational nanotechnology stream in his sophomore year, an experience that eventually led him to change his postgraduate career goals from pre-med to scientific research. Welborn, a dean's honor student, had no background in research, but he received encouragement from the program and has made impactful discoveries. He has been working on combining experimental X-ray diffraction data with information about the electronic structure of platinum nanoparticles. Knowing about the disorder at the surface, Welborn says, is critical for understanding reactions that are key to the success of hydrogen fuel cells.

(Left to right) Kelly Tran, Phani Dathar, Venkatesan Thimmekondu, and Matt Welborn in the Henkelman Group office.

Students in the FRI program have achieved national recognition. The University of Texas at Austin was represented at Undergraduate Research Day at the Capitol in February by FRI alumnus Matt Welborn, who presented his work with Graeme Henkelman of the Computational Nanoparticles stream. Welborn was also recently awarded a prestigious NSF graduate fellowship, which will support his work in this area when he goes to graduate school next year. Pozun and Tran, along with Anna Shi and Ryan Smith, also students in FRI, published a paper titled “Why Silver is Effective for Olefin/Paraffin Separations.” The group presented their findings at the American Chemical Society in 2010.

“The FRI program helps young students experience what it is to do research,” Henkelman said. “And if they like it, it gives them the resources they need to excel.”

May 4, 2011

Mourin Nizam
Science and Technology Writer
Texas Advanced Computing Center



Enabling Data-Intensive Research via Cloud Computing

Longhorn Innovation Fund for Technology (LIFT) supports research into an emerging computing environment

Information overload may have met its match. With “Enabling Data-Intensive Research and Education at The University of Texas at Austin via Cloud Computing,” a team of researchers and educators is not only examining the potential of processing vast quantities of data quickly, but also discovering how the use of large datasets can lead to all kinds of interesting questions.

Background

The increasing ability to generate vast quantities of data presents technical challenges for both researchers and educators as data storage and transfer approaches critical mass and the exchange of large data sets puts established information practices to the test. Increasingly, institutions of higher education have to plan strategically for this fundamental shift in how scientific data analysis can be done. The paradigm of “cloud computing” provides an environment that is up to the task of doing data-intensive computation for research and educational purposes. In this context, cloud computing is a distributed computing paradigm that enables large datasets to be sliced and assigned to available computer nodes where the data can be processed locally, avoiding network-transfer delays. This makes it possible for researchers to query tables with trillions of rows of information or search across all the servers in a data center.

With its combination of internationally recognized research faculty and the world-class computing systems of the Texas Advanced Computing Center (TACC), The University of Texas at Austin is uniquely positioned to become a pioneer in this developing field of data-intensive research and education. Jason Baldridge, Assistant Professor, Department of Linguistics, College of Liberal Arts; Matthew Lease, Assistant Professor, School of Information; and Weijia Xu, Research Associate, Texas Advanced Computing Center, are part of an interdisciplinary team focused on enabling innovative solutions for existing research projects and adding new data-intensive computing content and courses to the University’s academic and professional development offerings.

Geographical association of words from Memoirs of the Union’s Three Great Civil War Generals by David Widger, from Dr. Jason Baldridge’s TextGrounder project.


Progress

Funding from the Longhorn Innovation Fund for Technology (LIFT) has made it possible for the project, “Enabling Data-Intensive Research and Education at UT Austin via Cloud Computing,” to purchase and install 304 hard drives with a total storage capacity of 112 terabytes on TACC’s Longhorn computer cluster and begin “Hadooping” at the University. Apache Hadoop is open-source software used for reliable, scalable, distributed computing. Hadoop is an implementation of the MapReduce programming paradigm originally developed by Google. It enables the creation of applications capable of processing huge quantities of data on large clusters of computing nodes, and of doing it quickly.
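For readers new to the MapReduce model that Hadoop implements, the sketch below shows the classic word-count pattern as a pair of Hadoop Streaming scripts written in Python: the mapper emits (word, 1) pairs, the framework sorts them by key, and the reducer sums the counts. The file paths and the example hadoop invocation in the comments are placeholders, and the production setup on Longhorn would differ.

```python
# Classic MapReduce word count as Hadoop Streaming scripts (illustrative; in practice the
# mapper and reducer live in two files and are submitted with something like:
#   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
#       -input /corpus/books -output /results/wordcount
# where the jar name and paths are placeholders).
import sys

def mapper():
    """Emit a (word, 1) pair for every word on every line of the input split."""
    for line in sys.stdin:
        for word in line.strip().lower().split():
            print(f"{word}\t1")

def reducer():
    """Sum counts; Hadoop delivers mapper output sorted by key, so equal words are adjacent."""
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    reducer() if "--reduce" in sys.argv else mapper()
```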

Computational Linguistics

In collaboration with Lease and Xu, Baldridge is using Hadoop for his research in computational linguistics to analyze large datasets of texts for geographical and temporal references. Expanding on work done under an award from The New York Community Trust to develop software called TextGrounder, Baldridge is conducting geo-referencing analysis of texts to ground language to place and time. Examples include geolocating multilingual Wikipedia pages and Civil War era texts, as well as working with the UT Libraries' Human Rights Documentation Initiative to analyze testimonies from the Rwandan genocide (in English, French, and Kinyarwanda). Baldridge observes: "Hadoop lets you ask interesting questions based on large data sets. It allows the text to speak in new ways." The information gathered is used to visualize these texts using geobrowsers like Google Earth; current visualizations available on the TextGrounder wiki show how the system connects language to time and space.
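To give a flavor of what geo-referencing involves, the Python sketch below scans a text for place names using a tiny, hypothetical gazetteer and tallies coordinates per place. TextGrounder itself resolves ambiguous toponyms with statistical models over much larger gazetteers, so this is only a simplified illustration.

```python
"""Toy geo-referencing pass: count gazetteer place names in a text.

The tiny gazetteer below is hypothetical and stands in for the much larger resources
(and the statistical disambiguation) that TextGrounder actually uses.
"""
from collections import Counter

GAZETTEER = {                # place name -> (latitude, longitude), illustrative entries
    "richmond":   (37.54, -77.44),
    "vicksburg":  (32.35, -90.88),
    "gettysburg": (39.83, -77.23),
    "atlanta":    (33.75, -84.39),
}

def georeference(text):
    """Return a Counter of gazetteer places mentioned in the text."""
    tokens = (t.strip(".,;:!?\"'()").lower() for t in text.split())
    return Counter(t for t in tokens if t in GAZETTEER)

if __name__ == "__main__":
    sample = "The army marched from Atlanta toward Richmond; Vicksburg had already fallen."
    for place, n in georeference(sample).most_common():
        lat, lon = GAZETTEER[place]
        print(f"{place:>10}  mentioned {n}x  at ({lat:.2f}, {lon:.2f})")
    # Aggregating such (place, count) pairs over an entire corpus is exactly the kind of
    # embarrassingly parallel tally that MapReduce handles well.
```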

Hadoop on the Longhorn cluster has brought immediate benefits to research, teaching and learning around campus. Here, an illustration showing the Longhorn cluster and its file system.

Computational Journalism

Lease is developing new methods for finding information in massive datasets by applying large-scale distributed computation to perform media analytics. His work in this new area of computational journalism focuses on using rich computational tools to analyze news articles, blogs and user comments and find ways to support journalists in coping with the massive amount of online information. By decomposing huge volumes of information into text “snippets,” Lease can track how a single idea or concept “flows” from its originating source across multiple information providers, redistributors, and consumers. By following the way these text snippets evolve, it is possible to identify the creation and dissemination of specific ideas and information. His research is part of a project called “REACTION” (Retrieval, Extraction and Aggregation Computing Technology for Integrating and Organizing News), a collaborative effort between the UT Austin Portugal program and several Portuguese news media companies and universities.
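One simple way to detect such “flowing” snippets is to compare documents by their overlapping word n-grams, or shingles: a high Jaccard overlap suggests one text quotes or closely paraphrases another. The Python sketch below is a generic illustration of that idea, not the REACTION project’s actual pipeline.

```python
"""Trace reused text 'snippets' across documents with word-shingle overlap.

A generic sketch of near-duplicate detection; not the REACTION project's pipeline.
"""
def shingles(text, n=4):
    """Return the set of n-word shingles in a text (lowercased, punctuation-stripped)."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    source = "The mayor announced a new transit plan that will add forty electric buses by 2013."
    repost = "A new transit plan that will add forty electric buses by 2013 was announced today."
    unrelated = "Researchers simulated graphene transistors on the Ranger supercomputer."

    s = shingles(source)
    for name, doc in [("repost", repost), ("unrelated", unrelated)]:
        print(f"overlap(source, {name}) = {jaccard(s, shingles(doc)):.2f}")
    # The reposted article shares many shingles with the source while the unrelated story
    # shares none; scaled up with MapReduce, the same comparison tracks how a story spreads.
```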

Data Mining

Xu is using Hadoop to investigate data mining methods for large-scale data visualizations. The goal of his work is to advance data-driven knowledge discovery by enabling interactive visualizations of terascale datasets. Data mining in the field of astronomy is a good example: as more advanced simulations bring in ever-increasing amounts of data, large-scale computational analysis can filter the data and let the researcher work with a more manageable amount of information. Xu is also using Hadoop as part of his work on a cooperative research agreement between TACC and the National Center for Advanced Systems and Technologies (NCAST) at the National Archives and Records Administration. This project is designed to assist archivists in their work by developing methods for receiving and analyzing large amounts of electronic records.
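The filtering pattern Xu describes can be pictured with a small example: scan a huge catalog, keep only the records of interest, and reduce them to a compact summary that can actually be visualized. The Python sketch below assumes a hypothetical one-value-per-line input format and is only an illustration of the pattern, not Dr. Xu’s tools.

```python
"""Filter-then-summarize sketch for a large simulation catalog.

Illustrative of the data-mining pattern described above, not Dr. Xu's actual tools;
the input format (one brightness value per line on stdin) is hypothetical.
"""
import sys
from collections import Counter

def bin_bright_objects(lines, threshold=20.0, bin_width=0.5):
    """Keep only objects brighter than `threshold` and histogram them into coarse bins."""
    hist = Counter()
    for line in lines:
        try:
            magnitude = float(line.split()[0])
        except (ValueError, IndexError):
            continue                     # skip malformed records instead of failing
        if magnitude < threshold:        # in astronomy, smaller magnitude = brighter
            hist[round(magnitude / bin_width) * bin_width] += 1
    return hist

if __name__ == "__main__":
    # Each worker can run this over its own slice of the catalog; merging the per-slice
    # Counters (a reduce step) leaves a summary small enough to visualize interactively.
    for bin_center, count in sorted(bin_bright_objects(sys.stdin).items()):
        print(f"{bin_center:6.1f}  {count}")
```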

Weijia Xu (TACC), Matthew Lease (iSchool), and Jason Baldridge (UTCL) are driving the testing and documentation for conducting research using the Hadoop cluster.



An essential part of the project team’s vision is to promote education and training on the latest developments in data-intensive computing across campus. Lease and Baldridge are currently developing a class they will co-teach in the 2011 fall semester: LIN386M/INF385T, Data-Intensive Text Processing with MapReduce, will give students with graduate standing the opportunity to learn about Hadoop and gain valuable experience in data-intensive computing. Xu has already added a lecture about using Hadoop on the Longhorn cluster to his course Visualization and Data Analysis for Scientists and Engineers (SSC374E/SSC394E). The team has also created a wiki and is documenting their initial experiences with Hadoop. They are hopeful that other UT researchers will contribute to the wiki and discussion forums as a means of building the Hadoop community on campus. Xu is also adding a tutorial on using Hadoop on the Longhorn cluster to the regular schedule of TACC training workshops. Additional research talks, course offerings and online documentation will help grow campus-wide cloud computing expertise for faculty, students and staff at the University.

Benefits

The introduction of Hadoop on the Longhorn cluster has brought immediate benefits to research, teaching and learning around campus. As initial adopters, Baldridge, Lease and Xu are driving the testing and documentation for conducting research using the Hadoop cluster. Their initial results are informing the work of a broad cross section of committed adopters across campus and sparking interest and inquiry from the larger external research community. The addition of new content and courses related to data-intensive computing provides students with the opportunity to acquire cutting-edge skills. This not only gives them a competitive edge in the job market but also positions them to make significant research contributions at UT Austin and beyond. The expectation is that the current investment in cloud computing education will provide critical professional training that leads to scientific discovery based on massive data analysis.

Next Steps

Now that the Hadoop cluster has been implemented and the software tested, the next steps in the project focus on promoting large-scale data analysis research projects across campus. Developing education and training that will help faculty, students and staff acquire skills in data-intensive computing is ongoing. The investment in hardware will continue to support a development environment for data-intensive computing after the project is over, and TACC has agreed to update the Hadoop software and the documentation on how to use it.

Before the end of the funded period, the team plans to compile usage statistics on data-intensive computations enabled by the project. To date, six projects have been launched; this number is expected to grow as the project moves out of the setup and testing phase and users discover that they can run large data analysis tasks using Hadoop through a remote login to the Longhorn cluster.

April 27, 2011

Betsy Busby
Communications Coordinator
Information Technology Services
