
Cover Feature

Innovation and Obstacles: The Future of Computing

In this excerpt from “Visions for the Future of the Fields,” a panel discussion held on the 10th anniversary of the US Computer Science and Telecommunications Board, experts identify critical issues for various aspects of computing. In the accompanying sidebars, some of the same experts elaborate on points in the panel discussion in mini essays: David Clark, CSTB chairman, looks at the changes needed in computing science research; Mary Shaw of Carnegie Mellon University examines challenges for software system designers; and Robert Lucky of Bellcore looks at IP dial tone, a new infrastructure for the Internet. Donald Greenberg of Cornell University rounds out the essays with an outlook on computer graphics. Finally, in an interview with William Wulf, president of the US National Academy of Engineering, Computer explores the roots of innovation and the broader societal aspects that will ultimately drive innovation in the near term.

In this multidisciplinary glimpse forward, some of this decade’s key players offer opinions on a range of topics—from what has driven progress, to where innovation will come from, to obstacles we have yet to overcome.

Moderator: David D. Clark, Massachusetts Institute of Technology; chair of the Computer Science and Telecommunications Board

Panelists: Edward A. Feigenbaum, US Air Force; Juris Hartmanis, Cornell University; Robert W. Lucky, Bellcore; Robert M. Metcalfe, International Data Group; Raj Reddy, Carnegie Mellon University; Mary Shaw, Carnegie Mellon University

Excerpts of “Visions for the Future of the Fields,” Defining a Decade: Envisioning CSTB’s Second 10 Years, 1997, are reprinted with permission by the National Academy of Sciences. Courtesy of the National Academy Press, Washington, DC.

RECKLESS PACE OF INNOVATION

Clark: We have heard the phrase “the reckless pace of innovation in the field.” I have a feeling our field has just left behind the debris of half-understood ideas in an attempt to plow into the future. Do you think we are going to grow up? Ten years from now, will we still say we have been driven by the reckless pace of innovation? Or will we, in fact, have been able to breathe long enough to codify what we have actually understood so far?

Reddy: We have absolutely no control over the pace of innovation. It will happen whether we like it or not. It is just a question of how fast we can run with it.

Lucky: At Bell Labs, we used to talk about research in terms of 10 years. Now you can hardly see two weeks ahead. The question of what long-term research is all about remains unanswered when you cannot see what is out there to do research on. Nicholas Negroponte was saying recently that, when he started the Media Lab at the Massachusetts Institute of Technology, his competition came from places like Bell Labs, Stanford University, and the University of California at Berkeley. Now he says his competition comes from 16-year-old kids. I see researchers working on good academic problems, and then two weeks later some young kids in a small community are out there doing it. There must still be good academic fields where you can work on long-term problems in the future, but the future is coming at us so fast that I just sort of look in the rear-view mirror.

Shaw: I think innovation will keep moving; at least I hope so, because if it were not moving this fast, we would all be really good IBM 650 programmers by now. What will keep it moving is the demand from outside. We have just begun to get over the hump where people who are not in the computing priesthood, and who have not invested many years in figuring out how to make computers do things, can actually make computers do things. As that becomes easier—it is not easy yet—more and more people will be demanding services tuned to their own needs. They will generate the demand that will keep the field growing.

Hartmanis: We can project reasonably well what silicon technology can yield during the next 20 years; the growth in computing power will follow the established pattern. The fascinating question is, what is the next technology to accelerate this rate and to provide the growth during the next century? Is it quantum computing? Could it really add additional orders of magnitude? What technologies, if any, will complement and/or replace the predictable silicon technology?

Clark: Are growth and demand the same as innovation? We could turn into a transient decade of interdisciplinary something, but does that actually mean there is any innovation in our field?

Shaw: We have had some innovation, but it has not been our own doing. Things like spreadsheets and word processors, for example, have started to open the door to people who are not highly trained computing professionals, who have come at the academic community from the outside.




I remember when nobody would listen if you wanted to talk about text editors in an academic setting. Most recently, there has been the upsurge of the World Wide Web. It is true that Mosaic was developed in a university, but not exactly in the computer science department. These are genuine innovations, not just nickel-and-dime things.

Feigenbaum: I think the future is best seen not in terms of changing hardware or increased numbers of MIPS (or GIPS), but rather in terms of the software revolution. We are now living in a software-first world. I think the revolution will be in software building that is now done painstakingly in a craftlike way by the major companies producing packaged software. They create a suite—a cooperating set of applications—that takes the coordinated effort of a large team. What we need to do now in computer science and engineering is to invent a way for everyone to do this at his or her desktop; we need to enable people to “glue” packaged software together so that the packages work as an integrated system. This will be a very significant revolution.

I think the other revolution will be the one Leonard Kleinrock called didactic or intelligent agents. The agent allows you to express what you want to accomplish, providing the agent with enough knowledge about your environment and your context for it to reason exactly how to accomplish it.
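Feigenbaum’s “glue” idea can be illustrated with a minimal sketch. The two packaged tools named below (spreadsheet-export, chart-tool) are hypothetical stand-ins; the point is that a few lines at the desktop can chain independently built packages into one pipeline, with plain text as the shared interface.

```python
import subprocess

# Hypothetical packaged applications: a spreadsheet exporter and a charting
# tool, each built by a different vendor, glued together at the desktop.
# Neither tool knows about the other; plain text is the shared interface.
sales = subprocess.run(
    ["spreadsheet-export", "--sheet", "q4-sales", "--format", "csv"],
    capture_output=True, text=True, check=True,
)
subprocess.run(
    ["chart-tool", "--type", "bar", "--out", "q4-sales.png"],
    input=sales.stdout, text=True, check=True,
)
```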

Outlook on Computer Science

Ten years from now, we may conclude that this was the coming of age for computer science. Certainly, it is going through a transition, which, like many transitions, may appear a little painful to some living through it.

It’s a lot like the fencing of the American West. Those who worked in the early decades of CS had wide-open spaces for original ideas to flourish. That space is now populated with mature innovations, successful products, and large corporations, which I’m sure seems confining to some. The current context of CS is shaped by past success and chronic trouble spots.

Past success makes blue-sky innovation seem more daunting and risky. It can trap unwary researchers in the present and keep them from exploring the future. What innovative operating system could have any impact, given the market presence of Microsoft? What novel idea for global networking could displace the Internet? What new paradigm of computing could compete with the PC?

At the same time, some of the intellectual problems CS has struggled with, such as building large and trustworthy systems, seem to have a timeless quality: The complaints about building systems seem the same as a decade ago. But there has been progress—in the last decade networks have interconnected the world and made a new range of large systems possible. Perhaps the real issue is that aspirations to build bigger and more complex systems are potentially unbounded, kept in check only by the limits of what can actually be built. So system builders live in a constant state of frustration, always hitting (and complaining about) the same limitations, even though the systems they can now build are indeed bigger and more complex than a decade ago.

Past successes change the imperative for future research. Whereas the past was defined by articulating new objectives (Let’s build an Internet), past achievements now demand incremental innovation (Let’s add support for audio streams to the Internet). That sort of work, while less broad in scope, is critical to the progress of the field. Part of coming of age is learning to equally respect innovation and enhancement.

The locus of the hard CS problems has also shifted. Look at the systems area. Although there’s still room for innovation in traditional areas such as language design (consider Java), more and more of the hard problems arise when people try to put computers to use. This implies that if computer scientists are to contribute to important emerging problems, they must increasingly act as anthropologists and go live for a time in the land of the application builders. Computer scientists are now working on diverse problems: ensuring privacy, rendering complex images for realistic games, and understanding human speech. The Web is a wonderful example of an application that has spawned lots of interesting problems, such as the construction of powerful search engines, algorithms for caching and replicating Web pages, the naming of information objects, and secure networks.

Of course, parts of CS view their role less as innovating new artifacts and more as developing the underpinnings of CS as a science. There is much about CS that is not yet understood in any rigorous way, and the rate of innovation turns up new problems to understand faster than the old ones can be solved. For those who build the foundations of the field, there is no immediate fear of running out of problems to work on, even if innovation were to slow down.

Does the coming of age mean that the era of big change is over? I think not. In the next 10 years, user interfaces, the PC, the Internet, and even Windows will have mutated, perhaps almost beyond recognition. What will actually characterize the next decade is the sometimes turbulent interplay between improving the past and overthrowing it at the same time. Anyone who thinks the fun is over has just walked off the field at half time. ❖

— David D. Clark, Chair, Computer Science and Telecommunications Board



SILICON AND FRUSTRATION

Clark: One statement made at the beginning of this decade was that the nineties would be the decade of standards. There is an old joke: The nice thing about standards is that there are so many to pick from. In truth, I think that one of the things that has happened in the nineties is that a few standards happened to win some sort of battle—and not because they are necessarily the best.

Lucky: This is both a tragedy and a great triumph. You can build a better processor than Intel or a better operating system than Microsoft, but it does not matter. It just does not matter.

Clark: How can you hurtle into the future at a reckless pace and simultaneously conclude that it is all over, that it does not matter because you cannot do something better, because it is all frozen in standards?

Metcalfe: There seems to be reckless innovation on almost all fronts except two: software engineering and the telco monopolies.

Clark: Yet if we look at the Web, we have a regrettable set of de facto standards in HTML and HTTP, both of which any technologist would love to hate. When you try to innovate by saying it would be better if URLs were different, the answer is, “Yes, well, there are only 50 million of them outstanding, so go away.” Therefore, I am not sure I believe your statement that there is rapid innovation everywhere except for these two areas.

Lucky: It is possible that if all the dreams of the Java advocates come true, this will permit innovation on top of a standard. It is one way to get at this problem. We do not know how it is going to work out, but at least this would be the theory.

Clark: I actually believe it might be true. A tremendous engine exists that is really driving the field, and that is the rate of at least performance innovation, if not cost reduction, in the silicon industry. I think this engine drove us forward, but I am not sure it’s the only engine. I wonder if [ten years from now] we will say, “Yes, silicon drove us forward,” or will there be other things? Is the Web a creation of silicon innovation?

Shaw: No, it is a creation of frustrated people who did not feel like dealing with ftp and telnet, but still wanted to get to information.

Clark: I think you just said that silicon and frustration are our drivers.

Lucky: Silicon has really made everything possible. This is undeniable, even though we spend most of our time, all of us, working on a different level.

Clark: I once described setting standards on the Internet as being chased by the four elephants of the apocalypse. The faster I ran, the faster they chased me, because the only thing between them and making a billion dollars was the fact that we had not worked this thing out. We cannot outrun them. If it is a hardware area, we can hallucinate something so improbable we just cannot build it today. Then, of course, we cannot build it in the lab, either. We used to try to have hardware that let us live 10 years in the future. Now I am hard-pressed to get a PC on my desk. Yet in the software area, there really is no such thing as a long-term answer. If you can conceive it, somebody can reduce it to practice. So I do not know what it means to be long term anymore.

Feigenbaum: If you look at what individual faculty people do, you find smallish things in a world that seems to demand more team and system activity. There is not much money around to fund anything more than small things, basically to supplement a university professor’s salary and a graduate student or two, and perhaps run them through the summer. Partly this is because of a general lack of money. Partly it is because we have a population explosion problem and all these mouths to feed. All the agencies that were feeding relatively few mouths 20 years ago are now feeding maybe 100 times as many assistant professors and young researchers, so the amounts of money going to each are very small. This means that, except for the occasional brilliant meteor that comes through once in a while, you have relatively small things being done. When they get turned into anything, it is because the individual faculty member or student convinces an industry to spend more money on it. Subsequently, the world thinks it came out of industry.

LOOKING OUTWARD AT REAL PROBLEMS

Anita Borg (audience member): I wanted to talk a bit about where you get innovation and where academics get ideas for problems to work on. This is something I talk about every time I go, as an industry person, to talk to a university. If we keep training students to look inside their heads and become professors, we lose the path of innovation. If we train our students to look at what industry is doing and what customers and people out there using these things cannot do—to not be terrorized by what they can do, but to look at where they are running into walls—our students start appreciating these as the sources of really hard problems. I think this focus is lacking in academia to some extent, and looking outward at real problems gives you focus for research.

Hartmanis: I fully agree. Students should be well aware of what industry is and is not doing, and I believe that many are well informed.

Shaw: Earlier I mentioned three innovations that came from outside the computer science community: spreadsheets, text formatting, and the Web. I think they came about because people outside the community had something they needed to do and were not getting any help doing it.


Outlook on Software System Design

Over the last decade, computers have become nearly ubiquitous, and their users are often people who neither have, nor want, nor (should) need years of special training in computing. Business computers are often in the hands of information users, no longer under exclusive control of a centralized information systems department. Instead of gaining access to computers only through professional intermediaries, vast numbers of people are responsible for their own computing—often with little systematic training or support. This disintermediation—the direct association between users and their software—has created new problems for software system designers.

If all these computers are to be genuinely useful, their owners or handlers must be able to control them effectively: they must understand how to express their problems, they must be able to set up and adapt the computations, and they must have reasonable assurance of the correctness of their results. This must be true across a wide range of problems, spanning business solutions, scientific and engineering calculations, and document and image preparation. Furthermore, owners of personal computers must be able to carry out the tasks formerly delegated to system administrators, such as configuration, upgrade, backup, and recovery.

The means of disintermediation have been available, affordable hardware together with application-targeted software that produces information and computing in a form the end user can understand and control. The software carriers—the “killer apps”—have been spreadsheets, the Web, integrated office suites, and interactive environments such as MUDs (multiuser domains). To a lesser degree, Visual Basic has enabled people with minimal programming experience to create useful software that fits their own needs. These applications have become winners in the marketplace because they put a genuinely useful capability in the hands of people with real problems.

The software system design community should be alarmed to notice that these killer apps have emerged from outside their research world. Worse, the research community has often (at least initially) failed to take these applications seriously. Such applications have been regarded as toys, not worthy of serious attention; they have been faulted for a lack of generality or (imagined) lack of scalability; they have been ignored because they don’t fit the established research mold. But software system design researchers comprise the very community that should be breaking the mold and providing solutions for real-world needs.

Although it’s always risky to predict the market, one place to look for ideas is the relation between the people who use computing and the computing they use. We’ve already seen substantial disintermediation, as more and more people have direct access to their computers and software. At present, their computing is dominated by individual interactive computations, which lets them monitor results as they go. It is more challenging to set up stand-alone processes that run unmonitored. This requires describing policy for an open-ended set of computations, not just manipulating instances. We can see the small beginnings of such independent processes in mail filters, automatic check payments, and the daemons that select articles from news feeds. But what will be required to enable large numbers of users to set up autonomous software agents with larger responsibilities? At what point will the public at large trust the Internet and electronic commerce mechanisms enough to carry out individual transactions? When will consumers be willing to have an autonomous software agent spend money on their behalf?
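The distinction between describing policy and manipulating instances can be made concrete with a small sketch. The rule set and message fields below are hypothetical; the point is that the user states a policy once, and an unmonitored daemon then applies it to an open-ended stream of messages.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str

# A policy is a list of (predicate, action) rules stated once by the user;
# the filter then applies it to any number of future messages unmonitored.
RULES = [
    (lambda m: m.sender.endswith("@news.example.com"), "file:newsfeeds"),
    (lambda m: "urgent" in m.subject.lower(), "flag"),
]

def apply_policy(message: Message) -> str:
    """Return the action for the first matching rule, else the default."""
    for predicate, action in RULES:
        if predicate(message):
            return action
    return "inbox"

if __name__ == "__main__":
    print(apply_policy(Message("editor@news.example.com", "weekly digest")))  # file:newsfeeds
    print(apply_policy(Message("ops@corp.example.com", "URGENT: server down")))  # flag
```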

Another potential change in the relation between people and computing is a fusion between computing and entertainment. This will, of course, require infrastructure development in bandwidth, 3D display, intellectual property protection, and electronic commerce—the usual stuff of software system design research. Beyond that, though, what new capabilities will the consumer need? What will be required to make entertainment both interactive and multiparty? How can individuals become producers as well as consumers of computer-based entertainment?

What does all this mean for software system design research? First, we must recognize the important—and difficult—research problems that these applications carry, including how to

• analyze component interoperability and develop techniques for coping with incompatibility;
• specify and implement event-driven systems that support the dynamic reconfiguration of loosely confederated processes or agents;
• support metainformation that carries type, signature, performance, and other information needed to automate distributed agents;
• manage families of related systems;
• deal with the security issues of electronic commerce;
• design for “gentle-slope systems,” in which the learning time required is commensurate with the application’s sophistication;
• integrate multiparty real-time interaction with other applications (beyond chat rooms, electronic whiteboards, MUDs, and virtual communities); and
• analyze requirements for market segments rather than individual bespoke systems.

Second, we should contribute to developing accurate models of computer use that are simple enough for nonspecialists to understand. Finally, we should increase interdisciplinary work with researchers in human-computer interaction and in application areas. ❖

— Mary Shaw, Carnegie Mellon University


So we will get more leads by looking not only at the problems of computer scientists, but also at the problems of people who do not have the technical expertise to cope with these problems. I do not think the next innovation is going to be an increment along the Web, or an increment on spreadsheets, or an increment on something else. What Anita is asking us to think about is, how are we going to be the originators of the next killer application, rather than waiting for somebody outside to show it to us?

Reddy: If you go back 40 years, it was clear that certain things were going to have an impact on society—for example, communications satellites, predicted by Arthur Clarke; the invention of the computer; and the discovery of the DNA structure. At the same time, none of us had any idea of semiconductor memories or integrated circuits. We did not conceive of the Arpanet. All of these came to have an impact. So my hypothesis is that some things we now know will have an impact. One is digital libraries. The term digital library is a misnomer, the wrong metaphor. It ought to be called digital archive, bookstore, and library. It provides access to information at some price, including no price. In fact, the National Science Foundation and DARPA have large projects on digital libraries, but they are mainly technology-based—creating the technology to access information. Nobody is working on the other problem of content.

We have a Library of Congress with 30 million volumes; globally, the estimate is about 100 million volumes. The US Government Printing Office produces 40,000 documents consisting of six million pages that are out of copyright. Creating a global movement—because it is not going to be done by any one country or any one group—to get all the content (to use Jefferson’s phrase, all the authored works of mankind) online is it. At Carnegie Mellon University, we are doing two things to help. In collaboration with National Academy Press, we are beginning to scan, convert, correct, and put in HTML format all its out-of-print books. There are already about 200 to 300 of them. By the end of the year, we expect to have all of them. The second thing CMU is doing is offering to put all authored works of CSTB members on the network.

FIXING THE INTERNET

Metcalfe: The Internet is one of our big success stories and we should be proud of it, but it is broken and on the verge of collapse. It is suffering numerous brownouts and outages. Increasingly, the people I talk to, numbering in the high 90 percent range now, are generally dissatisfied with its performance and reliability.

There is no greater proof of this than the proliferation of intranets. The good reason people build them is to serve internal corporate data processing applications, as they always have. The bad reason they build them is that the Internet offers inadequate security, performance, and reliability for its uses. So we now have a phenomenon in companies. The universities, as I understand it, are currently approaching NSF to build another NSFnet for them. This is really a suggestion not to fix the Internet, but to build another network for us.

Of course, the Internet service providers are also tempted to build their own copies of the Internet for special customers and so on. I believe this is the wrong approach. We need to be working on fixing the Internet. Lest you be in doubt about what this would include, it would mean adding facilities to the Internet by which it can be managed. I claim that these facilities are not in the Internet because universities find management boring and do not work on it. Fixing the Internet also would include adding mechanisms for finance, so that the infrastructure can be grown through normal communication between supply and demand in our open markets, and adding security; it is not the National Security Agency’s fault that we do not have security in the Internet. It occurred because for years and years working on security has been boring, and no one has been doing it; now we finally have started.

We also need to add money to the Internet—not the finance part I just talked about, but electronic money that will support electronic commerce on the Internet. We need to introduce the concept of zoning in the Internet. The Communications Decency Act is an effort, although a lame one, to bring this about. On the Internet, mechanisms supporting freedom of speech have to be matched by mechanisms supporting freedom not to listen.

We need progress on the development of residential networking. The telecommunications monopolies have been in the way for 30 or 40 years, and we need to break these monopolies and get competition working on our behalf.

Shaw: We talked a lot about software and a little about the Web, which is really a provider of information rather than of computation at this point. I believe we should not think about these two things separately, but rather about their fusion as information services, including not only computation and information, but also the hybrid of active information. On the Web, we have lots of information available as a vast undifferentiated sea of bits. We have some search engines that find us individual points. We need mechanisms that will allow us to search more systematically and to retain the context of the search. To fundamentally change the relation between the users and the computing, we need to find ways to make computing genuinely widespread and affordable, private and symmetric, and genuinely intellectually accessible by a wider collection of people.


I thank Bob Metcalfe for saying most of what I was going to say about what needs to be done, because the networks must become places to do real business, rather than places to exchange information among friends. In addition, we need to spend more time thinking about what you might call naïve models, that is, ways for people who are specialists in something other than computing to understand the computing medium and what it will do for them, and to do this in their own terms so they can take personal control over their computing.

Lucky: I know two things about the future. First, after the turn of the century, one billion people will be using the Internet. Second, I do not have the foggiest idea what they are going to be using it for. We have created something much bigger than us, where biological rules seem more relevant than the future paradigm we are used to, where Darwinism and self-adaptive organization may be the more relevant phenomena with which to deal.

T he familiar telephone network hasserved us well for a century.Yesterday’s grand challenge—con-

necting the planet with voice telephony—has been accomplished.

But as the new century looms, a revolu-tionary model for telecommunications hasthrust itself on an unsuspecting industry.It is a model based on the Internet, whereall communications take the form of digi-tal packets, routed from node to nodeaccording to IP, or Internet Protocol. Thisis a world in which the Esperanto thatenables intercommunication between dis-parate networks means expressing every-thing in IP packets. The future plug onyour wall will speak IP, and the service itoffers will be IP dial tone.

Today’s infrastructure is a circuit-switched network in which the dominanttraffic is voice, and the medium of ex-change among networks is the standard 3-kHz analog channel. To transmit data,we use modems to change the digital sig-nal into a voicelike analog signal compat-ible with this transmission format. Thenetwork of the future will invert this par-adigm—reformatting voice to look likedata. The natural medium of exchange willbe the IP packet.

The new IP-dial tone network is hap-pening very fast. It is well known that thenumber of host computers on the Internetdoubles annually, and estimates on theannual growth of data traffic in theInternet backbone start at a factor of 4 andgo up to 10. Thus the traffic is growingfaster than the number of users, indicating

that the average user is consuming con-siderably more bandwidth each year.

People speak of two forthcomingevents: The crossover, when data trafficequals voice traffic, will probably happenin the next several years. Soon after, how-ever, we will have the eclipse, when datatraffic becomes an order of magnitudelarger than voice traffic.

Moreover, if data traffic is indeed grow-ing by an annual factor of 10, the eclipsecould be sudden. Almost immediately afterdata traffic pulls equal to voice, it will beginto supplant voice as the dominant trafficon the world’s networks. And we wouldessentially have an IP dial tone network.
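As a rough illustration of why the eclipse could follow the crossover so quickly, here is a minimal sketch of the compounding arithmetic. The starting traffic levels are hypothetical; only the growth factors come from the backbone estimates quoted above.

```python
# Hypothetical starting point: data traffic at one-tenth of voice traffic,
# voice roughly flat, data growing by an annual factor of 4 to 10.
voice = 1.0
for growth in (4, 10):
    data, year = 0.1, 0
    crossover = None
    while data < 10 * voice:          # run until the "eclipse" (data = 10x voice)
        data *= growth
        year += 1
        if crossover is None and data >= voice:
            crossover = year          # first year data traffic equals voice
    print(f"growth x{growth}: crossover in year {crossover}, eclipse in year {year}")

# growth x4:  crossover in year 2, eclipse in year 4
# growth x10: crossover in year 1, eclipse in year 2
```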

To many engineers, sending voice as data packets has sounded awkward, even silly. Voice is, after all, continuous and analog by its nature. Why break it into packets with varying and unreliable arrival times? There are several good reasons. The first and most immediate is that Internet telephony (voice on IP) is essentially free. Even if this somewhat artificial advantage does not survive, more sustainable potential advantages include the integration of multimedia content, embedded signaling, and the integration of the computer and telecommunications environments on the desktop. Transmission efficiency may also improve somewhat because of better speech coding and multiplexing.

The new IP-dial tone network portends not just a revolution in technology, but in the very basis of telecommunications economics. Today the cost of the network and its operation are supported by charging fees based on call time and distance. The tariffs have historically been set according to the guiding principle of universal service. In the IP world of packets, it is not at all clear what might constitute a natural basis for charging. Moreover, the technology revolution is being accompanied by a complete restructuring of the regulatory environment. The system is being unbundled, and the infrastructure will be provided by many competing service providers, each with a rich choice of technology alternatives. The only constant holding together these disparate elements will be the IP.

The IP world has often been viewed as an hourglass. On the wide top are the applications; on the bottom are all the alternative physical transmission technologies. The narrow waist is the IP. Anything that speaks IP can flow through the waist unimpeded. That narrow waist isolates the myriad complexities of the underlying world from the equally daunting complexities of the upper applications. That is the real beauty in the retrospective appreciation of the IP.

In an IP world, the user is empowered to build applications on a minimally defined standard. We have already seen overwhelming evidence of that empowerment in the Internet. New applications continually spring from all corners of the world. There is a magic in the air of the Internet, and it will soon emanate from that little IP plug in the wall. Welcome to IP dial tone. ❖

— Robert W. Lucky, Bellcore


The question is, how do we design an infrastructure in the face of this total unknown? Certain things seem to be an unalloyed good that we can strive for. One is bandwidth. Getting bandwidth out all the way to the user is something we can do without loss of generality.

On the other side, it is hard to find other unalloyed goods. For example, intelligence is not necessarily a good thing. Recently there was a flurry of e-mail on the Internet when one of the router companies announced that it was going to put an “Exon box” in its router. An Exon box would check all packets going by to see if they are adult packets or not. There was a lot of protest on the Internet, not because of First Amendment and Communications Decency Act principles, but because people did not want anything put inside the network that exercises control, simply as an architectural paradigm, more than anything else. So bandwidth is good, but anything else you do on the network may later come back to bite you because of profound uncertainty about what is happening.

LIMITS OF RATIONAL REASONING

Hartmanis: I would like to talk more about the science part of computer science, namely, theoretical work in computer science and its relevance, and identify some stubborn intellectual problems. For example, security and trust on the Internet are of utmost importance; yet all the methods we use for encryption are based on unproven principles. We have no idea how hard it is to factor large integers, but our security systems are largely based on the assumed difficulty of factoring. There are many more such unresolved problems about the complexity of computations that are directly relevant to trust, security, and authentication, as well as to the grand challenge of understanding what is and is not feasibly computable. The notorious P = NP question is probably the best-known problem of this type, but by far not the only one. I consider these among the most important problems in theoretical computer science and sincerely hope that, during the next 10 years, some of them will be solved. I believe that a deeper understanding of the computing paradigm, the quest to understand what is and is not feasibly computable, is equivalent to understanding the limits of rational reasoning—a noble task indeed.
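Hartmanis’s point about factoring can be illustrated with a small sketch. Trial division is far from the best-known factoring algorithm, but it shows the flavor of the assumption: for a product of two similar-sized primes, the work grows roughly with the square root of the modulus, so each added digit multiplies the cost; security rests on the unproven belief that no fundamentally faster shortcut exists.

```python
import time

def trial_division_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# Toy "moduli": products of two primes, a few digits longer each time.
# Real systems of the era used moduli hundreds of digits long.
for n in (101 * 103, 104723 * 104729, 1000003 * 1000033):
    start = time.perf_counter()
    p = trial_division_factor(n)
    elapsed = time.perf_counter() - start
    print(f"n={n}: smallest factor {p} found in {elapsed:.4f}s")
```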

LIGHTING THE WORLD

Feigenbaum: I would like to talk briefly about artificial intelligence and the near future. If we look back 50 years—in fact, to the very beginning of computing—Turing was around to give us a vision of artificial intelligence and what it would be, beautifully explicated in the play about Turing’s life, Breaking the Code.

Raj Reddy published a paper in the May 1996 Communications of the ACM, his Turing Award address, called “To Dream the Possible Dream.” I share that possible dream, but I feel like the character in the William Steig cartoon who is tumbling through space saying, “I hope to find out what it is all about before it is out.”

There is a kind of Edisonian analog to this. Yes, we have invented the light bulb, and we have given people plans to build the generators. We have given them tools for constructing the generators. They have handcrafted a few generators. There is one lamp post working here, or lights on one city block are working over there. A few places are illuminated, but most of the world is still dark. Yet the dream is to light up the world! Edison, of course, invented an electric company. So the vision is to find out what it is we must do—and I am going to tell you what I think it is—and then go out and build that electric company.

What we learned over the last 25 years is that the driver of the power of intelligent systems is the knowledge the systems have about their universe of discourse, not the sophistication of the reasoning process the systems employ. We have put together tiny amounts of knowledge in very narrow, specialized areas in programs called expert systems. These are the individual lamp posts or, at most, the city block. What we need built is a large, distributed knowledge base. The way to build it is the way the data space of the Web came about—a large number of individuals contributing their data to the nodes of the Web. In the case I am talking about, people will be contributing their knowledge in machine-usable form. The knowledge would be presented in a neutral and general way—a way to build knowledge bases so that they are reusable and extensible, so that the knowledge can be used in many applications. A lot of basic work has been done to enable this kind of infrastructure growth. I think we just need the will to go down that road. ❖
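What “knowledge in machine-usable form” might look like is left open above; one common neutral representation, offered here only as a hypothetical sketch, is a store of subject-relation-object assertions that any application can query and extend.

```python
# A minimal sketch of a neutral, reusable knowledge representation:
# plain (subject, relation, object) triples that many applications can share.
KB = {
    ("penicillin", "is-a", "antibiotic"),
    ("antibiotic", "treats", "bacterial infection"),
    ("penicillin", "discovered-by", "Alexander Fleming"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given fields (None matches anything)."""
    return [t for t in KB
            if subject in (None, t[0])
            and relation in (None, t[1])
            and obj in (None, t[2])]

# Two different "applications" reusing the same knowledge base:
print(query(subject="penicillin"))   # everything known about penicillin
print(query(relation="treats"))      # all treatment relations
```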

Acknowledgments

Computer thanks Nancy Talbert for soliciting and coordinating the Outlook sidebars and David Clark for reviewing the excerpted panel discussion.

David D. Clark is a senior research scientist at Massachusetts Institute of Technology’s Laboratory for Computer Science and chair of the CSTB. Contact him at [email protected].

Edward A. Feigenbaum is chief scientist of the US Air Force. Contact him at [email protected].

Donald P. Greenberg is Jacob Gould Schurman Professor of Computer Graphics at Cornell University. Contact him at [email protected].

Juris Hartmanis is the National Science Foundation’s assistant director for computer and information science and engineering. Contact him at [email protected].

Robert W. Lucky is corporate vice president of applied research at Bellcore. Contact him at [email protected].

Robert M. Metcalfe is executive correspondent for Infoworld and vice president of technology at International Data Group. Contact him at [email protected].

Raj Reddy is dean of Carnegie Mellon University’s School of Computer Science and the Herbert A. Simon University Professor of Computer Science and Robotics. Contact him at [email protected].

Mary Shaw is the Alan J. Perlis Professor of Computer Science and associate dean for professional programs at Carnegie Mellon University. Contact her at [email protected].

Outlook on Computer Graphics

As we continue our inexorable quest for realistic image synthesis in real time, we can at last say that our goal is close at hand. Indications are that by 2025, we will have the technology to produce realistic real-time images at resolutions up to or beyond the limits of our visual perception. The algorithms used will be pixel-based, not polygon-based, and the images will include all the subtle effects of shading, shadows, and interreflections we see in complex environments. What this means is that we will be able to create images that are visually indistinguishable from real-world scenes.

How will we get such simulations? I see progress as having three phases.

The first phase deals with local light reflection. We now have the algorithms to model physically based reflection behavior of light scattering off a surface. The models result in what is standardly termed the diffuse, directional diffuse, and specular components. Researchers are currently extending these models to include fluorescence, polarized light, and multilayered materials.
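In equation form, this three-way decomposition of local reflection can be sketched as follows; the notation is generic, not the specific model the essay has in mind:

$$ L_r(\theta_r, \phi_r) = L_d + L_{dd}(\theta_r, \phi_r) + L_s(\theta_r, \phi_r), $$

where $L_r$ is the reflected radiance in the outgoing direction $(\theta_r, \phi_r)$, $L_d$ is the diffuse component (uniform over the hemisphere), $L_{dd}$ the directional diffuse component, and $L_s$ the mirror-like specular component.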

The next phase deals with the propagation of light energy throughout the environment. Because each surface can interact with every other surface in receiving or distributing radiant energy, this problem is complex. The accuracy of the solution depends on how the environment is discretized for the simulation, the original geometric complexity, and the precision of the computational solution. Computation times for these simulations are so severe that in today’s computing environment, only static scenes are practical. These tasks might require 10^10 times more processing power than we currently have today on a single workstation. Fortunately, much of the calculation has no visual effect on the resulting image, so we need not calculate beyond the limits of human perception. This would require only 10^7 times more processing power.

The third phase involves establishing tone-mapping procedures to map the physical predictions to within the limits of the display devices. Displays are constrained by their spatial resolution (number of pixels), color resolution (number of color channels), dynamic range (number of illumination levels), and temporal resolution (number of frames per second). Today’s display systems, even high-resolution monitors, are not sufficient. High-definition television comes closer, but another factor of 10 in some dimensions is probably necessary. This display capability will probably be reached within the next two decades.

So if the display devices are available by 2020, will we be able to compute the simulations in real time? According to Moore’s law (chip densities double every 18 months), in 15 years we will have 10^6 times more processing power than today, and within five years after that, approximately 10^7 times. More radical predictions, which include the shift from aluminum to copper circuitry, the use of multibit transistors, and the inclusion of parallelism, claim that Moore’s law will even be exceeded.
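The compounding arithmetic behind such projections is easy to sketch. Note that a strict 18-month doubling yields only about 2^10, roughly 10^3, in 15 years; the essay's 10^6 figure assumes a faster effective rate, so the sketch below takes the doubling period as a parameter.

```python
import math

def growth_factor(years: float, doubling_months: float) -> float:
    """Processing-power multiple after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for months in (18, 9):  # 18 = classic Moore's law; 9 = a faster effective rate
    f15 = growth_factor(15, months)
    f20 = growth_factor(20, months)
    print(f"doubling every {months} months: ~10^{math.log10(f15):.0f} in 15 years, "
          f"~10^{math.log10(f20):.0f} in 20 years")

# doubling every 18 months: ~10^3 in 15 years, ~10^4 in 20 years
# doubling every 9 months:  ~10^6 in 15 years, ~10^8 in 20 years
```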

This means that sometime near the end of 2025 we will have both the display and computational capability to produce images that are both physically accurate and perceptually indistinguishable from real-world scenes.

If this comes true, we then face another dilemma. The good news is that our communication capability will be vastly enhanced. The bad news is that seeing is no longer believing. Images may no longer be admissible as evidence, and verification tools might be needed to avoid confusion between real and virtual worlds. But I don’t expect these cultural issues to slow our quest for realism in image synthesis. Indeed, a picture will be worth 1,024 words. ❖

— Donald P. Greenberg, Cornell University


William Wulf on the Many Faces of Innovation

Interviewed by Nancy Talbert

Computer: Every significant change seems to be associated with a push and pull. What are the next major pushes and pulls?

Wulf: I wish I could say otherwise, but I think the major push has been and will continue to be Moore’s law. Anything that is changing at that rate just can’t be ignored. We don’t have an equivalent law for bandwidth, but we are obviously getting an even greater rate of change there. I think those fundamental hardware advances will drive everything else. The pulling forces are a lot more diffuse and harder to see. We’ve got a phenomenon for which we have no real precedent. The first time I saw a spreadsheet, for example, I did not appreciate the impact they would have, because there had never been a thing like them before, at least not mechanized. Because we have less experience, I think it is very much harder to identify the pulls.

Computer: And therefore much harder to say where innovation will come from?

Wulf: Exactly. We are lucky in this country to have the atmosphere to grow more Thomas Edisons—groups of entrepreneurs that will generate the pull. Who knows where the next light bulb will come from?

Computer: How will computers evolve?

Wulf: I suspect that computers will simply become parts of products that we will view as having some intelligence, some responsiveness to human needs. My car, I am told, has seven microprocessors in it, but I am not conscious of them. It will be interesting to see whether there is an identifiable thing called a computer 20 years from now. My guess is there won’t be.

Computer: A year ago, you stated, “Interesting and deep academic problems are spawned by short-term product development.” What do you think will be the next major product-problem set?

Wulf: One example is MEMS [microelectromechanical systems] technology. MEMS lets the designer put on a small piece of silicon something that can sense its environment, reason about the implications of the sensory information, plan what to do about it, physically act, and communicate its actions. Although we don’t talk about it much, there’s no reason we can’t also put radio transmitters and receivers on MEMS devices. It seems to me that this opens enormous possibilities. Every one of these devices is going to create a complex system or be embedded in one. I have infinite faith that that will generate a set of academically interesting and fundamental questions.

Computer: So are you saying that new areas of research will come less from trying to fill some identified need and more as a direct result of observing the technology itself?

Wulf: Yes, people talk about “curiosity-driven research”—research motivated solely by the researcher’s desire to understand the natural world. I’m sure there’s some of that, but an awful lot of research is need-driven, motivated by a problem domain. It is research whose question is triggered by observing a man-made artifact, rather than a natural one. Pure science is motivated by trying to understand the natural world, but as we build more complex man-made systems, we will see behaviors that are every bit as difficult to understand, every bit as fascinating, and every bit as important as understanding the natural world.

Computer: On the other hand, there seems to be a movement away from technology for technology’s sake. We’ve seen an almost frantic movement to have computers conform to the way people behave.

Wulf: It’s only because the technology has become so much more versatile that we can even consider making it more humane. I remember back in the ’70s, before PCs, when several people made the rather startling but correct observation that it takes more computing power to support a secretary than a computer scientist. We now have that power, and we are beginning to develop more accurate models of human behavior. So with both these things, we can begin to be more humane. Of course, we’re not completely there yet, either in computing power or in our understanding of what it means to be humane.

Computer: Some people claim that this is the age of software, and indeed it seems that software developments and problems are at the forefront. Others claim that silicon, although less visible, drives it all. Do you see any convergences between these two?

Wulf: Well, as I said, everything is driven by Moore’s law, but I do see hardware and software design converging. In both, the problem is how to manage complexity. The design of a 10-million-device chip has nothing to do with electronics and everything to do with a digital abstraction of those electronics. As design tools like VHDL become more widely used, the hardware and software engineering of these artifacts will blend. You won’t know or care whether you are designing hardware or software.

We made an absolutely arbitrary decision about using the instruction set as the boundary between hardware and software. If you go back to microcode days, the instruction set wasn’t really hardware; it was a program running on a simpler piece of hardware. The bugs in a Pentium are software bugs. They manifest themselves in hardware, but in fact they are the same kind of bugs we get in software, and for the same reason—complexity. So I’ll buy that this is the age of computation and communication, but dividing that into hardware and software is absolutely arbitrary, and arguing over the boundary is a counterproductive way to spend time.

Computer: Much has been said about the government, industry, academia triangle, not all of it complimentary. How will these roles evolve?

Wulf: I’ve been at the Academy now for 17 months, and I’ve been able to examine this question for fields other than computing and electronics. I have decided that our field is actually doing quite well. Somehow among DARPA and NSF, the really good industrial labs like Xerox PARC, IBM T.J. Watson, and Bell Labs, and the better experimental universities, a tremendous churning has gone on, and the degree of collaboration and cooperation is phenomenal. It’s what a lot of other fields would aspire to. I see considerable respect for industry by academia.


How could you be a university person and not respect a Butler Lampson [Turing Award winner for work at Xerox PARC, DEC, and Microsoft]? Industry recognizes that what has gone on in universities has been very beneficial. That’s not to say perfect. But I think it is worthwhile to step back and say, this has really worked pretty well. For example, the field would not have progressed nearly as fast if Carver Mead hadn’t deposited his model of VLSI design, or Xerox hadn’t done the Alto, or DARPA and NSF hadn’t made it easier for universities to do VLSI design through MOSIS.

Computer: But now funding through services like MOSIS is shrinking. What do you see as an alternative? Do you see industries subcontracting university research? Will the arrangement change from less of an isolated laboratory to more of a collaboration, almost a business partnership?

Wulf: I hope that doesn’t happen. Dick Cyert was president of CMU when I was a wet-behind-the-ears assistant professor, and he was fond of talking about an organization’s unfair advantages—the properties of an organization that made it better at doing what it does than anybody else. His idea was to figure out your unfair advantage and use it; don’t try to compete on somebody else’s turf when they have the unfair advantage.

Universities, it seems, have the unfair advantage of being able to scan the horizon relatively unfettered to see where the interesting problems are. They have a cost advantage for doing some kinds of research. Universities have a lot of unfair advantages, but one unfair advantage they absolutely do not have is an understanding of customers and business requirements. So if you want to see products get really screwed up, do them in a university. It’s just not the right place, and not for some abstract intellectual reason, but simply because universities are not good at it.

Computer: And yet academia would argue that industry has the unfair advantage of dollars, of being able to hire their people away before they are completely educated.

Wulf: That’s a bit of a problem, but I think a temporary one. We have a relatively severe shortage of information processing personnel, so people are doing some crazy things. But I have a lot of faith that top-level management won’t eat the seed corn.

Computer: So putting your professor hat on for a moment, do you have any thoughts about how university people can contribute more to the progress of computing?

Wulf: I do. We need to do more to encourage technological literacy across disciplines. We teach physics for poets; why don’t we teach engineering for poets as well? My favorite quick definition of engineering is “design under constraints.” What we do is design things, and we do it under a set of societal, business, and technological constraints. Design is an incredibly creative activity. Why we’re not teaching it to liberal arts folks is confounding.

Computer: In another definition of engineering, you stated, “I feel strongly that engineering is an activity that creates artifacts and processes that improve people’s lives.” How will that charter be fulfilled? Will we see innovation in the next decade?

Wulf: I’d like to first distinguish between invention and innovation. Invention is the act of discovery. Innovation is taking the discovery and producing products that actually sell and affect people’s lives. A lot gets invented; a smaller amount gets innovated. Almost nothing that goes on in an industrial laboratory is concerned with invention.

Most people in industrial laboratories are trying to produce an existing product that is faster, cheaper, and more reliable in a more user-friendly way. But they are essentially dealing with an existing market and trying to better satisfy the needs of that market at a lower cost. The true inventions, the breakthrough things, are rare events, and they are seldom recognized for what they are at the time they happen. Alexander Graham Bell thought the telephone was a broadcast device. He was going to use it to deliver concerts to people. Marconi thought radio was a point-to-point communication device between two people. When Bell Labs invented the laser, their patent attorneys initially refused to file a patent on it because they could see no implications for communications. So going back to your question, the path for innovating breakthrough technologies is probably not something you can predict—which is why it tends to get done in small companies, rather than large ones.

Computer: So are you saying there is no clear path to innovation?

Wulf: There’s a wonderful book from the Harvard Business School, called The Innovator’s Dilemma. It explains what lots of us have observed about the computer business—things like why the mainframe manufacturers missed the introduction of the minicomputer and the minicomputer people missed the introduction of the PC. The author, Clayton Christensen, argues that this is so for all truly breakthrough technologies. And it’s not because the big company is somehow dumb or unaware; it’s that at the time the innovation is developed, it doesn’t satisfy the needs of the existing customer base. So when the PC comes in, it doesn’t satisfy the needs of the minicomputer users, and it gets shelved in favor of efforts to improve the minicomputer. By the time the PC gets to the point where it can do what the minicomputer can do, the smaller companies, who weren’t hampered by the needs of an existing customer base, have already cornered the market.

Computer: It seems that innovation stems from new and interesting areas in which computing intersects some application. What are the challenges to making existing intersections larger, and what new intersections do you see in the near future?

Wulf: If you want the interesting intersections, you have to do a very hard thing: you must examine the unstated assumptions you are making about the way the technology will be used, and these may be so deeply ingrained that you aren’t even aware of them.

I experienced this firsthand six or seven years ago. I was on a committee at the University of Virginia tasked with determining how information technology was going to affect the university over the next several decades. This included things like the humanities. Being a standard techy, I assumed word processors might be useful to humanists, but offhand I couldn’t think of anything else. On one of those light-bulb-go-off days, I suddenly realized there were applications I had never thought about. An historian and an English professor were on the committee. Eventually the three of us entrepreneured the Institute for Advanced Technology in the Humanities, which aims to understand and support the scholarship of humanities with modern technology. In the process, they are redefining the limits of humanistic scholarship.

So what I have learned is that if you question your own unstated assumptions, you will frequently find intersections you never thought of, and in so doing you can fundamentally change a great many areas. This is ultimately what innovation is all about. ❖

William Wulf is president of the National Academy of Engineering and AT&T Professor of Engineering and Applied Sciences at the University of Virginia. Contact him at [email protected].
