
Volume 11, Number 1 Department of Computer Science Spring, 2002 Brown University

Brown University, Box 1910, Providence, RI 02912, USA

controlled ways, first through files, later through databases. Even long-running programs such as operating systems or database services are thought of as individual entities, running on their own machine (or small set of machines), that happen to provide services to other programs.

New and current technology lets us think of programs in a different way. Rather than everyone running their own program, we can think, at the extreme, of there being only one program that runs on all computers simultaneously. Users wanting to access information or do some computation simply provide this program with the appropriate input and get their output. Programmers, instead of developing new programs, develop new modules that are plugged dynamically into the single running program. New computers, as they come on line, would start running part of this single program and can take advantage of the existing computing resources,

while providing the program with additional capabilities. There are obvious advantages to so-called “pervasive programming.” First, it is a logical extension of the trend to network-based applications and modules. Microsoft’s .Net framework, for instance, lets programs invoke network services almost as easily as calling internal routines. Cooperating net-

Introduction

We envision a time in the not too distant future when all applications are handled by a single program that runs over all machines simultaneously. Such a program would be continually evolving, growing, and adapting. It would be the logical extension of today’s combination of Internet technology, pervasive computing, grid computing, open source, and increased computational power. Moreover, it would significantly change the way we think about and do programming.

Programs have traditionally been stand-alone entities. From back in the days of card decks, programs were things that had a definite starting time and stopping time. They were run by individual users to accomplish some task and then they were exited. Sharing among users was done in

PERVASIVE PROGRAMS

Steve Reiss

Rather than everyone running their own program, we can think, at the extreme, of there being only one program that runs on all computers simultaneously


sign and building. Addressing large-scale challenges such as controlling automobiles on a computerized roadway requires that large numbers of systems (in this case all the automobiles on or near the roadway) communicate effectively. If we change how we think of programs so that such communication is inherent rather than an addendum, we should be able to make developing such systems easier.

Research Issues

Many technical and social problems must be overcome in order to make this view of programming a reality. However, we feel that solutions to these problems are possible.

Any implementation of pervasive programming must be able to handle situations in which portions of the program become unavailable and are replaced while the program is running, as well as the dynamic definition of new portions of the program. A number of environments and systems support dynamic code replacement, and others have dealt with unreliable networks in distributed systems, so in principle these problems are tractable. However, doing this on a large scale, in a distributed rather than centralized manner, in a way that cannot afford failure, and where code can come and go dynamically will require additional research and probably a new framework to manage and track the code base. Dynamic techniques for discovering implementations such as those used by Jini will also be necessary here.

Pervasive programming will require extensions to current languages if not a totally new programming language, since it implies a new way of thinking about programs. One approach that seems feasible is to introduce an extended notion of an interface, which we call an outerface for now, as the basis for the new functionality. Outerfaces would be defined as abstract classes with both virtual and static methods. Users would code their portions of the system to implement a particular outerface and at the same time use already existing outerfaces. The underlying system would then be responsible for choosing an appropriate implementation of an outerface to be used by another outerface at execution time. This choice

work services, each with shared state and running on different machines, are a form of a pervasive program. Second, as pervasive computing becomes more prevalent and everything in one’s house is essentially a computer on a network, viewing the various entities as a single cooperating program provides a framework for controlling and managing the multiplicity of devices. Third, as computers become more sophisticated, users expect to get more out of them. There is only a limited number of 50-million-line programs that can be written effectively. If all programmers actually contribute to the capabilities of a single system and each programmer could make use of the efforts done by others, then we could create the types of systems that users will expect in the future. Fourth, a single program provides a logical means of sharing the resources of large numbers of computers, effectively making grid computing available to all. Finally, inter-program communication right now is one of the more complex parts of system de-

...outerfaces are the key building blocks. To support them, there must be standard ways of defining, finding and categorizing outerfaces and their implementations

Congratulations are in order to staff for successfully completing job audits that raised each of their job levels one notch. Staffer Kathy Kirman has the same reason to smile. Clockwise from top left: Dawn Reed, Lori Agresti, Kathy Kirman, Akina Cruz, Fran Palazzo, Jennet Kirschenbaum and Genie deGouveia


During the summer of 2001 I was working on Lightstage 3.0 under the supervision of Paul Debevec in the Graphics Group at the USC Institute for Creative Technologies.

Many times in movie productions an actor’s performance is shot in a studio and the scenery is shot at some other location or is computer-generated. The goal in general is to make the observer believe that both the actor in the foreground and the background scenery were shot at the same location looking through the same camera. In order to achieve realistic-looking composites, we have to match several parameters of the foreground and the background image. The perspective from which the actor and the background are viewed must match. The illumination of the actor has to be consistent with the illumination at the background location, plus several camera-specific properties like brightness, response curve, color balance and more also have to match. Lightstage 3.0 tries to solve the problem of creating a consistent illumination for the studio shot by using previously captured illumination properties at the background location or the illumination properties of the virtual scene into which we want to composite the actor.

Illuminating the actor in the studio consistently with the background scenery means we need to know how that environment would illuminate the actor. We get this information from light probes, which are omnidirectional high-dynamic-range images that capture the incident illumination for a specific point in space. Lightstage 3.0 is the device we built to illuminate an actor with a previously re-

would be based on parameters the user might provide, properties of the outerface implementations, analysis of the uses of an outerface, network proximity and availability, as well as other conditions. Numerous language issues arise in attempting to make such a notion both practical and implementable. Moreover, the language must provide the hooks to deal with outerface implementations that disappear or are replaced dynamically.

In this view of pervasive programming, outerfaces are the key building blocks. To support them, there must be standard ways of defining, finding and categorizing outerfaces and their implementations. This could be done at various levels, each implying a different level of authority and responsibility. First, we visualize a well-defined set of standard outerfaces adopted by some sort of standards committee, akin to the methodology currently used in defining extensions to the Java language. Second, user groups could propose and code to shared outerfaces that might not be in the standard but can otherwise be agreed upon. Finally, it should be possible for individual users to create their own outerfaces and corresponding implementations, keeping the scope private where desired.

Another problem involves identifying the semantics of an outerface. If outerfaces are going to be widely used, each must have well-defined and well-understood semantics. Without a long drawn-out standards process, this is difficult and impractical. Instead, we envision that an outerface would be defined both as a set of methods and a set of test cases. Any implementation proposed for the outerface would have to pass all the test cases in order to be considered valid. Users would then be able to define their own subouterfaces that add additional test cases of particular interest or importance. The underlying system would take care of ensuring that only implementations that pass the test suite are acceptable. This operational definition of semantics seems a practical and feasible approach.
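To make the outerface idea concrete, here is a minimal, hypothetical Java sketch: an abstract class whose test cases travel with it and define its semantics operationally, plus a trivial registry that accepts only implementations passing the suite. All names (SpellCheckerOuterface, OuterfaceRegistry, the sample tests) are invented for illustration and are not part of any proposed standard.

    // Hypothetical sketch: an "outerface" bundles an abstract contract
    // with the test cases that operationally define its semantics.
    import java.util.List;

    abstract class SpellCheckerOuterface {
        abstract boolean isCorrect(String word);
        abstract List<String> suggest(String word);

        // The test suite travels with the outerface; an implementation
        // is considered valid only if every test passes.
        boolean passesTestSuite() {
            return isCorrect("computer")
                && !isCorrect("comptuer")
                && suggest("comptuer").contains("computer");
        }
    }

    class OuterfaceRegistry {
        // Greatly simplified stand-in for the underlying system: pick any
        // available implementation whose semantics check out.
        static SpellCheckerOuterface choose(List<SpellCheckerOuterface> candidates) {
            for (SpellCheckerOuterface c : candidates) {
                if (c.passesTestSuite()) {
                    return c;
                }
            }
            throw new IllegalStateException("no valid implementation available");
        }
    }

A real system would, as noted above, also weigh user-supplied parameters, properties of the implementations, and network proximity and availability when choosing among the implementations that pass.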

In addition to dealing with the immediate technical issues, a pervasive programming world will have to deal at all levels with security and privacy concerns. These must be addressed directly at the language level since they will be properties of outerfaces that implementations will have to conform to and that users will insist on. A capability-based model at the language level could be a basis for a solution in this dimension.

LIGHTSTAGE 3.0

Master’s student Andy Wenger


with infrared reflecting pieces of cloth that are illuminated with six infrared light sources to create a background we can later subtract out. The advantage of infrared light is that the matting does not interfere with the illumination we’re trying to recreate; classical methods like blue screening would produce extraneous blue light and therefore affect the appearance of the actor. Using a beam splitter and two cameras, one for the visible light and one for the infrared, allows us to record the actor’s performance as well as the matte at the same time. The two cameras must be carefully registered with respect to each other and the beam splitter for the two images to align. The compositing software uses the visible light image and the infrared image to cut out the actor. It then places the cut-out actor into the background image.
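As a rough illustration of the compositing step only, and not the actual Lightstage software, the Java sketch below derives a matte from the registered infrared image, in which the IR-reflective backing appears bright and the actor dark, and blends the visible-light pixels over a new background. The float-array image format, the linear ramp between the two thresholds, and all names are assumptions made for this example.

    // Toy compositing step, assuming registered images with values in [0,1]:
    // ir    - grayscale infrared image (bright where the backing is visible)
    // actor - RGB visible-light studio shot, 3 floats per pixel
    // bg    - RGB background plate, 3 floats per pixel
    class InfraredMatting {
        static float[] composite(float[] ir, float[] actor, float[] bg,
                                 float low, float high) {
            float[] out = new float[actor.length];
            for (int i = 0; i < ir.length; i++) {
                // Dark infrared means the actor blocks the backing (opaque);
                // bright infrared means the backing shows through (transparent).
                float alpha = clamp((high - ir[i]) / (high - low));
                for (int c = 0; c < 3; c++) {
                    int p = i * 3 + c;
                    out[p] = alpha * actor[p] + (1 - alpha) * bg[p];
                }
            }
            return out;
        }

        static float clamp(float x) {
            return Math.max(0f, Math.min(1f, x));
        }
    }

In practice the two thresholds would have to be tuned to the infrared exposure, and the real pipeline also depends on the careful camera registration described above.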

The infrared matting system and the current compositing software took shape after I left in September, as did the current composites. The work we started last summer culminated in a paper that Chris Tchou, Andy Gardner, Tim Hawkins, Paul Debevec and I submitted to SIGGRAPH 2002 in San Antonio. Those who want to know more about Lightstage 3.0 can check out the ICT Graphics Group’s web page at http://www.ict.usc.edu/graphics/.

I would like to take this opportunity to thank the Graphics Group at the USC Institute for Creative Technologies for giving me the chance to be part of such a great project and for such a great summer with them. Thank you!

corded light probe. The device has the shape of a once-subdivided icosahedron, which allows us evenly to distribute 162 light sources over the sphere by placing an inward-pointing light source at each vertex and in the middle of each edge. With a diameter of two meters we can illuminate an actor from the chest upwards. (The actual number of light sources is 156 because we left out a piece at the bottom so the actor could stand. One light source is made up of red, green and blue LEDs, so no single LED is a light source, but rather eight red, five green and five blue ones combined.) By changing the intensities of the three base colors, we can dial virtually any color we wish. Each of the light sources is independently addressable, giving us the ability to specify a color for a desired position on the sphere.

The control program of Lightstage 3.0 uses a light probe in a longitude/latitude format as its input. From that light probe it calculates the color configurations of the light sources by projecting the position of each light source into this input space and taking a weighted average of the surrounding pixels. The control program is also responsible for color and brightness correction due to the different characteristics of the light sources with which we generate the illumination and the camera’s imaging sensor with which we record it.
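A minimal sketch of that projection follows, assuming a conventional latitude/longitude parameterization of the probe image and bilinear interpolation as the weighted average of surrounding pixels; the color and brightness corrections mentioned above are omitted, and the class and method names are invented for illustration.

    // Sample a longitude/latitude light probe at the unit direction of one
    // light source; the returned RGB values would drive its red, green and
    // blue LED intensities.
    class LightProbeSampler {
        final float[][][] probe;   // [height][width][3], latitude-longitude layout
        final int width, height;

        LightProbeSampler(float[][][] probe) {
            this.probe = probe;
            this.height = probe.length;
            this.width = probe[0].length;
        }

        float[] sample(float x, float y, float z) {
            // Project the unit direction into image coordinates.
            double lon = Math.atan2(z, x);            // range [-pi, pi]
            double lat = Math.asin(y);                // range [-pi/2, pi/2]
            double u = (lon + Math.PI) / (2 * Math.PI) * (width - 1);
            double v = (Math.PI / 2 - lat) / Math.PI * (height - 1);

            // Weighted (bilinear) average of the four surrounding pixels.
            int u0 = (int) u, v0 = (int) v;
            int u1 = Math.min(u0 + 1, width - 1), v1 = Math.min(v0 + 1, height - 1);
            double fu = u - u0, fv = v - v0;

            float[] rgb = new float[3];
            for (int c = 0; c < 3; c++) {
                rgb[c] = (float) ((1 - fu) * (1 - fv) * probe[v0][u0][c]
                                + fu * (1 - fv) * probe[v0][u1][c]
                                + (1 - fu) * fv * probe[v1][u0][c]
                                + fu * fv * probe[v1][u1][c]);
            }
            return rgb;
        }
    }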

To differentiate between the actor in the foreground and the studio background we use an infrared matting system. The visible background to the camera is covered

Andy sitting in an early version of Lightstage 3.0 with only 41 light sources and a simple static matting system using a single camera. The control program running on the PC is operated by Jonathan Cohen ’00


After starting the Industrial Partners Program in 1989 and serving as its director for all but three years since then, Professor John Savage has stepped down. He has handed over stewardship of the Program to Associate Professor Michael Black.

In March 1989, ten years after the CS Department was founded, John, while Chair of the Department, introduced the Industrial Partners Program (IPP), which he had designed with the advice of Brown’s Development Office and of Roy Bonner, a retired IBM executive. IPP was intended to be a different kind of academic outreach program that would encourage partnership relations among corporate members and students and faculty in the CS department.

At its first meeting, IPP was introduced to senior executives from a number of prospective Partners including Bellcore, Codex, DEC, GTECH, IBM and Sun Microsystems. At this event we not only described the purpose of IPP to our visitors but also used it to showcase faculty research. Talks were given by Brown colleagues specializing in artificial intelligence, computer graphics, databases, operating systems, multiparadigm design environments, theoretical computer science and VLSI theory and algorithms.

IPP has evolved over time. It now gives Partners opportunities for pre-competitive cooperation via its biannual technical symposia on topics of interest to Partners. We also encourage communication with Partners via the IPP Program Director (Michael Black), the IPP Program Manager (Suzi Howe), individual faculty members, and students employed by Partners over the summer.

A distinguishing characteristic of our day-long technical symposia is that most of the speakers are representatives of Partner companies. This encourages interaction among Partners and makes for very lively meetings, with exciting give and take. It is always interesting to hear people from competing firms engage in

HIBISCUS, PALM TREES AND FLAMINGOS RULE IN CS2!

Don Stanford, recently retired CTO of GTECH Corporation (a CS Industrial Partner), is now teaching CS 002, Concepts and Challenges of CS. To give the students a break halfway through the class, Don introduced the Friday Hawaiian shirt challenge. He has a bounteous collection of these, each more gaudy and outrageous than the last; the challenge is to out-shirt him!

Of the initial shirt competition, CS2 TA Daniel Santiago wrote on the course webpage, “In response to the Friday Hawaiian shirt challenge, a single warrior emerged ready to take on Don and his vast collection of shirts. Mike’s colorful Hawaiian shirt was complemented by his Tony Soprano wife-beater undershirt and his very stylish sabre-tooth necklace. In what was undoubtedly a rigged vote, the class picked Mike’s fashion statement over Don’s, even though most of it looked like it had been purchased at a yard sale. Ever the gracious loser, Don rewarded Mike with an HTML cheat sheet that, we understand, has sold out at the Brown Bookstore. Don looks forward to meeting new challengers next Friday and anticipates a less biased vote on the part of the class!”

When Suzi Howe visited a Friday challenge to take photographs for posterity, the competitors were Nicole Janisiewicz ’02 and Zach Woodford ’05 vs. Don. Applause decided the winner—alas, not Don, but then he was wearing one of his more conservative habiliments. Nicole’s splashy floral print was the winner; she was awarded a Microsoft textbook, and a consolation prize went to Zach. Fridays provide an opportunity to see Don in the department before class sporting his glorious duds, even in the dead of winter. He is without doubt as charismatic an individual as his shirts are unrestrained!

MICHAEL BLACK IS NEW IPP DIRECTOR


animated discussions on neutral ground, a very healthy form of pre-competitive cooperation. To illustrate the star lineups we enjoy, at the 1994 ‘Nexal Computing’ symposium that John hosted, James Gosling from Sun Microsystems gave a fascinating talk about his new programming language, ‘Oak,’ renamed ‘Java’ a few months later!

The IPP symposia provide an important window on Partner interests and are helpful to CS faculty and students in understanding the concerns of the commercial world. The symposia also give Partners visibility in the department, are valuable in their recruiting, and expose them to faculty and student research interests. The financial support provided by Partners through IPP makes possible many important ancillary activities that go far to make the Department the outstanding research and teaching environment it is today.

Recently the Program introduced IPP Seminars, a venue for Partners to speak on advanced topics of interest to faculty and students; here Partners can also give brief introductions to their companies as well as make themselves available to discuss job opportunities. Brown alums frequently either give the IPP Seminars or accompany the speakers. Speakers are invited to spend the day in the Department, talking with faculty members with similar research interests.

While John is still active in research, his focus in recent years has turned more toward the University. From 1996 to 1999 he was the Faculty Vice Chair of the Advisory Committee on University Planning (ACUP), which recommends the University budget to the President. From 1998 to 2001 he was President of the Faculty Club, and in 2000 he started a three-year term as an officer of the Faculty, which entails chairing the Faculty Executive Committee in the second year. This year as Chair of the Faculty he is working closely with our new President, Ruth Simmons. Last year the inauguration program committee that he chaired arranged for twenty faculty talks broadly representative of faculty interests that were a feature of President Simmons’ inauguration. Last year, also, he had a key role in shaping a new conflict of interest and commitment policy for the University. This year he is active in organizing a task force on faculty governance that hopes to make significant changes in the faculty’s role in the University.

New IPP Director Michael Black (M.S. Stanford, Ph.D. Yale) came to CS in 2000 and has been assisting John Savage with IPP for the past year or so. Michael brings to this new position extensive experience in industry, corporate research, intellectual property and joint academic/industry partnerships. Before joining Brown, Michael managed computer vision research at Xerox PARC, learning a

l to r: John Savage and Michael Black

According to media coordinator Mark Oribello, Elvis has left the building...

ELVIS SIGHTING!


Many of us in computer science today found our first encounter with a computer a profound, life-altering experience. Typically we sat at a terminal of some sort, typing statements; the computer interpreted our statements as commands and carried out those commands, printing an occasionally scrutable but more often cryptic and incomprehensible response.

With time, we learned to decipher those cryptic responses and even program the computer to carry out other commands with incomprehensible responses of our own design. We learned that interaction with a computer could be subtle, applicable to a wide range of tasks, and infinitely extensible. We quickly realized that by using loops, conditionals, and Boolean logic you could persuade computers to carry out complex tasks without intervention; you could embed your knowledge of the world in programs that would repeat routine tasks as many times as you like and do it infinitely faster and (if you’re careful in writing the program) without error.

The speed and precision of computers let you cheat time and avoid (some forms of) fatigue and boredom. Not only could you embed your knowledge in a program, but you could share that knowledge with someone else in the same way that you could tell someone how to repair a bicycle or prepare a recipe. You could also use the knowledge that others embedded in their programs by combining it with your programs to solve new problems you couldn’t handle otherwise.

The very fact that you can interact with a computer is fascinating in and of itself. Even more exciting is the fact that computers can be said to learn: computer programs developed by credit-card companies learn to recognize, anticipate and limit credit-card abuse. The various forms of machine learning possible today challenge us to think about what learning means.

Computers at grocery stores and online merchants analyze your buying habits, predict your preferences, and suggest other items you might be interested in—annoying, perhaps, but often eerily on target. These programs can be thought of as creating a ‘model’ of users, and many other programs can build even more sophisticated models of their environment and the people (users) with whom they interact. Computers with internal models, even models that refer to the computer itself, further erode the differences between human and machine and force us to think more carefully about who we are and what it is to be human.

Computers let us build better models of the world around us. They model the weather, earthquakes, buildings, biological processes, the human immune system, brains and even other computers yet to be built. These models allow us to diagnose problems, predict the future, and explain the world around us (and inside us). Computer science also tells us about the limitations of computing and therefore about the limitations of human understanding and influence.

In colleges and universities, computers play many roles. They provide metaphors and models for teaching and learning and self-analysis that apply to a wide range of

great deal about the challenges of industrial research and the important role of academic partnerships in a competitive market. Having seen industrial partners programs from both sides, he views IPP as a fine example of how a focused, flexible program can adapt to meet the changing needs of academia and industry. His research interests include computer vision, optical flow estimation, human motion analysis, probabilistic and stochastic algorithms and brain-computer interfaces. One of the attractions of Brown to him was this same mix of disciplines and the collaborative spirit here. He is excited to be part of the Brain Sciences Program.

Stay tuned as in the coming months we strengthen and expand our core offerings to Partners. At this transition point we are particularly interested in feedback from Partners about how IPP can help them remain competitive in the current market climate.

COMPUTER SCIENCE AS AN INTEGRAL COMPONENT IN HIGHER EDUCATION

Tom Dean


studies in philosophy, psychology, neuroscience and biology. Computers offer tools to extend our abilities and enhance the learning experience. They allow students and scientists to explore worlds that are difficult to access, that don’t exist yet, that never will. They let workers in disparate fields combine their special expertise in programs that provide insights into new, hybrid fields. Computer programs allow us to navigate in the sea of information, both new and old knowledge, that constantly threatens to drown us. (Of course, computers are also largely to blame for the glut of information, but few of us are willing simply to turn off the tap.)

The future of the discipline will comprise more of the same and then some: new models and metaphors, new types of computing that borrow from genetics, animal behavior, neurophysiology and the physics of the small and large. There will be new disciplines, computational this and that, in which computer models and tools provide fundamentally new insights and leverage. Increases in speed alone open up new opportunities and provide enhanced capability in much the same way that advances in optics and imaging devices opened up new fields by making visible the invisible.

Computer-mediated prostheses of all sorts with better interfaces will become increasingly common. Direct brain-computer interfaces already exist as cochlear implants for the hearing-impaired, with retinal implants to restore vision and direct neural shunts to control artificial limbs not far behind. Complicated artifacts like airplanes and large buildings rely on computer tools to create and then test their design. Bioengineering depends on advances in computer design to progress, and the new technologies will serve to reengineer the human body and brain.

The way modern software is produced is influencing how people work in multidisciplinary studies where computers play a central role. In software production houses, programmers work in teams with one or a few persons responsible for each component of a large piece of software. Seldom nowadays is a solitary programmer responsible for a large software project. Indeed, some of the contributors to such a project are often not programmers at all but rather computer-savvy experts in disciplines from human factors and psychology to business and law. Key to a successful product are clear communication among the team members and the ability to decompose large, complex problems into tractable components encapsulating the different sources of expertise. Such projects force us to define appropriate vocabularies at the right level of abstraction to support discourse at the boundaries between components and their encapsulated expertise. The technical and organizational skills that produce successful software projects are also crucial to multidisciplinary efforts, especially those in which computers are used to integrate different sources of knowledge.

Computational molecular biology, combining among other fields molecular biology, organic chemistry and computer science, is a good example of a discipline that lends itself to collaborative research, with computers and computer scientists playing myriad supporting roles. Computational pharmacology, sequencing and related genomic studies, evolutionary molecular biology, and protein-folding analyses depend on scientists from several fields working together, often physically in the same space but sometimes only virtually, using computers to facilitate almost every aspect of their communication, data gathering and analysis. Similar revolutions are afoot in economics, brain science, and a host of other disciplines, and we’re just beginning to understand the benefits of visualizing large data sets from financial, health care and government census sources.

Computer science has always thrived on interactions with other disciplines. In particular, computer scientists are fasci-

The technical and organizational skills that produce successful software projects are also crucial to multidisciplinary efforts, especially those in which computers are used to integrate different sources of knowledge


KANELLAKIS LECTURE SERIES INAUGURATED

On November 29, 2001, the department hosted the first annual Paris Kanellakis lecture. Dr. Mihalis Yannakakis, of Avaya Laboratories, spoke on “Progress in System Modeling and Testing.” In his introductory remarks, chairman Tom Dean said, “This is a celebration of two quite extraordinary people, both good friends and colleagues of the department, and, coincidentally, both Greek and exceptionally talented theoretical computer scientists....I’m absolutely sure that Paris would be ecstatic to have Mihalis here for the day to talk with our faculty and students and to give this lecture.”

This lecture series honors Paris Kanellakis, a distinguished computer science theoretician who was an esteemed and beloved member of this department. The deaths of Paris and his family in a December 1995 airplane accident in Colombia continue to be a profound loss of which we are especially reminded towards the end of each year. We are therefore all the more delighted to have inaugurated this lecture series in his memory. Subsequent Kanellakis lectures will be held around Paris’s birthday, December 3.

If you’d like to be on the mailing list for

problems, these interactions also spawn new ways of thinking about computation. Computer science has its own growing base of knowledge ranging from the purely pragmatic to some of the most beautiful mathematics of the past century. The discipline is deep as well as broad; there are still fundamental open problems that stand in the way of understanding computation and making it work for society.

Beyond the scientific and technical issues, computer technology raises a host of moral and ethical problems for society to wrestle with in the coming years. We aspire to produce civic-minded scientists and technologists and scientifically literate and technologically savvy citizens. Just because we have the technology to do something doesn’t mean we should do it. Computers let us scrutinize the lives of individuals at a frighteningly intrusive level. We are only just beginning to address the security and privacy issues that arise as society becomes increasingly dependent on networked computers. Universities, as creators and first adopters of new technology, have the opportunity and the responsibility to tackle head on the ethical and moral issues it raises. We’ve just begun to realize the social implications of new computer technologies. Soci-

nated with the brain and with biological systems in general. Artificial neural networks, now a very useful technology in pattern recognition and machine learning, may have very little to do with how real neurons work, but new insights from neuroscience are informing new models of computation that promise even more useful technologies. Nature is a wonderful source of technical hints and demonstrations of things we wouldn’t have dreamed possible. Computer science is both a consumer of ideas from and a producer of models and tools for the hard and soft sciences. And the metaphors and lessons of computing permeate our culture and society at all levels. Computers and computer-mediated communication are changing how we relate to one another in language and the visual arts, and the new accessibility of the written word, coupled with new ways of linking and cross-referencing documents, is profoundly affecting scholarship in all disciplines.

As long as computer science remains open to other disciplines, it will have no chance to rest on its laurels or drift into a phase of purely incremental progress. While interactions with other disciplines provide new applications, and there is no denying that computer scientists love to solve

Dr. Yannakakis surrounded by Kanellakis Fellows, from l to r: Ioannis Vergados, Ioannis Tsochantaridis, Nikos Triandopoulos, Aris Anagnostopoulos, Manos Renieris and Christos Kapoutsis (of MIT)


On November 6, 2001, we hosted an IPP Symposium on ‘Component Software and Technologies’. Steve Reiss and Pascal Van Hentenryck suggested the theme and helped me find suitable speakers. Despite the lull prompted by corporate spending cuts and depression over recent events, we had a packed room and rather lively debates.

Components have been in the public eye for a while now, the topic of many successful books, conferences and papers. The challenge with any technology this popular is to separate the wheat from the chaff: to understand what it really contributes and, in particular, to avoid merely using a new term for stale ideas. To work towards this understanding, we incorporated a broad diversity of perspectives spanning industry and academia, users and researchers, foundations and pragmatics, and I think our effort was fruitful. While I doubt we resolved major questions to everyone’s satisfaction, the event did remind me of an exchange I’ve seen attributed to Samuel Johnson. His correspondent writes, “I have read your article, Mr. Johnson, and I am no wiser than when I started.” The good Doctor responds: “Possibly not, Sir, but far better informed.”

I inaugurated the day with a talk that attempted to offer an overview of components. Preparing the talk was actually an unusually enlightening experience for me. In trying to understand the provenance of components and why they felt inevitable, I came fully to appreciate the wisdom behind the work on software product lines. As a result, my talk was half overview and half (disguised) manifesto for a certain design methodology to accompany components. Finally, I provided a rationale for the choice of remaining speakers (notice that I cleverly did this at the end of my talk, so I didn’t need to excuse my own presence).

The features of components that make them salutary for software development have a darker side: they can inhibit the cross-component reasoning necessary for validation and optimization. This poses new challenges for compiler authors. Mark Wegman of IBM presented an incipient effort on compiler optimization in the context of components. This is ambitious work and it will be exciting to see their experimental results.

Jean Laleuf, on behalf of members of the Exploratories project at Brown, made a brief presentation on their experiences with components. Exploratories is an experiment in creating a framework for rapidly prototyping graphical and other tools for interactive exploration. Jean briefly outlined his group’s successes and frustrations with components. He also demonstrated the software during the breaks.

Karl Lieberherr of Northeastern University has been refining programming technology in interesting directions for several years. He observed that a good deal of administrative code in object-oriented systems addresses traversing object graphs. He handles this through traversal abstractions, which let programmers separate code into declarative specifications of the traversal and concrete specifications of behavior. His recent work on traversals integrates them with components, and he described this approach, which he calls aspectual components, in his talk.

ety will soon have to consider the rights of and laws governing, first, augmented human beings, and then, true artificial intelligences of steadily increasing capabilities.

A good argument can be made that computer science literacy, at least at the level of models, metaphors and the basic notions of algorithms and data structures, is necessary in a liberal arts education. Within academe, students come to computer science to learn valuable skills and powerful tools, to encounter deep mathematical ideas, and to acquire models and metaphors that enrich their other studies and provide insights into being human. Computation is changing how we relate to one another and to society. It is transforming society and will continue to do so. Most computer scientists today feel tugged this way and that by all the interrelations of their discipline to the world around them and the dependencies and responsibilities these interrelations entail, but few would have it any other way.

Shriram Krishnamurthi

THE 28th IPP SYMPOSIUM


Finally, Jim Waldo offered a rousing interpretation of Sun’s view of the next generation of components. He pointed out that the network forms the ultimate abstraction boundary between computations; the increasing emphasis on erecting abstractions implies that the natural way to separate components is to distribute them across a network. He explained Sun’s vision of using Java as the language of exchange among these components, and ended the day with an inspiring vision of what components might do.

I’d like to thank the speakers for their time—some of them came to participate from two time zones away. They humored my impositions, such as a request that they offer a concrete definition of the notion of components they were employing so we wouldn’t talk at cross purposes. Some of the speakers even subjected their definition to iterative refinement. The mix of experiences (and personalities!) led to several interesting exchanges, and many people lingered to continue over hors d’oeuvres.

Finally, I have to let my faculty colleagues in on a dirty little secret: running an IPP Day is easy! Suzi and the other administrative staff (Fran in particular, in this case) do all the preparation and execute it not just flawlessly, but with an unmistakable and irrepressible panache. (Unfortunately, however, this time Suzi did not match her clothes to the tablecloth, more’s the pity.) I’m just the mook who

Jay Lepreau of the University of Utah tackled a long-standing, niggling concern about software engineering: will its methods apply to constructing low-level, performance-oriented software? Jay’s team has built three generations of operating-system components to help users rapidly configure and build custom kernels—a domain in which excessive cost introduced by abstractions becomes immediately apparent to users. They have found that naive notions of components are insufficient for this task, but are converging on a combination of programming-language support and component technology that is proving very successful.

Mirek Kula and Bob Rogers represented our industrial partner GTECH, one of the most successful providers of lottery technology. This description is deceptive: in fact, they must meet extremely rigorous technology standards. For instance, the average user, standing impatiently in a queue at a gas station, is relatively uninterested in the details of computer systems: he wants to buy his ticket and hit the road as soon as possible. Mirek and Bob outlined many of the fascinating technical challenges that confront GTECH. They then gave a sobering report on successes and failures of their adoption of component technology. In particular, they explained how they had been let down in their expectations (encouraged by the press) of a “component marketplace.”

IPP Symposium speakers clockwise from top left: Jean Laleuf, Brown CS Graphics Group; Mark Wegman, IBM; Jay Lepreau, U. of Utah; Bob Rogers, GTECH; Karl Lieberherr, Northeastern U.; host Shriram Krishnamurthi; Jim Waldo, Sun; Mirek Kula, GTECH


Andy van Dam is delighted to report the transition of the chairmanship of the CS department at the University of Washington from one Brown CS graduate to another! Ed Lazowska ’72 has stepped down in favor of David Notkin ’77. Said Andy, “new chair Notkin is also one of ours, so the Brown line continues...” Below, in celebration, Notkin (l) and Lazowska raise the official brew of UW, Arrogant Bastard Ale!

------8<------

For the CS 2 mid-term, Don Stanford had planned to give any Hawaiian shirt wearers a prize. Surprisingly, however, 25 brilliantly beshirted students showed up! He had purchased only five gifts—the Patriots Yearbook issue of

lishes the good old days, for we tend to filter out the unpleasant aspects.

Yet it seems almost impossible that thirty years ago (yes, just thirty years ago), very little of today’s imposing infrastructure was in place: the wide urban expressways with their perfect pavement (a far cry from the pot-holed streets of our Providence) lined with gracious umbrella-shaped trees and surrounded by well-appointed gardens, the international flair of the shopping district, the awesome skyscrapers of the financial section, where architectures compete in size, variety, materials, and a bit of extravaganza. Here everything is exotic and nothing is. All the familiar American icons (McDonald’s, KFC, Pizza Hut) are ubiquitous, as are the exclusive Italian designer shops (Gucci, Armani) and the Japanese department stores (a Takashimaya more glittering than its counterpart in Kyoto). All cuisines are available, and in abundance; there is no notion of ‘ethnic food.’ Here more than elsewhere one perceives the reality of globalization: anything that is available somewhere is available here. It is also very hot (perpetual summer), but I don’t think this is attributable to global warming.

Beyond these broad impressions of everyday living, one could also comment on this incredible experiment in social engineering that has guided a racial kaleidoscope to a transition from a semidilapidated British colony to a first-world little nation. But this would make my discourse too serious and deflect me from my commitment to broad-brush impressions.

I write this letter soon after sunrise (always about 7am on the equator) and soon I’ll go to my spacious office at the National University of Singapore, a modern institution growing according to the American

fronts for these incredibly hard-working people. For more information about the day, please see http://www.cs.brown.edu/industry/ipp/symposia/ipp28/

If someone had applied for a sabbatical leave in Singapore thirty or forty years ago, the request might have seemed a suspicious and dubious way to smuggle in an exotic trip in the guise of a professional endeavor.

Some people may still have the same suspicion; in which case they need a refresher on today’s sociopolitical realities. I remember first coming across the name ‘Singapore’ in my childhood adventure books, an exotic cove of pirates surrounded by luscious jungles, where fierce predatory animals roamed freely. Later, W. Somerset Maugham depicted a farflung outpost of the British empire where a unique community of colonists, merchants, adventurers, and demimondaines had recreated a comfortable microcosm in the image of faraway ‘home’. Vestiges of that epoch and society remain more in the nostalgia of the warm accounts of a bygone time than in actuality. Nostalgia, of course, always embel-

LETTER FROM SINGAPORE

Franco Preparata


model, whose president, by the way, is Prof. C. H. Shih, until recently a colleague of ours in Brown’s Division of Engineering. In my office I will meet a young Singaporean colleague who has, I am sure, worked into the wee hours to fine-tune a sequence reconstruction program: an enthusiastic, very competent and resourceful collaborator in my research in computational biology. Such an interface certainly did not exist thirty, forty years ago. This is why a sabbatical leave in Singapore is today a totally appropriate undertaking.

In "Future Interfaces: An IVR Progress Re-port,” Andy van Dam, David Laidlaw andRosemary Simpson summarize importanttrends in user interfaces for immersive virtualreality. Section 6 of this paper, which de-scribes ongoing work at Brown on IVR andscientific visualization, is reprinted below; thecomplete paper will appear in Elsevier’sComputers and Graphics, Vol. 26, No. 4,available online in August.

WHERE ARE WE NOW?

At Brown we have been exploring a number of different IVR applications to study UI research issues in IVR. We are collaborating with domain experts in several fields to develop applications that address driving problems from other scientific disciplines, believing that such collaborations will help us to factor out common patterns from the problems in various disciplines to develop IVR interaction metaphors and visualization techniques that can be generalized. The collaborations also let the domain experts validate new techniques and ensure that they are responsive to the needs of real users. We study our visualizations in a Cave, a cubicle in which high-resolution stereographics are projected onto three walls and the floor to create an immersive virtual reality experience. Our scientific application areas include archaeological data analysis, biological fluid flow (bioflow) visualization, brain white-matter analysis, and Mars terrain exploration.

The visual characteristics of a virtual world are coupled with the user interface. We draw inspiration from artistic techniques to support our communication goals; art history, pedagogy, and methodology, together with art itself, provide both source material and a language for understanding that knowledge. Perceptual psychologists have also developed a body of knowledge about how the human visual system processes visual input; these perception lessons aid in designing the visual representations of IVR user interfaces.

Beyond visual inspiration, perceptual psychology also brings an evaluation methodology to bear on scientific visualization problems. Evaluating visualization methods is difficult because the goals are difficult to define and meaningful evaluation tests are difficult to design. Similar issues arise with the methods of evaluating how the human perceptual system works. In essence, we are posing hypotheses about the efficacy of user interfaces and visual representations and testing those hypotheses using human subjects, an experimental process that perceptual psychologists have been developing for decades. Perceptual psychologists, in close collaboration with domain experts and artists, are helping us develop a methodology for evaluating visualization methods.

EXPERIMENTS IN IMMERSIVE VIRTUAL REALITY FOR SCIENTIFIC VISUALIZATION

Andy van Dam, David Laidlaw and Rosemary Simpson

Sports Illustrated. While the class took the exam, Don rushed to Thayer St. for more magazines, returning with the only Sports Illustrated issue he could find in bulk--the swimsuit edition!

------8<------

Both CS nominees for the 2002 undergraduate CRA awards, Harry Li ’02 and Rachel Weinstein ’02, were selected for honorable mentions.

------8<------

Alumnus John Crawford ’75 has been named to the National Academy of Engineering. Said John in a recent email to Andy, “Distinguished Gentlemen: It is a great honor to join you all in the Academy of Engineering. Thanks especially to Professors van Dam and Brooks for your training and inspiration!” Dr. Crawford is director of microprocessor


Visual design and interaction design come together to create a virtual environment. Virtual environment design, however, goes beyond the typical domain or training of a single designer. Many of the design issues are similar to those of varied other areas of design from which we can learn and draw inspiration. Components of architectural and landscape design are relevant for creating and organizing virtual spaces, sculpture and industrial design for finer-grained 3D parts of the environment, illustration for the visual representation of scientific data and the surrounding environment. Traditional user-interface design is applicable to some parts of the interaction design, and animation to other parts. The design process is different for all these types of designers, and getting all of the pieces to play together effectively is a challenge, particularly with the constraining needs of the scientific applications.

ARCHAEOLOGICAL DATA ANALYSIS

Archave, created by Eileen Vote and Daniel Acevedo, was one of Brown’s first Cave applications. It was developed in cooperation with Martha Sharp Joukowsky of Brown’s Center for Old World Archaeology and Art and Brown’s NSF-funded SHAPE lab, which was set up to develop mathematical and computational tools for use in archaeology. Archave provides virtual access to the excavated finds from the Great Temple site at Petra, Jordan. Here we examine some of the user interface and design issues arising during the system’s design, development, and testing.

Our application design had archaeologists work in the Cave for multi-hour sessions on analysis tasks that would have been quite difficult using traditional analysis tools. They were able to support existing hypotheses about the site as well as to find new insights through newly discovered relationships among the over 250,000 catalogued finds. This experience, while successful, raised many research issues in user interface and visual design, many of which involve taking the Cave experience beyond the demo stage and making it truly useful for archaeologists.

Multivalued visualization: In this archaeological application, each of the many thousands of artifacts is described by dozens of attributes, such as Munsell color, date, historical period, condition, shape, material, and location. Furthermore, the relationships among the artifacts and with the trenches from which they are excavated also must be represented in an easily distinguishable way.

As in all design issues, the driving force behind the visual representation is the task to be performed with it. For the relatively simple demo-like evaluations performed so far, only a few attributes of the data are displayed—those essential to a specific task. As the tasks become more exploratory, the visual mapping will need to become more complete and simple designs will no longer suffice. Here is where we hope to exploit perceptual psychology and art history for inspiration as well as studying the design process itself.

Design: Design issues are omnipresent in our Cave applications and several from Archave deserve mention. First, the visual context of the data representation, as well as the visual representation of artifacts themselves, is important in interpreting the data. Initially we used realistic representations for artifacts within a plausible reconstruction of the temple (Figure 1). We found, however, that a more primitive iconic representation of the artifacts, with important data components represented by shape, size, and color, was easier to analyze (Figure 2). Our Archave users, who had typically been involved in the excavation process, were also distracted by the reconstruction, since they were far more familiar with the present-day ruins. They were able to work

architecture at Intel Corporation; he received this highest of such awards for the architectural design of widely used microprocessors.

------8<------

David Laidlaw spent part of his sabbatical last fall taking an introductory oil-painting class in the Illustration Department at RISD. Because some of his scientific visualization research has built on ideas from oil painting, he hoped that the class would provide some new inspiration. He reports that, in addition to feeling a bit older after being twice the average age of the other students, he looks at the world through better-trained eyes. He won’t, however, be changing his vocation. Some of his continuing collaborative efforts with RISD faculty are also described in the article on page 16.

Figure 1—Archave image from the full reconstruction version of Archave

Figure 2—Archave representation of excavation site with artifacts


more than those in some other disciplines, to fatigue and discomfort in the Cave. This sensitivity seems to be shared by many artists as well. This result is surprising, running counter to an expectation of little disorientation because of the “familiar” and simple spatial nature of the depiction, and must be further investigated.

BIOFLOW VISUALIZATION

Our second scientific application, in collaboration with bio-engineer Peter Richardson of the Division of Engineering, CFD specialists George Karniadakis and Igor Pivkin of Applied Math, and IVR specialists Andrew Forsberg, Bob Zeleznik, and Jason Sobel of Computer Science, is a virtual environment for visually exploring simulated pulsatile blood flow within coronary arteries. Cardiologists and bio-engineers hypothesize that characteristics of the flow within these vessels contribute to the formation of plaques and lesions that damage the vessels. A significant fraction of the population suffers from this problem. We conjecture that IVR is an appropriate way to develop a better understanding of the complex 3D structure of these flows.

In Figure 3, a scientist uses our system to study simulated pulsatile flow through a model of a bifurcating coronary artery; lesions typically form just downstream of these splits. IVR offers a natural exploration environment. The representation in Figure 5 is an overview of all of the data available as a starting point for exploration. Here the wire-mesh artery walls show quantities on them via variably distributed splats, yellow for pressure and green for residence time, and the flow is represented within the artery with particle paths that advect through the flow.

To make the view both fast enough to compute and possible to interpret, particle paths are concentrated in “interesting” areas. Only a subset of the possible paths is visible at any one time, permitting a faster frame rate. By cycling through which particle paths are visible, however, all the time-varying flow can be shown over a period of viewing time. The application also supports some interaction with the flow, including placement of persistent streaklines, uniform coloring or deletion of all particles that pass through a specified region, and controls for the rate, density,

more effectively with a model of the present-day ruins as context in which artifact concentrations are shown as simple 3D icons in saturated colors among a muted view of the present-day ruins and transparent trench boundaries.

Scale and navigation: Another research issue concerns the intertwined matters of scale and navigation. Should we work at full scale? In miniature? What are the tradeoffs for archaeologists studying a site like Petra that is the size of three football fields? For a scale model that fits within the Cave, navigation can be primarily via body motions. Larger scales require longer-distance navigation, and the virtual world must move relative to the Cave, obviating any fixed mental model a user may have created. Motivated in part by these multiscale navigation needs of Archave, Brown’s Joe LaViola developed step-WIM navigation, in which the user employs gestures to access a world-in-miniature for navigating large distances and familiar body-motion navigation for shorter distances.

To make the view both fast enough to compute and possible to interpret, particle paths are concentrated in "interesting" areas. Only a subset of the possible paths is visible at any one time, permitting a faster frame rate. By cycling through which particle paths are visible, however, all the time-varying flow can be shown over a period of viewing time. The application also supports some interaction with the flow, including placement of persistent streaklines, uniform coloring or deletion of all particles that pass through a specified region, and controls for the rate, density, and for defining the "interesting" areas to emphasize.
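As a concrete illustration of the path-cycling idea described above, here is a minimal sketch in Python. The analytic velocity field, the forward-Euler integration, and the path and group counts are all invented for illustration; the article does not specify how the real application advects or batches its particle paths.

```python
# Sketch: advect particle paths through a (made-up) pulsatile velocity field,
# then cycle through fixed-size groups of paths so each frame draws only a
# subset, while all paths still get shown over a span of frames.
import numpy as np

def velocity(p, t):
    """Hypothetical analytic stand-in for the simulated pulsatile flow."""
    x, y, z = p
    pulse = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)   # crude pulsatile modulation
    return pulse * np.array([-y, x, 0.1])         # swirl with slight axial drift

def advect_path(seed, t0=0.0, dt=0.01, steps=200):
    """Trace one particle path with simple forward-Euler integration."""
    path = [np.asarray(seed, dtype=float)]
    t = t0
    for _ in range(steps):
        path.append(path[-1] + dt * velocity(path[-1], t))
        t += dt
    return np.array(path)

# Precompute many candidate paths seeded in an "interesting" region.
rng = np.random.default_rng(0)
paths = [advect_path(s) for s in rng.uniform(-1.0, 1.0, size=(300, 3))]

def visible_subset(frame, paths, group_size=50):
    """Return the group of paths to draw this frame; successive frames
    rotate through the groups, bounding per-frame rendering cost."""
    n_groups = (len(paths) + group_size - 1) // group_size
    g = frame % n_groups
    return paths[g * group_size:(g + 1) * group_size]

for frame in range(6):                            # stand-in for the render loop
    drawn = visible_subset(frame, paths)
    print(f"frame {frame}: drawing {len(drawn)} of {len(paths)} paths")
```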


Figure 3—Bioengineer studying pulsatile flow within an idealized virtual model of a bifurcating coronary artery. Viewing this scene with head-tracked stereo glasses causes particles and the textured tubular vessel wall to "jump" into the third dimension and become much clearer

Figure 4—Synoptic view of interior of the artery, including streaklines, annotations, sponges, pierce planes, pressure splats and particle paths


The synoptic view provides a starting point for exploration that gives more insight than an empty space. Other parts of the interface give a way to explore the synoptic view, adjust it, and annotate features as they are discovered. Thus far, we have been able to understand the simulation process that creates our flow data better than has been possible with traditional workstation-based tools. We are just starting to find new arterial flow features.

Beyond the perennial issues of frame rate, tracker lag and calibration, and model complexity, some more subtle ones emerged as we developed this application.

Multivalued visualization: As in the Archave project, this application has more different types of data than we can visualize at once. In addition to the velocity within the artery, which is a 3D vector field, pressure and residence time are defined on the walls of the vessel. Beyond those quantities, our fluid flow collaborators want to look at the pressure

gradient, vorticity, other derived quantities in the flow, and the structure of critical points.

Our synoptic visual approach for displaying this multivalued data is partly motivated by Interrante's work demonstrating that patterns on nested surfaces are more effective at conveying shape than transparency. We carry that over to a volumetric display of particles and also use motion to increase the apparent "transparency". We also use layering of strokes. With these principles we can display all the data with very little occlusion as a starting point for study of the flow. We are also exploring other, more feature-based visual abstractions for the flow showing, for example, critical points and their relationships, coherent flow structures, or regions of the flow that may be separating. These have the potential to abstract the essence of a flow more efficiently, although at the risk of missing important but unexpected flow behavior.

What makes a visual representation effective? For understanding steady 2D flows, important tasks that are simple enough for a user study include advecting particles, locating critical points, and identifying their types and the relationships among them. The same tasks are likely to be relevant in 3D, but there are likely to be other simple tasks that we haven't identified yet, and there are certainly complex tasks that are beyond the scale appropriate for the quantitative

statistical evaluation needed for a usability study. The discovery of such measures of efficacy will clearly influence the design of visualization methods.

Scale and navigation: While the scale and navigation issues are somewhat similar to those of Archave, they differ in that Archave has a natural life-size scale while arterial blood flow requires a more "fantastic voyage" into the miniature. When should our artery model be life-size? Large enough to stand in? Somewhere in between? Under user control? Any one such specific question could probably be answered given some context and a user study. But would the results generalize to different navigation strategies or to a different visual representation? For our synoptic visualization, a model about six feet in diameter seems to give the best view of flow structures of interest from inside. Smaller-scale models are too difficult to get one's eyes inside. We have explored several navigation metaphors, including wand-directed flying, direct coupling of the virtual model to the hand, and a railroad-track-like metaphor with a lever for controlling position along a path down the center of the artery. Each has strengths and weaknesses; none is ideal in all situations. Design issues like these of scale and navigation, both at a specific and at an integrative level, have become an important part of our ongoing research.

Our group at Brown is working with half a dozen Rhode Island School of Design (RISD) faculty to address some of these issues. Our goals are threefold: first, to develop new visual and interaction methods; second, to explore the design process; and third, to develop a curriculum to continue this exploratory process. For the last four months we have held a series of design sessions and have built a common frame of reference among the designers, computer graphics developers, and

domain scientists; we are ready to proceed with new designs. We will offer a Brown/RISD course next fall to 6-10 students from each institution.

Design iteration costs: One of the highest costs of developing IVR applications is iterating on their design. Archave took almost three years to go through four significant design phases. For our bioflow application, we have implemented three significantly different designs in about 18 months. Parts of this process are likely to be essential and incompressible, but we believe that other parts are accidental, to use Brooks' terminology. A Brown graduate student, Daniel Keefe, is working together with our RISD collaborators to find ways of speeding it up. Initial efforts build on his CavePainting application, which supports quick sketching of visual ideas in the Cave.

Productivity: As with Archave, scientists would be more productive if they had tools for annotating their discovery process, facilities for taking results away, and some way to build on earlier discoveries. We are exploring a number of possible alternatives that would let researchers take their notes and versions of the applications away from the Cave environment.

BRAIN WHITE-MATTER VISUALIZATION


In a collaborative biomedical effort developed by computer science graduate students Cagatay Demiralp and Song Zhang together with, among others, neurosurgeons from MGH and brain researchers from NIH, we are designing a visualization environment for exploring a relatively new type of medical imaging data: diffusion tensor images (DTI). Humans are 70 percent water. The structure of our tissues, particularly in the nervous system, influences how that water diffuses through them. In particular, in fibrous structures like muscle and nerve bundles, water diffuses faster along the fibers than across them. DTI can measure this directional dependence. Measurements of the diffusion rate have the potential to help us understand this structure and, from it, connectivity within the nervous system. This, in turn, should help doctors better understand the progression of disease and its treatment in the nervous system. DTI provides volume images that measure this rate of diffusion at each point within a volume. Viewing these volumes is a challenge because the measurement at each point in the volume is a second-order tensor: a 3x3 symmetric matrix containing six interrelated scalar values.
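To make the data concrete, here is a minimal sketch of a single DTI measurement and one standard way to extract a principal fiber direction from it. Eigen-analysis and fractional anisotropy are common DTI practice, not necessarily the exact computation used in this environment, and the tensor values below are invented for illustration.

```python
# Sketch: one voxel's diffusion tensor is a symmetric 3x3 matrix, i.e. six
# independent values (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz). Diffusion is fastest
# along the eigenvector with the largest eigenvalue.
import numpy as np

D = np.array([[1.70, 0.10, 0.00],
              [0.10, 0.30, 0.05],
              [0.00, 0.05, 0.20]])   # strongly anisotropic, e.g. inside a fiber bundle

evals, evecs = np.linalg.eigh(D)     # eigenvalues in ascending order
principal_dir = evecs[:, -1]         # putative local fiber direction
mean_d = evals.mean()
# Fractional anisotropy: near 1 in fibrous tissue, near 0 where diffusion is isotropic.
fa = np.sqrt(1.5 * np.sum((evals - mean_d) ** 2) / np.sum(evals ** 2))

print("principal diffusion direction:", np.round(principal_dir, 3))
print(f"fractional anisotropy: {fa:.2f}")
```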

We generate geometric models from the volumes to represent structures visually and display them in the Cave. In Figures 5 and 6, an abstraction derived from a DTI of a human brain, different kinds of geometric models represent different kinds of anatomical structures: the red and white tubes represent fibrous structures, like axon tracts; the green surfaces represent layered structures, like membrane sheets or interwoven fibers; the blue surfaces show the anatomy of the ventricles.

Our experience thus far is that IVR facilitates a faster and more complete understanding of these complicated models, the medical imaging data from which they are created, and the underlying biology and pathology. Applications we are pursuing to explore this claim include preoperative planning of tumor surgery, quantitative evaluation of tumor progression under several conditions, and the study of changes due to surgery on patients with obsessive-compulsive disorder.

Once again, frame rate, lag, visual design, and interaction design are intertwined issues. Biomedical researchers using this application almost always want more detail and will tolerate frame rates as low as 1 FPS for visualization, even knowing that they must move their heads only very slowly so as to avoid cybersickness. This clearly detracts significantly from the feelings of immersion and presence. Worse, many interactions are virtually impossible at that rate, so we struggle to balance these conflicting requirements.

Multivalued visualization: These volume-filling second-order tensor-valued medical images consist of six values at each point of a 3D volume. Often, we have additional coregistered scalar-valued volumes. We continue to search for better visualization abstractions through close collaboration with scientists, since it is only thus that we can define what is important and what can be abstracted away. Exploring differences between subjects or longitudinally within one subject brings the additional challenge of displaying two of these datasets simultaneously.

Design: Beyond the data visualization design, we notice again, as with Archave, the importance of virtual context. Working within a virtual room, created by providing wall images within which the brain model is placed, gives users a subjectively more compelling experience. Several users report that the stereo seems more effective and that they can "see" better with the virtual walls around them. In collaboration with perceptual psychologists, we are exploring why.

Scale and navigation: As with Archave, full scale seems a natural choice. However, much as with our arterial flow visualization, users prefer larger scales, since at full scale they must struggle to see small features. They can make the projected image larger by moving closer to the virtual object, but then must squint and strain; also, they find it difficult to fuse the stereo imagery when features are up close. With the approximately 2:1 scale models used thus far, exploration has been almost exclusively from the outside looking in, unlike the flow visualization, where most of the exploration has been from within the flow volume. Could each area benefit from interaction techniques used by the other? Are there intrinsic differences?

Figure 5—Brain model use in Cave

Figure 6—Brain model in wireframe enclosure


Productivity: We continue to see productivity issues within the Cave. It is often run in batch mode, with time slots, limited access, and a (probably ineffective) urgency to be "efficient." Users want a Personal Cave they can sit inside and think in, with the computer doing nothing. Some of our most effective sessions have involved users working for a while, then sitting on the floor of the Cave talking, then stepping outside to look at things, then going back in to look some more.

Collaborating in a single Cave has proven very important—we almost always have at least two domain experts in the Cave at a time so that we can hear them talk to each other. One disadvantage is that they can't point with real fingers because only one viewer is head-tracked. In some ways, long-distance collaboration solves this problem because each viewer has his or her own Cave, but that introduces other problems of synchronization and communication. Users of this application were the first to ask for physical objects to augment the virtual environment—anatomy books and printed medical images, for example. Our current practice is to iterate: moving out of the Cave to work with reference books, keeping the image on the front wall of the Cave for reference, then returning to the Cave for further exploration.

MARS TERRAIN EXPLORATION

In the fourth and final IVR application discussed here, terrain from Mars is modeled for planetary geologists so they can "return to the field." The predominant means of visualizing satellite data is 2D imaging (at multiple resolutions) with color maps representing elevation and other terrain attributes. However, Mars Orbiter Laser Altimeter (MOLA) missions give a third dimension to the terrain. Incorporating topography information into analysis opens new avenues for studying geological processes. Our primary goal is to create a complete model integrating all of the available data that is interactively viewable. The geologists' ultimate goal is to understand the geological processes on Mars from the current surface structures, including the relationships among the strata that are partially visible on the surface. More pragmatically, studying the terrain lets us search for sites for further study and identify potential landing sites for future missions.



THE NOT-SO-GREAT MARMITE TASTE CHALLENGE!

British-born Suzi Howe grew up eating Marmite ("a brownish vegetable extract with a toxic odor, saline taste and an axle-grease consistency"). She had been shipped off to boarding school at the age of nine, where inmates subsisted in a Dickensian state of semi-starvation and Marmite loomed large in the diet, along with bread and dripping! After reading the entertaining article "Long Live Marmite! Only the British Could Love It" in the NYT's international section earlier in the year, she was eager

to share the Marmite experience with everyone in CS; in fact, there was no stopping her.

As you can see from the photograph, she set up a three-point Marmite tasting station. Participants were invited to 1) read the article, 2) take bread and spread on some Marmite, and 3), well, need we say more? She thoughtfully provided a notepad for comments, which ranged from "Let's get Mikey!" to "IDLH" (Immediately Dangerous to Life and Health—an OSHA term denoting a hazardous environment) and "I fear the British!" You will be spared the less savory (no pun intended) remarks.

Despite the NYT's classification of Marmite as "an emblem of enduring British insularity and bloody-mindedness," and the fact that the trash can pictured was full by day's end, Marmite has its mavens. Check out this website: www.ilovemarmite.com. And the less stout of heart and digestion might see: www.ihatemarmite.com. Rule Britannia!


In Figure 7, James Head of Brown's Geological Sciences Department uses a handheld iPAQ PDA, which provides convenient interactive input to control global position and rendering styles while flying over the terrain of Mars. Mars data available to the exploration process include elevations from the MOLA and color images of surface swathes from the Mars Orbiter Camera (MOC). The altitude information we are currently displaying is on a 7200 x 7200 grid. The thousands of color swathes can be as large as 6000 x 2000 pixels, and each covers a small region of the surface.

Scale and navigation: What's the best scale for Mars study? At full scale, navigation becomes extra-planetary and unfamiliar. Smaller scales appear to be more appropriate, although the scale that we currently use requires hybrid navigation: flying for most movement coupled with a much smaller-scale map for taking larger steps. As with many of the other applications, users are more comfortable driving than being driven, particularly when moving backwards.

Performance: One of the most significant limitations for this application is performance. A consistent 30+ frames per second is very difficult to maintain. A naive approach to rendering the 7200 x 7200 heightfield yields about 100 million triangles! LLNL (Lawrence Livermore National Laboratory)'s ROAM (Real-time Optimally Adapting Meshes) software for view-dependent terrain rendering improves performance, but more progress is necessary. The geologists consistently request more detail, exacerbating the problem.
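A quick back-of-the-envelope check of those numbers shows why some view-dependent scheme is unavoidable; the sustained triangle throughput assumed below is hypothetical, chosen only to illustrate the gap.

```python
# Sketch: triangle count for a naive rendering of the full 7200 x 7200
# MOLA heightfield, versus a fixed per-frame budget at 30 FPS.
grid = 7200
tris = (grid - 1) ** 2 * 2                  # two triangles per grid cell
print(f"full-resolution mesh: {tris / 1e6:.1f} M triangles")   # ~103.7 M

throughput = 30e6                           # assumed triangles/second the system can sustain
print(f"naive rendering: ~{throughput / tris:.2f} FPS")        # far below the 30+ FPS target

budget = throughput / 30                    # what a view-dependent mesh (e.g., ROAM) must stay within
print(f"per-frame budget at 30 FPS: ~{budget / 1e6:.1f} M triangles")
```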

CONCLUSION

While IVR and scientific visualization have their own significant histories and accomplishments as separate fields, their intersection is less common and

considerably less mature. Because scientific visualization so often involves handling large data sets, it is a great stressor of all aspects of IVR technology. To create satisfactory user experiences, IVR's requirements of low latency, high frame rate, and high-resolution rendering and interaction handling (e.g., head and hand tracking) are much more stringent than those for workstation graphics. This may force severe restrictions on the complexity of the models and data sets that IVR can adequately handle. Such constraints put a serious burden of proof on IVR proponents to show that the advantages of the immersive experience outweigh the disadvantages of higher cost, of scheduling a one-of-a-kind institutional facility such as a Cave, and, most important, of complexity restrictions in what can be visualized.

Michael Black. In December Michael attended the Neural Information Processing Systems meeting in Vancouver, where he and his colleagues from the Division of Applied Mathematics and the Department of Neuroscience presented a paper on the probabilistic inference of arm motion from neural activity in the motor cortex. After the main conference he gave a talk at the workshop on Directions in Brain-Computer Interface (BCI) Research, held at Whistler. From snow-covered mountains Michael traveled to the sunny beaches of Kauai for the IEEE Conference on Computer Vision and Pattern Recognition, where he served as an area chair and presented a paper with Fernando De la Torre on Dynamic Coupled Component Analysis. Michael had two Ph.D. students graduate over the

last few months: Hedvig Sidenbladh from KTH (the Royal Institute of Technology) in Stockholm and Fernando De la Torre from La Salle School of Engineering in Barcelona.

In February, the Motion Capture Lab was installed at Brown. This state-of-the-art facility, a joint effort of Computer Science and Cognitive and Linguistic Sciences, will be used to capture 3D human motion for research in graphics, machine learning, computer vision, and human navigation.

Michael's research on human motion analysis was supported this fall by another generous gift from the Xerox Foundation.

▼▼▼

[email protected]

Figure 7—Geologist using PDA to aid Mars flyover


Amy Greenwald. Amy has recently won two major awards. The first is a five-year NSF CAREER grant, NSF's most prestigious award for new faculty members, which she received for her project 'Computational social choice theory: strategic agents and iterative mechanisms.' Amy has also become the CRA's Digital Fellow for 2002. She has been invited to the FCC to speak in May in support of their goal of building relations between academia and government. This program is supported by the National Science Foundation and

orchestrated by the Computing Research Association (CRA). It is intended to create ties between the academic and industrial computing research communities on the one hand and information technology workers in federal, state, and local governments on the other.

Finally, she mentioned casually, "...and I'm getting married next week!" We've held this issue in order to include a wedding photo... (the NYT wedding announcement is on our website—very romantic too).

Maurice Herlihy. At the Latin-American Theoretical Informatics (LATIN 2002) symposium in Cancun, Maurice was invited by the Iowa State University Department of Computer Science to give the Miller Distinguished Lecture. He spoke on Algebraic Topology and Distributed Computing.

John Hughes. Said Spike, "I've been Papers Chair for SIGGRAPH this year; it's been an interesting experience. For one thing, it's made me think about my professional organization in a long-term way rather than just as 'the place to publish my next paper.' A surprising number of the challenges of the job turn out to be balancing short-term tactics against long-term strategy. But there are a few other challenges: three days before our committee meeting, I got email telling me one of the committee members had just had an emergency appendectomy. Then, as the committee began to arrive in Marietta, Georgia, it turned out that the hotel didn't have enough rooms for us, even though we'd booked months in advance. Fortunately, I was blessed with incredibly competent folks from SIGGRAPH who dealt with the hotel, so I didn't have to do any yelling and screaming. But I'll never go back to Marietta.

"The two-day meeting was an interesting experience: my job was purely administrative—get every paper discussed, get decisions made on every paper, make sure it's all done as fairly as possible—and I never got to learn any of the technical details of any paper, so that by the end, the content of the program was almost a surprise.

"To keep things fair, whenever a paper from some company or lab was discussed, committee members with a conflict of interest would leave the room and watch the status of the paper on a monitor in the hallway showing a spreadsheet of all the papers. It was pretty difficult to see your paper discussed for a long time, and then see the critical cell in the spreadsheet turn red instead of green. It was especially tough to come back into the room and have to be the person who presented the next paper and tried to say how good it was. Michael Cohen, of Microsoft, had a great suggestion: for next year, we should

▼▼▼

▼▼▼

Amy Greenwald and her new husband

Justin Boyan

Photos courtesy of David Binder


buy a crate of cheap crockery from Building 19 and a hammer. Then when your paper was rejected, you could smash a plate or two before returning to the fray. I liked the idea so much I offered to help pay for it, so we may see 'Mike and Spike's stress-relief center' at future committee meetings.

"In the end, despite all the stress, we've got a terrific program, and it's going to be a great conference. (It's also going to be a hot one: we're in San Antonio in late July.) I hope I'll see some of you there."

Philip Klein. Wavemarket, the company for which Philip is serving as Chief Scientist, received venture funding in the fourth quarter of 2001, perhaps the only wireless software startup to do so. Philip will be returning from his leave to teach in the fall.

Shriram Krishnamurthi. Shriram served on the program committee of Foundations of Object-Oriented Languages 2002 and was program co-chair of Practical Aspects of Declarative Languages 2002. He co-authored award papers at both Foundations of Software Engineering 2001 and Automated Software Engineering 2001. His invited talk at Lightweight Languages 1 resulted in a rather frightening photograph in Dr. Dobb's (thanks, but yes, he's seen it; no, he doesn't need the URL; yes, he knows he won't live it down). More seriously, he is increasing his indoctrination in the Brown scheme of things, having co-authored a paper with Spike with the word "Graphics" in its title. Fortunately, this does not imply any concomitant knowledge of graphics.

Nancy Pollard. Nancy is program co-chair for the brand new ACM Symposium on Computer Animation to be held this July: sca2002.cs.brown.edu. There is a lot of great computer animation research

going on right now, and the goal of the new symposium, co-located with SIGGRAPH (the main computer graphics conference), is to give computer animation researchers a convenient forum for sharing their latest work.

Eli Upfal. Eli was elected a fellow of the IEEE; he was an invited speaker at a DIMACS workshop on "Computational Complexity, Entropy and Statistical Physics."

Don Stanford. Don, '72, Sc.M. '77, was the keynote speaker at the recent dedication of the Undergraduate Teaching Laboratories for the Division of Engineering.

Andy van Dam. Andy gave a distinguished lecture on "Immersive Virtual Reality for Scientific Visualization" at ETH Zurich. He has been selected to receive CRA's Distinguished Service Award for 2002, which will be presented at Snowbird on July 15. Also, Andy will receive the 2001 Harriet W. Sheridan Award for Distinguished Contribution to Teaching and Learning at Brown. Said Prof. William Risen, Chair of the Awards Committee, "Your faculty colleagues, both in and beyond your own department, cite the example you have established as a pioneer in computer science education, beginning with a course you developed in 1962. As a distinguished scholar, your willingness to take precious time to mentor students and junior faculty has inspired your colleagues. Your training of your undergraduate TAs is a model for all faculty. The Advisory Board of the Sheridan Center honors you for the inspiring example that you have set for your colleagues."

▼▼▼

▼▼▼

▼▼▼

▼▼▼▼▼▼

▼▼▼

▼▼▼

PERSONALS

RECOVERING gender-challenged paranoid schizophrenic wants understanding partner. [email protected].

SWMC++ looking for SF, no Java please.

F JAVA wants sensitive M to share art, romantic sunsets and garbage collection.

INTERFACE specialist seeks high-performance server. (Theoreticians need not apply.)

SWM seeks dominant F for concatenation and constraint programming.

A NOTE to my detractors: I have switched to Linux. D. Vader

Alumni and friends of CS are invited to submit personal ads. Contributions fit to print will be in the fall issue. We wish to acknowledge the significant contribution of Lynette Charniak, wife of the Unplugged one, who suggested this column.

CHARNIAK UNPLUGGED

I finally broke down and got myself (and my family) a cable Internet connection from our local cable company, Cox. Getting it was sort of a challenge, as Cox had been providing the cable but farming out the ISP (Internet Service Provider) part of the business to a separate company. This company had gone belly up about two weeks before I decided to join the 20th century (a year or two late, depending on your computation). However, the thing that really stopped me in my tracks was when Cox asked me if I had a systems backup disk for my IBM Windows system. The answer was that I did not, but that since the system was installed here in the CS department at Brown, if there was any problem I could simply bring it back here and they could re-install it. This explanation, however, cut no ice. No backup disk, no Internet installation.

At this point I went to see John Bazik, a long-time member of our technical staff. John, I am happy to say, solved the problem with the "out-of-the-box" type thinking for which Brown would like to be famous. He pulled out a blank disk, wrote "Windows XP backup disk" on it, and proudly handed it to me. I have saved the disk for the next Cox Internet customer here in the department.

One of my jobs here at Brown is to talk to prospective undergraduates who want to know more about our computer science department. On several occasions I have

had the chance to talk to children of colleagues of mine, and this last week Kathy McKeown, chair of computer science at Columbia and a well-known researcher in my area, and her daughter visited Brown. Her daughter was considering, among other possibilities, a program that stressed both computer science and art.

One of my selling points for Brown is the degree to which undergraduates can get involved in research here. I frequently tell prospective students, particularly those with some interest in graphics, that any time I have a student come to talk with me I can be confident that there will be an undergraduate working in the graphics lab here. Then after we have finished talking we perform the experiment by walking to the lab, and I ask the folks working there if any of them are undergraduates. Kathy's kind letter tells of the result in this case.

"Eugene -- I enjoyed talking to you on Tuesday about natural language, but I especially wanted to thank you for the selling points on Brown to my daughter. I can't tell you how much of an impression it made on



her that your experiment worked *and* that not only was the only student in the lab an undergraduate, but that he was also one who combined a humanities subject like archaeology with computer graphics, and that he had been to Jordan. Couldn't have done the trick better: she's now sold on a combination of art and computer graphics. And, for all I might have mentioned this possibility in the past, it had nothing like the effect of your experiment :)

So thanks!

Kathy”

My response:

"You are welcome. It sort of reminds me of the time we had a faculty candidate who came to this country from, I think, Greece, and on the way to dinner the group came across some people in traditional Greek costume who were on their way to a Greek-tradition-day event, or some such. The candidate asked us, semi-seriously, if we had set this up!

Eugene”

I was reading in the New York Times that legislators in Utah are threatening the University of Utah with dire consequences unless it revokes its rule against guns in classes. If I remember the details correctly, Utah has a new gun ordinance allowing anyone with a gun permit to carry the gun anywhere except grades K-12. This would, for example, include churches. (Or was it that churches were excluded, but K-12 was OK?) Utah had earlier made news when one clinically insane fellow was sold a gun when his background check showed that he

had no criminal record. The legislature deliberately allowed for this possibility since they did not want the police to start making decisions about who was sane enough to carry a gun. Thus it was not too surprising that they did not place much weight on the University's argument that intellectual arguments should be settled by something less definitive than lethal force.

Not being able to resist, I sent one of my colleagues at Utah (Ellen Riloff) email asking what she packs when going to class. I thought I was being pretty clever, but it is clear that I was just an effete east-coast liberal (EECL) thinking along party lines. Ellen responded that one of her Utah co-workers got e-mail from, no doubt, another EECL asking for advice about what sort of gun he should get when coming to the university for a research visit. Would a rapid-fire rifle get in the way too much?

This did get me to wondering, however, if Brown had any rules about guns in classes. It would be pretty funny if I was making fun of Utah for having such a rule, while Brown did not. So our editor called up the University General Counsel's office to ask. Fortunately, Brown does not allow students to own or use "firearms". (Student Handbook, XI, "Student Conduct", Article VIII of "Offenses" reads: "Possession, use, or distribution of firearms, ammunition or explosives. The

During this grading process, Suzi has missed seeing deer on the field in the evening—the only deer out there lately is a John Deere!




Printed on recycled paper. Address changes welcomed.

conduit!
A publication of the Computer Science Department, Brown University

❦

Inquiries to: conduit!, Department of Computer Science, Box 1910, Brown University, Providence, RI 02912
FAX: 401-863-7657
PHONE: 401-863-7610
EMAIL: [email protected]

WWW: http://www.cs.brown.edu/publications/conduit/

Jeff Coady, Technical Support
Katrina Avery, Editor
Suzi Howe, Editor-in-Chief

University defines firearms as any projectile firing device, especially those which are capable of causing harm to persons or damage to property. This includes but is not limited to conventional firearms (devices using gunpowder); all types of air rifles; BB, pellet, and dart guns; or any slingshot device.") Of course, this is not exactly the same thing as Utah's rule, as there is nothing here preventing the faculty from exercising their constitutional rights.

One of the jobs for Suzi Howe, our editor-in-chief, is laying out this rag. Naturally, the easiest way to do it is starting out at the front and working your way to the back. From my point of view this is both good and bad: good in that I get to procrastinate about writing my column, bad

in that occasionally Suzi says that there is a blank space at the end of the paper and she needs more words from me. This just happened about five minutes ago. However, while I was in her office discussing this, I noticed some pictures of what seemed to be a major construction project on Suzi's desk. It seems that Suzi's husband is a big baseball fan and has had, for the last 20 years or so, a baseball field in their "back yard". They are now doing the grading work to make it much more professional. The field has the exact dimensions of Fenway Park! Upon seeing this I pronounced it definitely "conduit!-worthy"—a term of art among those of us who make this periodical their hobby. And besides, throw in a construction photo (see previous page) and poof, no more extra space.