

REFEREED PAPER

Virtual City Design Based on Urban Image Theory

Itzhak Omer, Ran Goldblatt and Udi Or

Department of Geography and The Human Environment, The Environmental Simulation Laboratory, Tel-Aviv University, Ramat-Aviv, Tel-Aviv, 69978, Israel

Email: [email protected]; [email protected]; [email protected]

This paper aims to evaluate what effect applying residents’ urban image to virtual city design (a real-time virtual model of an actual city) has on wayfinding performance during a ‘flying-based’ navigation mode. Two experiments were conducted to compare two virtual city designs using the virtual model of Tel Aviv city. One design included highlighted urban elements from the residents’ urban image, while the second design included no highlighted elements.

The experiments proved that using the elements of the residents’ urban image in a virtual city design enhances the performance of all participants in the wayfinding tasks, especially those with a low level of spatial knowledge. Analysis of the trajectory patterns and the verbal reports of the participants during navigation showed that the urban image design facilitates a more intensive use of a position-based strategy, in addition to the path-integration wayfinding strategy, which was found to be dominant in the virtual model without the highlighted urban image elements. On the basis of these findings we propose principles for designing virtual cities from a wayfinding perspective.

Keywords: geovisualization, virtual cities, urban image theory, wayfinding strategies, Virtual Environment design

The Cartographic Journal Vol. 42 No. 1 pp. 1–12 June 2005 © The British Cartographic Society 2005

1. INTRODUCTION

A virtual city is a real-time model of an actual city that enables the user to walk through or fly over a certain area. Such models have been constructed recently for many cities, e.g. Los Angeles, Philadelphia, London, Barcelona, Glasgow, Tokyo and Tel Aviv, thanks to improvements in geovisualization tools (computer graphics, GIS, etc.). Currently, the research in this field tends to concentrate on the models’ technological dimensions and their implementations for supporting urban planning and various decision-making processes (Fisher and Unwin, 2001; Laurini, 2001; Jiang et al., 2003). However, with a few exceptions, which include a conceptual discussion on cognitive issues for virtual environment design (Slocum et al., 2001) and a consideration of wayfinding aspects in virtual city design (Bourdakis, 1998; Omer et al., forthcoming), little attention has been paid to the wayfinding difficulties that characterize these models and their design implications.

Virtual cities are unique when compared to other geographical representations of the city, such as maps, aerial photographs or static 3D models, due to the real-time movement within them, which is characterized by a high speed of locomotion, different 3D viewing perspectives and varying geographical scales. These characteristics of virtual cities could entail non-intuitive and unfamiliar user behavior, resulting in wayfinding difficulties for the users, i.e. difficulties in locating their current position and finding their way to a desired location. In addition, users may experience difficulties of orientation just as users of any desktop virtual environment (VE) do. These difficulties are related to the lack of ‘presence’, i.e. ‘the participant’s sense of “being there” in the virtual environment’ (Slater et al., 1994), to perspective distortions and to the use of standard input devices that might affect performance during navigation (Darken and Sibert, 1996; Ruddle et al., 1997; Harris and Jenkin, 2000; Whitelock et al., 2000; Jansen et al., 2001).

Enhancing wayfinding performance in a virtual city design aims to help city residents transfer their image and spatial knowledge from the real city to its virtual model. Lynch’s urban image theory (Lynch, 1960) could be an appropriate tool to attain this goal since it enables us to see how city residents perceive their city. The urban image, or city image, is actually the overlap of many individual images, which, Lynch claims, ‘are the result of a two-way process between the observer and his environment. The environment suggests distinctions and relations, and the observer […] selects, organizes and endows with meaning what he sees’ (Lynch, 1960, p. 6). The underlying assumption is that the city image, which is obtained from sketch maps or interviews, provides information on the imageability of the city elements. Lynch defined imageability as a ‘quality in a physical object which gives it a high probability of evoking a strong image in any given observer’ (Lynch, 1960, p. 9). In discussing real city design by these elements, Lynch suggests they can be classified conveniently into five types: paths, edges, districts, nodes and landmarks, which should be patterned together to provide an imageable environment.


Though Lynch’s urban image theory has not been applied to the design of a virtual city, its efficiency in enhancing wayfinding has been proved in many other VE studies. It was found that the route-finding performance of VE users improved when familiar objects were placed within the VE compared to when no landmarks were used (Ruddle et al., 1997). In addition, the importance of the relations between Lynch’s element types for navigation enhancement is emphasized in VE studies; these relations have been found to help users structure their spatial representation at differing scales (Vinson, 1999; Darken and Sibert, 1996). While these studies do not involve real large-scale VEs, Al-Kodmany (2001) used Lynch’s theory as a framework when combining Web-based multimedia technology to assist residents and planners in visualizing a community in Chicago by ‘visualizing selected areas that were selected as most imageable by the residents themselves’ (Al-Kodmany, 2001, p. 811).

The aim of this paper is to study the effect that a virtual city design based on residents’ urban image has on wayfinding performance. To that end, two virtual city designs of Tel Aviv city were compared. (The virtual model of Tel Aviv city will be referred to in this paper as ‘virtual Tel Aviv’.) The first design did not include highlighted urban elements selected from the residents’ urban image, while in the second design highlighted elements were incorporated.

The conclusions of this study also have operative implications for the construction of virtual cities. One of the important decisions in this process is the selection of the urban objects to be represented by 3D models within the virtual environment. This decision also has an economic aspect, since constructing 3D models, mostly with photos of the facade textures, involves vast amounts of money and time. In cases where the urban image design is found to enhance wayfinding performance, the urban image framework can serve as an appropriate tool for this selection. For example, the current virtual Tel Aviv model does not yet include 3D models of buildings, and, therefore, this study can clarify whether these buildings could be selected based on the urban image elements.

In the next section, we describe virtual Tel Aviv, the experiments, and the methods used for their documentation and analysis. We go on to report the findings of these experiments. On the basis of these findings, in the fourth section we suggest principles for using residents’ urban image in the design of virtual cities. In the last section, we summarize the results of the study and note some further work.

2. METHODOLOGICAL FRAMEWORK

2.1 Virtual Tel Aviv

Virtual Tel Aviv offers the user a real-time, flying-based navigation mode over Tel Aviv city, an area of about 50 square km. The model was built in the Environmental Simulation Laboratory at Tel Aviv University with Skyline 4.5 software. Using this software, we interpolated a DTM point layer (at a resolution of 50 m) of Tel Aviv into a raster layer of the city’s altitudes. Then we added an orthophoto of Tel Aviv (at a resolution of 25 cm), and using these altitudes, we established a 3D visualization of the city terrain. Afterwards (in experiment 2) we added GIS layers of the Tel Aviv ‘urban image’ objects (paths, landmarks, nodes, edges and districts), as shown in Figure 1. These objects were highlighted by colour (different colours for linear and non-linear objects) and labels (text next to the object).

In order to construct the urban image of Tel Aviv, 32 residents of the city were asked ‘to draw a map of Tel Aviv and to draw the dominant elements in it (no more than 15 elements)’. We decided to limit the number of elements to 15 so that only the most imageable elements would emerge, as well as to create a common understanding of the assignment for all participants. We then gathered the data from the individual sketch maps into one aggregate map representing the residents’ urban image of Tel Aviv. In order to create a representative urban image, only those elements that appeared in more than two sketch maps were included in this aggregate map.
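
To make this aggregation step concrete, the sketch below (hypothetical element names and data structure, not the study’s data) counts how many sketch maps each element appears in and keeps those drawn in more than two maps:

```python
from collections import Counter

# Hypothetical input: one set of drawn elements per resident (up to 15 each).
sketch_maps = [
    {"coastline", "Ayalon Highway", "Rabin Square", "Dizingoff Square"},
    {"coastline", "Ayalon Highway", "Hayarkon Park"},
    {"coastline", "Rabin Square", "Dizingoff Square", "Azrieli mall"},
    {"coastline", "Rabin Square", "Hayarkon River"},
]

# Count in how many individual sketch maps each element appears.
frequency = Counter(element for drawn in sketch_maps for element in drawn)

# Keep only elements that appear in more than two sketch maps (the paper's criterion).
aggregate_image = {element: n for element, n in frequency.items() if n > 2}
print(aggregate_image)  # e.g. {'coastline': 4, 'Rabin Square': 3}
```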

2.2 The experiments

Twenty-four participants (15 male and 9 female), 26 to 58 years of age, took part in the experiments. None of these participants had taken part in drawing the sketch maps from which we evaluated the urban image for use in the experiments. All participants declared they knew the city of Tel Aviv ‘well’. To make sure they were familiar with the city, a list of nine well-known locations in Tel Aviv was read to them, and they were asked whether they knew their exact locations.

Two experiments were conducted by dividing the participants into two groups of 12. In experiment 1, the design of the virtual model did not include any highlighted landmarks, while in experiment 2 the design of the virtual model included highlighted urban elements from the residents’ urban image. (In this experiment we added the residents’ urban image as a GIS layer.) The participants of both experiments had to complete the following steps:

Phase 1: Each subject was provided with an A4 sheet of paper, on which the municipal borders of Tel Aviv were marked. In order to give the respondents reference points, we also marked two very familiar landmarks along the coastline of Tel Aviv — the new Tel Aviv port and the old Jaffa port. The main national highway (Ayalon Highway) was also marked (Figure 2).

All participants were given the same instruction: ‘Please mark each of the above sites on the map, as accurately as you can’. The nine sites were those which had been read to them at the beginning of the experiment, and they included six locations used in the wayfinding tasks (the clock tower in Jaffa, the Israel Museum, Habimah National Theater, City Hall, Yehuda-Maccabi Street and the central bus station) and three other central locations (the Tel Aviv Museum of Art, the Azrieli mall and the railway station). The locations of the nine elements on each map given by the participants were compared to their real locations, providing a mean distance error value for each participant. Such information allows us to define the quality of the participants’ spatial knowledge, a factor that might influence their behavior.
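
As an illustration of this indicator, the following sketch (coordinates and site names are placeholders, not the study’s measurements) computes a participant’s mean distance error as the average Euclidean distance between the marked and real positions of the sites:

```python
import numpy as np

# Placeholder coordinates in a planar metric grid; not real data.
real_locations = {
    "clock tower in Jaffa": (177500.0, 662300.0),
    "Habimah Theater": (179800.0, 664700.0),
    "City Hall": (179600.0, 665500.0),
}
marked_locations = {
    "clock tower in Jaffa": (177900.0, 662800.0),
    "Habimah Theater": (179200.0, 664100.0),
    "City Hall": (180100.0, 665000.0),
}

# Mean Euclidean distance between marked and real positions, one value per participant.
errors = [
    np.hypot(real_locations[s][0] - marked_locations[s][0],
             real_locations[s][1] - marked_locations[s][1])
    for s in real_locations
]
mean_distance_error = float(np.mean(errors))
print(f"mean distance error: {mean_distance_error:.0f} m")
```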


Phase 2: Participants were introduced to the virtual model of Tel Aviv on a 19" desktop monitor at the Environmental Simulation Laboratory. We explained to them how to use the flight simulator (using the keyboard as an interaction device): moving forwards and backwards, controlling the speed, stopping, moving up scale and down scale, and manipulating the viewed screen. To prevent the participants from seeing the orthophoto of the examined area (Tel Aviv), they practiced using the simulator on another area of Tel Aviv for 5 minutes, or longer if they felt (or we felt) they needed extra practice. We explained to them that they could fly at any speed and at any height they wanted.

Phase 3: This phase comprised two wayfinding tasks. In the first task, participants were asked to ‘fly’ to three different locations in Tel Aviv: from the clock tower in Jaffa to Habimah Theater; from the theater to Yehuda-Maccabi Street; and from there to the Israel Museum. In the second task, participants were asked to fly from the new central bus station to the city hall building (see Figure 3). In both experiments, the initial viewing angle was 90° and the viewing height was 315 meters above sea level. This setting allowed the user to clearly identify the object and its immediate surroundings.

It should be noted that the two tasks differed in the area covered and in the initial viewing conditions: the first task included the coastline as a dominant reference object in the area where the participants began the wayfinding task, while in the second task no reference object was visible in the immediate environment of the starting point (Figure 4). In addition, the areas covered by the tasks were urban areas of varying density and complexity. In each assignment, the participants were asked to tell us once they had identified the target location and to receive confirmation that it was indeed the one requested.

Figure 1. (a) The components and (b) the interface of virtual Tel Aviv


Figure 2. The municipal borders of Tel Aviv and the reference points which were marked


2.3 Methods of documentation and analysis

a. Tracking movement: In order to investigate individual wayfinding performance, we recorded all the participants’ real-time log data of movement (coordinates, speed, height, etc.) while they were using the model (including the area of the interface as it was seen by the user). Such tracking enabled us to perform quantitative analysis of the trajectory patterns of the participants during navigation. In order to obtain these trajectory patterns, the recorded real-time log data was converted to GIS layers and then visualized as polylines in the ArcGIS 8.2 environment. The statistical analysis was performed using SPSS 11.0 software.
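
A minimal sketch of that conversion step (not the authors’ actual processing code; field names and the EPSG:2039 Israeli grid CRS are assumptions) turns each participant’s logged positions into a polyline and writes the result as a GIS layer:

```python
import geopandas as gpd
from shapely.geometry import LineString

def logs_to_trajectories(logs):
    """logs: {participant_id: [(x, y), ...]} in chronological order."""
    rows = []
    for pid, points in logs.items():
        line = LineString(points)
        rows.append({"participant": pid, "path_length_m": line.length, "geometry": line})
    # Assumed CRS: EPSG:2039 (Israeli Transverse Mercator), so lengths are in metres.
    return gpd.GeoDataFrame(rows, geometry="geometry", crs="EPSG:2039")

trajectories = logs_to_trajectories(
    {"P01": [(178000, 663000), (178200, 663400), (178500, 663900)]}
)
trajectories.to_file("trajectories.shp")  # the layer can then be inspected in a GIS
```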

b. ‘Think-aloud’ method: A ‘think-aloud’ or ‘self-report’ method (Golledge, 1976; Darken and Sibert, 1996; Murray et al., 2000) was implemented to reveal and understand the strategies and thoughts of the participants during the assignments. The participants were asked to verbally explain to us everything that came into their minds during navigation (strategies, thoughts, questions, internal conflicts, decisions, etc.). When we felt they were not being descriptive enough, we encouraged them to elaborate and asked them what they were thinking about. Everything they said was recorded and later analysed. Each participant’s documentation was examined according to three categories: the geographical elements mentioned, wayfinding strategies, and difficulties during the wayfinding tasks.

3. RESULTS: HOW DOES THE URBAN IMAGE DESIGN INFLUENCE WAYFINDING IN A VIRTUAL CITY?

Using the data analysis of the documentation in experiment 1 (navigation in the virtual model without the highlighted elements) and in experiment 2 (navigation in the virtual model with the highlighted urban image elements), we are able to draw conclusions regarding the influence of the highlighted elements on the participants’ performance during the wayfinding tasks with respect to strategies and difficulties.

Although each participant used different methods for arriving at the destination objects, we can classify these techniques into two basic wayfinding strategies, common in human navigation in real environments: path-integration and position-based strategies (Loomis et al., 1999; Peruch et al., 2000). Navigation by a position-based, or piloting, strategy relies on recognizable landscape elements: navigators use landmarks as cues for information on their position and on how to arrive at a desired location during the flying-based navigation mode. Path-integration, or dead reckoning, means the continued integration of linear and angular components of movement, allowing estimation of direction and distance. In other words, during flying-based navigation to locations that are beyond the visual field, navigators will coincidentally see single reference objects and from such an object calculate the position of the target location.
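
A conceptual sketch of the path-integration idea (illustrative values only, not part of the study): the position estimate is updated by integrating successive linear (distance) and angular (turn) components of movement:

```python
import math

def integrate_path(start_xy, start_heading_deg, moves):
    """moves: list of (turn_deg, distance_m) steps; returns the dead-reckoned position."""
    x, y = start_xy
    heading = math.radians(start_heading_deg)
    for turn_deg, distance_m in moves:
        heading += math.radians(turn_deg)    # integrate the angular component
        x += distance_m * math.cos(heading)  # integrate the linear component
        y += distance_m * math.sin(heading)
    return x, y

# e.g. fly 500 m straight ahead, turn 90 degrees left, fly another 300 m
print(integrate_path((0.0, 0.0), 0.0, [(0, 500), (90, 300)]))  # approximately (500, 300)
```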

In general terms, in experiment 2, using the urban image design model that provided a network of locations in the observed simulated environment, participants tended to use the position-based strategy. In experiment 1, however, where the model did not include the urban image elements, participants seemed to rely mainly on the path-integration strategy.

The dominance of the path-integration strategy in experiment 1 is illustrated by the fact that many participants relied on global reference elements of the city. Three main linear elements were found to be helpful while navigating: the coastline (to the west), the Ayalon Highway (to the east) and the Hayarkon River (to the north). The coastline was found to be a dominant reference line in the first task, while the Ayalon Highway dominated in the second task. This can be verified both by the verbal reports (Table 1) and by the trajectory patterns of the participants, which tended to run close and parallel to these reference lines (Figure 5). As can be learned from the verbal reports, these elements fulfil three main functions in the path-integration strategy: as anchors indicating the general direction towards the desired location, until another ‘strong’ element is found; as cues for relating the navigator’s position to the frame of reference (north, west, ...); and as transitional cues that provide a basis for interpreting mobility and relative scale during navigation.

However, since adopting such a strategy requires a high level of configurational knowledge, i.e. a level of spatial knowledge that incorporates information concerning directions and the relative positions of places (Golledge, 1992; Kitchin and Blades, 2002, pp. 58–67), it is often not sufficient for all the participants to fulfill the wayfinding tasks. Therefore, it seems that the participants need recognizable elements to find their way by using the position-based strategy, mainly in the later stages of the tasks when they need to ‘leave’ these anchors. As a result, in cases where recognizable elements are not available, the participants, particularly those with a low level of configurational knowledge who rely mainly on procedural knowledge, experience problems that result in poor wayfinding performance. Analysis of the relation between the level of spatial knowledge and the wayfinding performance in the model without the urban image elements proves this.

Figure 3. The two tasks given to the participants and the task locations. Arrows represent the shortest flying path from each location to the next



Figure 4. The initial viewing point of (a) the clock tower in Jaffa and (b) the central bus station


Table 1. Verbal documentation of the function of the geographical objects during wayfinding tasks

Positional location
  Experiment 1:
    — ‘I know the general direction from Milano Square to the Israel Museum … I am not following any specific streets, just flying in a certain general direction.’
    — ‘I’m trying to find some landmark that I know for sure … like a street — that will get me fully oriented ….’
    — ‘Here’s the David Intercontinental Hotel — I’ll take a right there.’
    — ‘Judging by these towers, this must be Pinkas Street.’
  Experiment 2:
    — ‘Here is Rothschild Blvd … so it should be somewhere in this area.’
    — ‘I can see the label of the Shalom building. I’ll take a left turn here and this will take me to the area I want ….’
    — ‘O.K! Here is the label ‘Azrieli’ — so I’ll turn on Kaplan St and continue straight till I reach Ibn Gvirol St.’

Frame of reference
  Experiment 1:
    — ‘I don’t want to get too far away from the beach, because otherwise I wouldn’t know where west is!’
    — ‘If that’s the beach, then that’s west and this is north.’
    — ‘Once I know where Ayalon Highway is, I’ll know which way is north.’
  Experiment 2:
    — ‘Here is the label of Ayalon Highway. I’ll turn left so I’ll be heading north.’
    — ‘If this is Weizman St and this is Dizingoff St, so this is north.’
    — ‘O.K., let’s leave the coastline and head east.’
    — ‘I recognize Rabin Square and the city hall. So it should be much more to the east.’

Transitional
  Experiment 1:
    — ‘I can identify the Ayalon Highway and I’m going parallel to it rather than over it ….’
    — ‘I want to fly over Ayalon Highway till I identify Azrieli mall.’
    — ‘I want to drive north to the Opera Building and then veer to the east … this is what I do when I drive there.’
  Experiment 2:
    — ‘I’m flying in a general direction following main streets that I don’t really recognize ….’
    — ‘I can see the label of Alenbi St … you know what? I will follow this street.’
    — ‘I feel like I’m driving a car ... I’ll just follow this road.’


To reach this conclusion, we assume that the accuracy of the sketch maps (measured according to the distances between the locations drawn on the sketch maps and their real locations) serves as an indicator for estimating the quality of spatial knowledge, while the length of the flying path serves as an indicator of wayfinding performance, as a longer path may indicate that the participant did not know the target’s exact location. A significant positive correlation was found between the distortions in the sketch maps and the length of the flying path (a Pearson correlation of 0.615, p = 0.033). This correlation shows that participants who had a more accurate representation of the city were able to navigate more efficiently in the virtual model.
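
A minimal sketch of this test (the participant values below are placeholders, not the study’s data) computes the Pearson correlation between sketch-map error and flying-path length:

```python
from scipy.stats import pearsonr

# Placeholder values, one per participant in experiment 1 (n = 12).
sketch_map_error_m   = [210, 450, 380, 900, 150, 620, 300, 710, 520, 260, 840, 430]
flying_path_length_m = [24000, 61000, 48000, 95000, 18000, 70000, 33000, 82000,
                        55000, 29000, 99000, 51000]

r, p = pearsonr(sketch_map_error_m, flying_path_length_m)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # the paper reports r = 0.615, p = 0.033
```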

This fact can be related to the difficulty participants had in recognizing familiar objects from a bird’s-eye view, a view that characterizes a flying-based navigation mode, making it hard for them to evaluate the spatial relations needed for orientation. In addition, it is clear from the experiment that the users not only are unaccustomed to seeing the shape of city objects from above (without their familiar 3D form), but they also have difficulty getting used to their proportions (Table 2). Because of this, when an object or an area has a familiar shape, a city square for example, it is extremely difficult for the user to identify it without seeing its surroundings (or a label with its name). For example, many participants experienced problems distinguishing between Hamedina Square and Dizingoff Square (well-known squares in Tel Aviv) despite the fact that the ratio between these two areas is about 2:1 (approximately 850 sqm and 450 sqm, respectively).

Figure 5. The influence of urban image design on the track patterns of the participants: (a) without design, (b) with design; (I) the coastline as reference line, (II) Ayalon Highway as reference line


Participants also experienced problems estimating the speed of movement, which caused them to misjudge the location of objects, as they thought they had already gone past them, or had not yet approached them.

Wayfinding strategies changed significantly in experiment 2. Adding the imageable objects to the model provided a more legible and recognizable environment. This environment provided the conditions for adopting a position-based strategy, i.e. the labeled objects served as positional information for a location, or a network of locations, for evaluating the relative distances between locations, as we may expect from a flying-based navigation mode. Therefore, in addition to, and in several cases even instead of, relying on dominant spatial features that are easily identified from a bird’s-eye view, the participants in experiment 2 continuously updated their position using the highlighted elements, usually elements with which they were familiar from their everyday experience in the city.

The transformation from the path-integration strategy to the position-based strategy in the second experiment can be verified by comparing the documentation of the two experiments: the verbal reports (Table 1) and the trajectory patterns of the participants (Figure 5). The verbal terminology used in each strategy is also different. While in the first experiment (without the urban image design) the terminology used was based mainly on descriptions of the reference points/lines, the participants in experiment 2, who used the position-based strategy, relied mainly on the relations between the observed elements. As illustrated in Table 1, the urban image elements function as aids for updating or ‘calibrating’ the users’ position (e.g. ‘If I’m at location X, I can go on from here towards the target location’), as well as for ‘confirmation’ (i.e. ‘This element should be X […] yes, here is the label telling me it is X’).

The trajectory patterns also illustrate this transformation: when the urban image labels were available to the users, they felt confident enough to ‘leave’ the reference lines much sooner than participants in the first experiment, where no highlighted elements were available (see Figure 5). Thus, the availability of recognizable urban features enables a continuous update of the current position during navigation, with the identified locations functioning as a network of locations or as positional information for confirming location.

The availability of the urban image elements improved wayfinding performance because with them the participants had fewer difficulties in recognizing familiar objects and in evaluating the spatial relations between them (which is needed for orientation). When comparing the two experiments, wayfinding performance was significantly better in the experiment where the design of the virtual model used urban image elements. The total length of the flying paths in experiment 1 was 52,043 m (std. 30,774 m), while in experiment 2 it was 15,136 m (std. 9,467 m). A t-test confirmed this difference (t = 3.976, df = 22, p = 0.001). However, an examination of the relation between the level of spatial knowledge and wayfinding performance in experiment 2 shows an uncorrelated relation (a Pearson correlation of 0.043, p = 0.895). Notice that this relation was significantly correlated in experiment 1. This means that the urban image design of the virtual city improved the performance of all participants, especially those with a low level of spatial knowledge. This finding is an additional indication that those with a low level of configurational knowledge depend heavily on the area in which they are navigating being covered with recognizable geographical objects.
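
The reported comparison corresponds to an independent two-sample t-test on the per-participant path lengths; the sketch below uses placeholder values, not the study’s data:

```python
from scipy.stats import ttest_ind

# Placeholder flying-path lengths (metres), one per participant in each experiment (n = 12 each).
paths_exp1_m = [52000, 31000, 88000, 47000, 63000, 25000, 95000, 40000, 58000, 36000, 70000, 20000]
paths_exp2_m = [15000, 12000, 22000,  9000, 17000, 11000, 25000, 14000, 19000,  8000, 16000, 13000]

t, p = ttest_ind(paths_exp1_m, paths_exp2_m)
df = len(paths_exp1_m) + len(paths_exp2_m) - 2  # df = 22, as in the paper
print(f"t = {t:.2f}, df = {df}, p = {p:.4f}")
```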

To summarize, the urban image design enables a more intensive use of the position-based strategy, in addition to, and in several cases even instead of, the path-integration wayfinding strategy, which was found to be the dominant strategy when the model design did not include the highlighted elements.

Based on these findings, we can conclude that Lynch’s urban image theory can be applied in the design of virtual cities due to its capability to enhance wayfinding performance. In suggesting principles for such a design, in the next section we present a comparison between the real city and the virtual city concerning the imageability of the urban elements.

4. THE RELATION BETWEEN THE REAL CITY URBAN IMAGE AND THE VIRTUAL CITY IMAGE: IMPLICATIONS FOR DESIGN

Figure 6a presents the imageable elements of the urban image of the real city of Tel Aviv, namely, the objects that appeared in the individual sketch maps. Figure 6b presents the urban image of virtual Tel Aviv, established by gathering the objects verbally mentioned (while looking for an object or viewing one) by the participants in experiment 1, who performed the wayfinding tasks in the model without the highlighted urban image elements.

Table 2. Verbal documentation of difficulties during wayfinding tasks

Lack of orientation
  Experiment 1: — ‘Which way is north?? If I can find the north, it’ll be much easier.’
  Experiment 2: — ‘I feel that I’m getting lost!’

Identification
  Experiment 1:
    — ‘Is this what Tel Aviv looks like from above???’
    — ‘What is this big building?’
    — ‘I decided to follow Ayalon Highway, as it’s a major road, and it’s much easier to identify it on the air photography.’
    — ‘I understood the area I thought is the square is actually Habima Theater.’
  Experiment 2:
    — ‘I couldn’t identify Dizingoff square without the label!’
    — ‘I know the cinema should be here, but I can’t identify it!’
    — ‘It’s difficult when it’s not three-dimensional!’
    — ‘In Tel-Aviv all the roads look the same ... this is why I’m looking for the labels.’
    — ‘I can see a junction, but which one is it?’

Space-time scale
    — ‘This is Hamedina Square? ... No, no … this is Dizengoff Square …’
    — ‘But wait a minute! Which square is this??’
    — ‘It takes me time to get used to the proportions ... the city seems so big suddenly!’



Figure 6. (a) The urban image drawn by Tel Aviv residents


As one can gather from the visual comparison of the two images, both are essentially similar. A positive correlation between the appearance frequency of the urban-image elements in the cognitive maps and the frequency with which these elements were mentioned in the wayfinding tasks verified this conclusion (a Pearson correlation of 0.75, p = 0.000).

Figure 6. (b) The urban image of the objects mentioned during wayfinding tasks


The elements that are characterized by a relatively high imageability during the wayfinding tasks are mainly those with high physical identification; that is, they can be easily identified from a bird’s-eye view.

The elements which were found to be very useful during navigation include large continuous elements, in particular the coastline, the Ayalon Highway and the Hayarkon River, as well as elements with distinctive landscapes and boundaries, such as Hayarkon Park. Other elements that are easy to identify from a bird’s-eye view are those with a unique morphology, especially city squares, which stand out in the image that emerges during navigation. Hamedina Square, Rabin Square and Dizingoff Square were found to be the main nodes of that image. The elements that are characterized by a lower imageability in the virtual city are those with a low possibility of physical identification — small-size landmarks that have a unique morphology from a side view (rather than from a bird’s-eye view) and districts such as the Neveh Zedek neighborhood, which has no recognizable boundaries.

As a result of these differences, the image that emerges during navigation in the virtual city is more of a common one — that is, one formed by elements mentioned by most of the participants, a fact to which the high frequency of the appearance of elements testifies (Figure 6). In addition, as illustrated in Figure 7, the paths and nodes are relatively more imageable during navigation from a bird’s-eye view, while the landmarks, districts and edges are relatively less imageable than in the real city image.

The close similarity between the real and the virtual urban images strengthens the hypothesis that Lynch’s urban image theory enables users to transfer their spatial representation of the real city to its virtual counterpart. Moreover, it may also confirm that a preconceived real urban image can be integrated into, or participate in, the users’ representation that emerges during navigation, i.e. they call on information they have from the real urban image to help them navigate in the virtual environment (see Figure 8). Thus, the integration of the real city’s urban image into the virtual model enables users to identify imageable elements which are seen in the real city, even if they have a low physical identification level from the bird’s-eye view, and also to use elements that are particularly imageable from the bird’s-eye view during the flying-based navigation mode.

Comparing the real and the virtual city images provides information as to which elements should be highlighted. As mentioned in the introduction, this selection is one of the decisions that has to be made when creating virtual cities with the aim of enhancing wayfinding, and the economic aspect must also be taken into account. Selecting only part of the buildings to be constructed is advisable, especially when dealing with large cities with an enormous number of objects. Therefore, the distinction between three groups of objects — those imageable particularly in the real city, those imageable particularly in the virtual city and those that are common to both — could be used for selecting the most appropriate objects for emphasis in order to enhance wayfinding performance. One possible use of this distinction is to give priority to the group of particularly imageable elements of the real city, which will help the virtual city’s user identify them. Another possible use is to focus on the integration between these three groups, where the common imageable elements can serve as a link between the particular groups.

Once the selection of the objects has been made, a generalization process can be implemented for selecting which geographic objects will be presented when a new scale or perspective emerges during flying-based navigation, i.e. the level of detail (LOD) in the design of a virtual model. The generalization process can be constructed taking into account the imageability of the urban elements as a source of knowledge that can be applied by generalization methods, especially those developed for GIS and 3D visualization, which are mainly driven by communication requirements such as legibility, graphical clarity and understandability (Muller et al., 1995; Frery et al., 2004).


Figure 7. Classification of the imageable elements of real and virtual Tel Aviv, according to Lynch’s element types


Figure 8. Distinction and integration between real and virtual city representations


One of the basic assumptions of Lynch’s urban image theory is that the more imageable the element, the more useful it is in wayfinding in larger-scale areas of the city, while the less imageable elements are used in local-scale areas of the city (1960, pp. 86–87). Working on this assumption, the scale at which the objects should be displayed can be determined according to their degree of imageability, which is represented by their frequency in the aggregative map.
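
One speculative way to encode such a rule (the frequency thresholds and height cut-offs below are assumptions for illustration, not values proposed by the paper) is to let an element’s sketch-map frequency determine the viewing heights at which it is displayed:

```python
def visible_at(element_frequency, viewing_height_m,
               high_freq=20, mid_freq=10,
               city_scale_height_m=3000, local_scale_height_m=1000):
    """Decide whether an element should be shown at the current viewing height."""
    if element_frequency >= high_freq:        # highly imageable: shown at every height
        return True
    if element_frequency >= mid_freq:         # moderately imageable: city scale and below
        return viewing_height_m <= city_scale_height_m
    return viewing_height_m <= local_scale_height_m  # weakly imageable: local scale only

# e.g. at a viewing height of 2500 m
print(visible_at(25, 2500), visible_at(12, 2500), visible_at(4, 2500))  # True True False
```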

As Bourdakis (1998) points out, it is essential to refer to the context in a flying-based navigation mode over an urban environment, due to its varied density and complexity. For that purpose, the designer may be able to refer to the interrelations between the urban elements and their classification into Lynch’s element types in the representation of contextual relations that are suggested for cartographic generalization, such as being part of a significant group, being in a particular area and being in relation with ‘same level’ surrounding objects (Mustiere and Moulin, 2002).

5. CONCLUSIONS AND FURTHER STUDY

The impetus behind the study presented in this paper was the need for virtual city design to deal with the wayfinding difficulties experienced by users, as well as to locate a framework by which the designer can decide which urban objects should be highlighted over the city orthophoto. The assumption of the study was that Lynch’s urban image theory can serve as an appropriate framework due to its potential for facilitating the transfer of images and spatial knowledge from the real city to its virtual representation.

Experiments conducted using virtual Tel Aviv proved that a design based on urban image elements improves the performance of all participants, especially those with a low level of spatial knowledge. Moreover, the urban image design facilitates more intensive use of a position-based strategy, in addition to, or even instead of, the path-integration wayfinding strategy, which was found to be the dominant strategy when the virtual model did not include the highlighted urban image elements. Furthermore, a vast similarity was found between the imageable elements mentioned during the wayfinding tasks in the virtual model (which did not include the highlighted elements) and the imageable elements of the real city that were revealed by the drawn sketch maps.

Based on these findings, this paper proposes that designers of virtual cities use the similarities and differences between the imageable elements of the real city and of the virtual city as a source of generalization knowledge. This comparison provides a useful tool for the selection of elements to be highlighted, and for constructing a generalization process with respect to scale and context. To this end, we are currently working at the ESLab of Tel Aviv University on building a wayfinding support system for the virtual model of Tel Aviv, based on the presented methodology. Accordingly, 3D models of selected buildings are being constructed and inserted into the virtual model. The generalization process in the developed system is based on the residents’ urban image of the real city, as well as on the imageable elements of the virtual one.

REFERENCES

Al-Kodmany, K. (2001). ‘Supporting imageability on the World Wide Web: Lynch’s five elements of the city in community planning’, Environment and Planning B: Planning and Design, 28, 805–32.

Bourdakis, V. (1998). ‘Navigation in Large VR Urban Models’, in Virtual Worlds, ed. by Heudin, J. C., pp. 345–56, Springer-Verlag, Heidelberg, Berlin.

Darken, R. and Sibert, J. (1993). ‘A toolset for navigation in virtual environments’, Proceedings of the ACM Symposium on User Interface Software and Technology, Atlanta, GA. Available at http://www.movesinstitute.org/darken/publications/toolset.pdf.

Darken, R. and Sibert, J. (1996). ‘Navigating large virtual spaces’, International Journal of Human-Computer Interaction, 8, 49–71.

Frery, A. C., da Silva, C. K. R., Costa, E. B. and Almeida, E. S. (2004). ‘Cartographic Generalization Virtual Reality’, Proceedings of ISPRS Congress, 20, 200–04.

Fisher, F. and Unwin, D. B. (2001). Virtual Reality in Geography, Taylor and Francis, London.

Golledge, R. (1976). ‘Methods and Methodological Issues in Environmental Cognition Research’, in Environmental Knowing, ed. by Moore, G. T. and Golledge, R. G., pp. 300–13, Hutchinson and Ross, Stroudsburg.

Golledge, R. (1992). ‘Place Recognition and Wayfinding: Making Sense of Space’, Geoforum, 23, 199–214.

Harris, L. R. and Jenkin, M. (2000). ‘Visual and non-visual cues in the perception of linear self motion’, Experimental Brain Research, 135, 12–21.

Jansen, G., Schade, M., Katz, S. and Herrmann, T. (2001). ‘Strategies for Detour Finding in a Virtual Maze: the Role of the Visual Perspective’, Journal of Environmental Psychology, 21, 149–63.

Jiang, B., Huang, B. and Vasek, V. (2003). ‘Geovisualisation for Planning Support Systems’, in Planning Support Systems in Practice, ed. by Geertman, S. and Stillwell, J., pp. 177–91, Springer, Berlin.

Kilpelainen, T. (2000). ‘Knowledge Acquisition for Generalization Rules’, Cartography and Geographic Information Science, 27, 41–50.

Kitchin, R. and Blades, M. (2002). The Cognition of Geographic Space, I. B. Tauris, London.

Laurini, R. (2001). Information Systems for Urban Planning: A Hypermedia Cooperative Approach, Taylor and Francis, London.

Loomis, M. (1999). ‘Dead reckoning (path integration), landmarks, and representation of space in a comparative perspective’, in Wayfinding Behavior, ed. by Golledge, R. G., pp. 197–228, Johns Hopkins University Press, Baltimore.

Lynch, K. (1960). The Image of the City, MIT Press, Cambridge.

Muller, J. C., Lagrange, J. P. and Weibel, R. (1995). GIS and Generalization: Methodology and Practice, Taylor and Francis, London.

Murray, C. D., Bowers, J. M., West, A. J., Pettifer, S. and Gibson, S. (2000). ‘Navigation, Wayfinding and Place Experience within a Virtual City’, Presence, 9, 435–47.

Mustiere, S. and Moulin, B. (2002). ‘What is Spatial Context in Cartographic Generalization?’, Proceedings of the ISPRS Technical Commission IV Symposium on Geospatial Theory, Processing and Applications, 34.

Omer, I., Goldblatt, R., Talmor, K. and Roz, A. (forthcoming). ‘Enhancing the legibility of virtual cities by means of residents’ urban image: a wayfinding support system’, in Complex Artificial Environments, ed. by Portugali, J., Springer, Heidelberg.

Peruch, P., Gaunet, F., Thinus-Blanc, C. and Loomis, J. (2000). ‘Understanding and Learning Virtual Spaces’, in Cognitive Mapping, ed. by Kitchin, R. and Freundschuh, S., pp. 108–124, Routledge, London.

Ruddle, R. A., Payne, S. J. and Jones, D. M. (1997). ‘Navigating Buildings in “Desk-Top” Virtual Environments: Experimental Investigations Using Extended Navigational Experience’, Journal of Experimental Psychology, 3, 143–59.

Slater, M., Usoh, M. and Steed, A. (1994). ‘Depth of Presence in Virtual Environments’, Presence: Teleoperators and Virtual Environments, 3, 130–44.

Slocum, T. A., Jiang, B., Koussoulakou, A., Montello, D. R., Fuhrmann, S. and Hedley, N. R. (2001). ‘Cognitive and Usability Issues in Geovisualization’, Cartography and Geographic Information Science, 28, 61–75.

Vinson, N. G. (1999). ‘Design Guidelines for Landmarks to Support Navigation in Virtual Environments’, Proceedings of CHI ’99.

Weibel, R. (1995). ‘Three Essential Building Blocks for Automated Generalisation’, in GIS and Generalization: Methodology and Practice, ed. by Muller, J. C., Lagrange, J. P. and Weibel, R., pp. 56–69, Taylor and Francis, London.

Whitelock, D., Romano, D., Jelfs, A. and Brna, P. (2000). ‘Perfect Presence: What does this mean for the design of virtual learning environments?’, Education and Information Technologies, 5, 277–89.
