Transcript


Theoretical Issues in Ergonomics Science. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ttie20

Toward methods for supporting the anticipation-feedback loop in user interface design. Marcus Plach & Dieter Wallach. Published online: 26 Nov 2010.

To cite this article: Marcus Plach & Dieter Wallach (2002) Toward methods for supporting the anticipation-feedback loop in user interface design, Theoretical Issues in Ergonomics Science, 3:1, 26-46, DOI: 10.1080/14639220110110333

To link to this article: http://dx.doi.org/10.1080/14639220110110333



Toward methods for supporting the anticipation-feedback loop in user interface design

MARCUS PLACH†* and DIETER WALLACH‡

† Department of Psychology (FR 5.3), Saarland University, PO Box 151150, 66041 Saarbrücken, Germany

‡ Department of Digital Media, University of Applied Sciences, Kaiserslautern, Germany

Keywords: Usability; Design problem solving; Cognitive walkthrough; Guidelines.

Recent research has compared different usability evaluation methods with respect to their effectiveness and efficiency. The paper analyses the impact of different usability evaluation methods on the design problem-solving processes of individual designers and evaluators. It is proposed that usability evaluation methods have to be divided into two categories according to their fundamentally different ways of supporting design for usability: (1) guideline-based methods and (2) methods based on the mental generation of scenarios and the anticipation of user goals. We present data from an experimental study showing that these two types of methods entail differences in perspective-taking processes. Furthermore, the results indicate that the methods have a differential impact on the general problem-solving strategy, i.e. whether to use a top-down, breadth-first or a depth-first approach. Possible implications for the development of techniques that support the design of usable systems are discussed.

1. Introduction

It is now broadly recognized by developers and human factors specialists that 'usability is deeply affected by every decision in design and development' (Dumas and Redish 1993: 8). Hence, it is not surprising that the last two decades witnessed the emergence of user-centred engineering approaches to master the process of 'building' usability into interactive systems. Such approaches range from general principles (e.g. Gould and Lewis 1985), guidelines (e.g. Mayhew 1992), and comprehensive methodologies (e.g. Good 1988, Hackos and Redish 1998) to specialized methods and techniques such as task analysis (e.g. Diaper 1989) or usability evaluation methods (e.g. Molich and Nielsen 1990, Nielsen 1994, Polson et al. 1992). To emphasize the close correspondence to the process of software engineering, the conglomerate of these approaches and methods has been coined usability engineering (Whiteside 1986, Good 1988). Notwithstanding the success of the usability engineering approach with its focus on the development and project-level perspective, the problem-solving process of the individual designer as the core component of any development process has in part been neglected. (The terms 'designer' and 'developer' are used interchangeably in this paper.)

There seems, however, a clear need to support designers more effectively on an individual level in their efforts to design usable systems. A necessary prerequisite for the development of effective methods is an understanding of the cognitive processes

THEOR. ISSUES IN ERGON. SCI., 2002, VOL. 3, NO. 1, 26-46


* Author for correspondence. e-mail: [email protected]


that underlie design problem solving and its relation to usability concerns (Carroll 1996). In this paper, we propose a framework to investigate the role that different usability evaluation methods play in user interface design problem solving.

Methods generally differ with regard to a couple of features, such as their degree of formality, their effectiveness and their efficiency in finding or avoiding usability problems (Gray and Salzman 1998). All of these aspects surely have to be taken care of in a general effort to provide methods and technologies for supporting user-centred design. However, the development of such methods also needs to be grounded on empirical findings about, for example, how compatible different approaches are with typical problem-solving processes of designers, about the way they influence design cognition and problem solution strategies, about how well they help the designer in taking the role of the user of the to-be-designed interface and, last but not least, how much additional cognitive load they impose on designers. In sum, the cognitive support function of different methods needs to be carefully considered. Consequently, we take the stance of Cognitive Ergonomics in this paper and conceive of the designer or evaluator as the user of methods which are supposed to help him or her to improve the usability of a product or design.

When tackling the question of what effects different usability evaluation methods have on design cognition, and which of these methods might be better suited from the perspective of the designer or evaluator, it should not be overlooked that there are some general constraints which are crucial for the acceptance of any method. We assume that cognitive support methods have to fulfil the following general requirements: (1) the method should be applicable quickly, that is, it should not impose much additional cognitive load; (2) the method provided should be easily integrated into naturally occurring problem-solving processes (we do not, however, go as far as May and Barnard 1995, who claim that the method should not require the adoption of any particular structured method or design notation); (3) the method must not only deal with partially specified designs, but also prevent designers from early commitments which would otherwise become too deeply embedded to be rejected later on (May and Barnard 1995).

Within the limits defined by these general constraints, this paper provides a theoretical and empirical analysis of the cognitive support function of different methods. The analysis is structured as follows. Section two will span the theoretical framework of the approach presented. We will argue that although user interface design is a rather typical design problem, certain characteristics of the problem structure seem to dominate user interface design more than they do other design domains. On the basis of these characteristics, hypotheses about reasoning processes and the compatibility of different methods for supporting design will be derived. The third section reviews results on mental simulation in design. Section four will then discuss the effectiveness and cognitive support properties of different methods of usability evaluation. It will be argued that approaches based on simulating usage on the basis of scenarios, as, for example, Cognitive Walkthrough (Polson et al. 1992, Wharton et al. 1994), which alert the designer to explicate probable user cognition in the design process, help designers to model user-system interaction more deeply compared with guideline-oriented methods. In the fifth section, we will then present an empirical study that shows that different methods do have a measurable impact on perspective taking and usage modelling. Furthermore, the data indicate that framing usability issues for the designer according to certain evaluation methods will support them in adopting a more systematic top-down, breadth-first problem-


solving approach which was advocated in many prescriptive accounts of design activity (for an overview, see Ball and Ormerod 1994). The last section finally presents a discussion of the experimental findings and an outlook.

2. Features of design problem solving

Design is a ubiquitous cognitive task. The goal of design is to improve some human condition through the invention or change of artefacts. Design can thus be conceived of as the process of deriving functions to be achieved from a set of vaguely formulated requirements and producing descriptions of artefacts capable of generating these functions (Gero 1990). Many professional activities in modern, technology-bound societies are unimaginable without extensive design activity.

From the point of view of Cognitive Ergonomics, design is a demanding and ill-structured problem-solving task. Archer (1969) emphasized the characteristic structure of design problems as a cyclic process of two intertwined cognitive components: on the one hand there is a logical, analytical process responsible for the analysis of requirements, functions and evaluation, and on the other hand design affords a creative, synthetic component responsible for solution development and integration of modules. Design not only involves the integration of these complementary processes, but also the integration of diverse knowledge domains comprising, for example, technical opportunities, user behaviour and goals, typical usage situations and, in the case of a user interface designer, human information processing (Gero 1990, 1995, Goel and Pirolli 1992). Whereas on an organizational level the need to manage these demands has led to process models for user-centred system design (Gould and Lewis 1985, Whiteside et al. 1987), systematic investigations of the mental activities involved in design tasks on the individual level have been sparse. Nevertheless, such analysis is regarded as an important research issue in usability research and as a crucial prerequisite for the development of supporting tools that are better suited to the needs of people involved in user interface design (Dumas and Redish 1993, May and Barnard 1995, Newman 1998).

As impressively reflected in a recent book edited by Cross et al. (1996), there exists a growing interest in cognitive design processes in related disciplines like cognitive science (e.g. Browne and Chandrasekaran 1989, Goel and Pirolli 1992), software design (Kant and Newell 1985, Guindon 1990, Visser 1990), mechanical engineering design (Ullman et al. 1988), architecture (Akin 1986, Chan 1990) and also engineering psychology (e.g. Ball et al. 1994). Researchers from Cognitive Science and Artificial Intelligence have started to develop software-based tools which aim at supporting the cognitive processes of designers directly (e.g. Gero 1990, Carroll and Rosson 1992, May and Barnard 1995). Although the need is realized in the usability engineering context, such insight does not yet seem to have stimulated empirical studies of system usability as a constraint in design problem solving.

As a guiding theoretical foundation of the empirical investigation presented in this paper, our analysis will be based on the design space hypothesis put forward by Goel and Pirolli (1992). This hypothesis, which was empirically supported by studies of the authors, states that 'problem spaces exhibit major invariants across design problem-solving situations and major variants across design and non-design problem-solving situations' (Goel and Pirolli 1992: 399). The idea behind the notion of a problem space is that the structure of information processing is formed by the features of the information processing system and the features of the task environment. If we knew these feature sets, we could make predictions about properties of the


information processing in a certain problem-solving session. More important than the rather general characteristics of the human information processing system assumed by Goel and Pirolli (1992), such as the limited short-term memory capacity (Miller 1956) or the sequentiality of symbolic information processing (Newell and Simon 1976), are the characteristics of the task environment. Taking a somewhat more abstract view than the typical characterization of a problem space in terms of states and operators (Newell and Simon 1972), the authors identified a set of features that are common to design problem environments in general. Table 1 summarizes the twelve features discussed by Goel and Pirolli (1992: 401-402).

Although common to other design problems as well, the following two of the dozen features in table 1 seem to be dominant in user interface design. The first one (input/output feature) states that the input to a design problem is 'information about people who will use the artefact, the goals they want to satisfy, and the behaviour believed to lead to goal satisfaction. The output is the artefact specification' (Goel and Pirolli 1992: 402). Functional concepts mediate between input and output. We assume that the input/output feature plays a dominant role in user interface design, because user goals and assumptions are much more salient in interface design in comparison to design problems where the user is more remote (such as when facing, for example, electrical engineering problems). The second important characteristic of user interface design in the present context is the feedback feature. This feature rests on the fact that 'there is no genuine feedback from the world during the problem-solving session. It must be simulated or generated by the designer during the problem-solving session' (Goel and Pirolli 1992: 402). Along the same line, Newman (1998) argued that simulation is essential in design and evaluation, because the exact real-world conditions are not available during design. This is even more true for user interface design because, with the end user being the final valid source of feedback, it is usually harder to define intrinsic measures in advance which can be used as representative feedback.


Table 1. Overview of features of design problems according to Goel and Pirolli (1992).

Distribution of information: lack of information in start state, goal state, and transformation function.
Nature of constraints: constraints are either nomological or social, political, legal, economical, etc.; nomological constraints underdetermine design solutions.
Size and complexity: problems typically span time scales of days, months or even years.
Component parts: the line of decomposition of the problem is dictated by the practice of the designer.
Interconnectivity of parts: parts are not logically but contingently interconnected.
Right and wrong answers: design problems do not have right or wrong answers, but better and worse ones.
Input/output: see text.
Feedback loop: see text.
Costs of errors: the penalty for being wrong can be high.
Independent functioning of artefact: the artefact is required to function independently of the designer.
Distinction between specification and delivery: specification and construction or delivery are distinct.
Temporal separation: specification precedes delivery.


From the features just described, it can be concluded that user interface designers will make extensive use of mental simulation of feedback. More specifically, we assume that designers will try to take the perspective of the user and anticipate the user's goals, assumptions, understanding of the information, and potential behaviour at any point of the interaction sequence.

Mechanisms responsible for taking the perspective of the user seem to be indispensable for the goal analysis and the evaluation of preliminary solutions. Newman (1998) postulates that the usability of the system generally depends to a great extent on the skill of the team and the individual designer in taking the perspective of the user. Whereas well-known formal methods for user modelling do exist, e.g. GOMS (Card et al. 1983), TAG (Payne and Green 1986) or CTA (Barnard 1987, 1991), we are concerned in this paper with the informal and heuristic way designers deal with the perspective-taking aspect of interface design. For this part of mental activity we adopt the term anticipation-feedback loop (AFL), a term which was originally introduced in the context of intelligent user interfaces by Wahlster (1991).

3. Mental simulation in design problem solving

Figure 1 illustrates the role of the AFL in user interface design. In general, there are two different ways for the developer to gather feedback.

(1) Direct feedback: here, feedback is received either through face-to-face interaction with the user or is mediated by HCI experts.

(2) Simulated feedback: within the actual phase of solution development, the designer is forced to make dozens of decisions even for small problems. Although there are advances in the automated simulation of user-system interaction (Wallach and Plach 1999), designers still have to rely on the mental simulation of (often rather idiosyncratic) models of user-system interaction.

Before we review central results from research on simulation in design, it may be helpful to look at mental simulation on a more generic level. Mental simulation, more precisely qualitative mental simulation (De Kleer and Brown 1983), has


[Figure 1 shows the designer, linked by mental simulation to a design representation or prototype within the design process, and linked to a participating user and a usability expert through usability testing and formal evaluation.]

Figure 1. Anticipation-feedback loop (dashed lines), formal feedback loop (solid lines).


been reported in such diverse areas as, for example, mental rotation of letters (Shepard and Cooper 1982), simulation of causal models (Kahneman and Tversky 1982), process control (Wallach 1998) and software design (Kant and Newell 1984, Guindon 1990). Stevens and Collins (1980: 182) succinctly characterized the function of mental simulation and emphasized its imperfect and dynamic aspects: 'Simulation models . . . make it possible to represent certain properties of the world. The properties may be both incomplete and incorrect, but by knowing how they interact, it is possible to "run" the model under different conditions to examine the consequences. Thus a simulation model is like a motion picture that preserves selected properties of the world' (Dutke 1994). (Some researchers have put forward critical comments concerning the question of whether simulation represents a distinct type of reasoning (e.g. Rips 1986).)

Empirical investigations have demonstrated that simulations also play an important role in design problem solving. The mental realization of interaction sequences within simple scenarios seems to be an essential and broadly used mode of reasoning in the design of interactive systems. It was even proposed as a prescriptive approach in object-oriented design (Carroll et al. 1994). In the domain of software design, Guindon (1990) found that (scenario) simulations appear throughout the whole design session. Analysis of verbal think-aloud protocols in Guindon's study suggests that simulations help to understand given and inferred requirements, as well as to discover new requirements and to develop solutions. Since participants in her study had to solve a rather technical design problem (design of an n-floor, m-lift system), with the user interface just being one minor aspect of the whole problem, it may be speculated that simulation of user models is even more significant in the design of user interfaces.

Recent methodological debates on usability evaluation methods have also emphasized the significance and inevitability of mental simulations for user interface design. Newman (1998) noted that evaluators tend to impersonate the user. Hasdogan (1996) termed this process self-simulation. In a survey, he also presented empirical data that not only supports this view, but also indicates that designers are aware of this fact.

It is obvious that the high prevalence of self-simulation bears an incalculable risk: 'Right or wrong, designers and evaluators of these systems are prone to adopt the attitude that "since I could use it, I'm competent to design it and test it too"' (Pheasant 1991, cited from Newman 1998: 317). The extent and precision to which simulation in the AFL takes place may differ substantially, both within and between designers. For example, a user model may be superficial if it just builds upon unquestioned everyday assumptions about what a typical user might know. It may be thorough if different possible user goals and assumptions at a certain point of interaction are represented and evaluated explicitly. Hence, the understandable call to pay more attention to the quality of simulations employed in analytical simulations of users' activities is not surprising: 'Each of these simulations can thus be conducted in a variety of ways, and each variation affects the simulation's completeness and accuracy, with a cumulative effect on the evaluation as a whole' (Newman 1998: 318).

The extensive use of mental simulations on the basis of concrete usage scenarios has been attributed to constraints imposed by the human cognitive architecture. Kant and Newell (1984: 106) argue that only the concrete situation can uncover what requirements actually mean, 'because human memory is essentially associative and must be presented with concrete retrieval cues to make contact with the relevant


knowledge'. Although this feature of the human cognitive architecture may provide an explanation for the indispensability of mental simulation, our knowledge about the conditions and functions of mental simulation in design problem solving nevertheless remains rather fragmentary. Since scenario-based processing requires keeping in mind a set of conditions describing the scenario, one may speculate that mental simulation of an interaction model probably leads to high cognitive load, which might have negative consequences for the resources left for other problem-solving processes. Empirical findings concerning perspective-taking processes in speech planning indicate such a resource dependency (Horton and Keysar 1998).

In sum, although the findings mentioned so far shed some light on the cognitive processes involved in the AFL, a more fine-grained analysis of simulations is needed to understand in which way perspective taking constrains design problem solving.

4. Usability evaluation methods as cognitive support

The growing need for an early focus on usability issues in software development projects has not only led to the development of usability evaluation methods but also to an increase in research endeavours aiming to answer the question of which method or method combination to choose in practical development contexts. Analytical techniques are typically distinguished from empirical methods. The latter are often simply referred to as usability testing. Representative analytical methods are Heuristic Evaluation (Molich and Nielsen 1990) or Cognitive Walkthrough (Lewis et al. 1994), detailed guidelines (Smith and Mosier 1986) and, although often considered a task analysis method, GOMS (Card et al. 1983). (The term Heuristic Evaluation is reserved for a method using a short list, usually less than a dozen principles or guidelines (Molich and Nielsen 1990, Nielsen 1994). Typically, the evaluation results from more than one evaluator are aggregated. Some practitioners also use long lists, typically more than a dozen and sometimes even several hundred guidelines (Smith and Mosier 1986). In this paper, the term guideline-oriented method generally refers to short lists of guidelines, because long lists cannot be used online in any systematic way by designers.)

Various research groups have compared different analytical methods in relation to usability testing (e.g. Jeffries et al. 1991, Karat et al. 1992, Desurvire and Thomas 1993; for a comprehensive overview of the literature, see Gray and Salzman 1998). The comparisons are based on criteria such as efficacy (how many usability problems does the method find), efficiency (what effort is needed in terms of human power and time to find usability problems), errors (type of errors found using different methods), and simplicity (how easy is the method to learn). Without going into further detail here, there is some evidence that usability testing uncovers more usability problems than analytical methods (Dumas and Redish 1993). Nevertheless, Jeffries and Desurvire (1992: 92) stress that 'advantages of finding problems very early . . . outweigh the rather modest numbers of problems guidelines or the cognitive walkthrough identified. . . . One must add . . . they can be applied by the developers themselves.'

Generally, the studies available are often hard to compare because they used different experts, and it is therefore not surprising that the results are overall rather inconsistent. Prompted by these deficiencies, Gray and Salzman (1998) presented an in-depth review of a series of influential studies which compared different evaluation methods. Gray and Salzman (1998: 206) argue 'that methodological problems in the studies call into question our knowledge about the efficacy of various methods'.


Their critique points to low power, wildcard effects, lack of experimental control, and a lack of internal and conclusion validity. Beyond the critique presented, Gray and Salzman, as well as some of the commentators on their article, have alluded to another important point: there exist structurally different analytical methods.

On the one hand, there are guideline-oriented methods like Heuristic Evaluation (Molich and Nielsen 1990). In general, guideline-based methods comprise any list of attributes given to evaluators for the purpose of determining whether any item from the list is true for the interface (prototype). Opposed to guideline-oriented methods, walkthrough methods like Cognitive Walkthrough (Polson et al. 1992, Wharton et al. 1994) require evaluators to perform a given set of tasks or 'are asked to evaluate the steps of a task as they would be performed by the user (sometimes a flowchart is given, other times a listing is provided)' (Gray and Salzman 1998: 213).

4.1. Cognitive effects of different methods

What has not been considered sufficiently in the literature is that the type of method (guidelines versus walkthrough) requires different types of initial information (first principles versus scenarios) from the evaluator, and that this might have an effect on the type of reasoning process (simulation versus feature checking).

Generally applicable guidelines, e.g. concerning short-term memory capacity limitations, are decontextualized. Hewett and Adelson (1998) argue that this is the reason why it is difficult for abstract statements to either guide or constrain design. Moreover, problem specifications, which are usually stated in terms of sought behaviours, do not provide cues leading to the retrieval of relevant first principles. The first principles are generalized, and therefore decontextualized, statements, and so there is a mismatch between the formulation of the cue and the target (Hewett and Adelson 1998: 317). Put differently, guideline methods examine the properties of the interface and infer usability problems. Therefore, it may be hypothesized that, when given guidelines, designers tend to concentrate on the development of solutions and only then try to apply the guidelines. In contrast, using a walkthrough technique means that the evaluator tries to 'measure' simulated performance. Here the analytical step comes afterwards, in a stage where the usability problems found have to be attributed to interface features. In evaluation theory, this difference has been termed intrinsic versus payoff measures (Scriven 1977).
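
To make the contrast concrete, the following minimal Python sketch (our illustration, not from the original paper; the interface description and checks are invented) treats guideline-based evaluation as feature checking: predicates applied to a static description of the interface, with no interaction sequence being simulated.

```python
# A minimal sketch of guideline-based feature checking (hypothetical
# interface description and checks, for illustration only).

interface = {
    "buttons": ["read", "delete", "back"],
    "gives_feedback_on_action": False,  # no confirmation after user actions
    "items_shown_at_once": 9,           # list length shown on the display
}

# Each guideline becomes a predicate over interface properties.
guidelines = {
    "Provide explicit feedback": lambda ui: ui["gives_feedback_on_action"],
    "Minimize working memory load": lambda ui: ui["items_shown_at_once"] <= 4,
}

# Usability problems are inferred directly from failed checks,
# without simulating any user goal or action sequence.
problems = [name for name, check in guidelines.items() if not check(interface)]
print(problems)
# ['Provide explicit feedback', 'Minimize working memory load']
```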

4.2. Cognitive walkthrough

Although there have been some remarkable successes in applying the GOMS methodology to tasks that can be characterized as knowledge-lean and highly routinized (Card et al. 1983, Gray et al. 1990), modelling user-system interaction will, for the near future, be left primarily to the designer's mental simulation of envisioning him- or herself in the role of a user. Effective cognitive tools will therefore have to be designed to support mental simulation in the anticipation-feedback loop. A promising vantage point is the so-called cognitive walkthrough (CW) and its variants (Polson et al. 1992, Rowley and Rhoades 1992, Wharton et al. 1994, Rizzo et al. 1997). The cognitive walkthrough technique is based upon a model of exploratory learning (Polson and Lewis 1990). It is assumed that the user learns to perform through a process of goal-directed guessing on the basis of the cues provided by the system interface. The method is therefore especially suited for walk-up systems,


e.g. information booths and ATMs, where users typically interact with the system without reading an instruction manual in advance. The underlying theory also borrows some concepts from cognitive complexity theory, an extension of GOMS (for an overview, see May and Barnard 1995). The cognitive walkthrough is generally communicated not on the grounds of its theoretical underpinnings but as a structured method based on a model of user-system interaction as a cyclical process of action generation and evaluation of action consequences. In a CW, designers have to walk through a task and identify possible goals the user might have at a given point of the interaction sequence. The participant of a CW must then answer a set of questions for each goal and assess whether the user will perform the action necessary for goal satisfaction. At the core of the method lies a straightforward model of the user-system interaction, which is sometimes presented in a flow-chart diagram. This model can be succinctly characterized by the following steps the user has to go through: (1) the user starts with a vague goal, (2) the user explores the interface to find an action assumed to lead to goal satisfaction, (3) the user performs the action assumed to lead to goal satisfaction, and (4) the user observes the interface to check whether the action has led to goal satisfaction.

Associated with each single step, the cognitive walkthrough method provides a set of questions which, when answered, can help to judge whether users will continue and overcome obstacles. Some of the guiding questions are: (1) Will the user be trying to achieve the right effect? (2) Will the user know that the correct action is available? (3) Will the user know that the correct action will achieve the desired effect? (4) If the correct action is taken, will the user see that things are going okay? (Wharton et al. 1994).
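
To make the step/question cycle concrete, here is a minimal Python sketch (ours, not a published CW tool; the pager task steps are invented) that walks an evaluator through each task step, poses the four questions, and collects the problems noted:

```python
# A minimal sketch (not the authors' tool): a cognitive walkthrough as data,
# checking each task step against the four guiding questions above.

CW_QUESTIONS = [
    "Will the user be trying to achieve the right effect?",
    "Will the user know that the correct action is available?",
    "Will the user know that the correct action will achieve the desired effect?",
    "If the correct action is taken, will the user see that things are going okay?",
]

def walkthrough(task_steps):
    """For every step of a task, prompt the evaluator with each CW
    question and collect the usability problems noted."""
    problems = []
    for step in task_steps:
        print(f"Step: {step['action']} (assumed user goal: {step['goal']})")
        for question in CW_QUESTIONS:
            answer = input(f"  {question} [y/n] ")
            if answer.strip().lower() == "n":
                note = input("  Describe the usability problem: ")
                problems.append(
                    {"step": step["action"], "question": question, "note": note}
                )
    return problems

# Hypothetical task steps for the pager interface used later in the paper:
steps = [
    {"goal": "read a new message", "action": "press the 'read' button"},
    {"goal": "delete the message", "action": "press 'delete' and confirm"},
]
# problems = walkthrough(steps)  # runs an interactive evaluation session
```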

Taking into account that the evaluation process just described in its general structure has to be performed for each subtask possible in using the interface, the major practical drawback of the method is evident: performing a CW rigorously is tedious and will lead to over-detailed results. It will therefore often exceed practical time limitations. This is especially true if one adheres to the originally proposed, rather formal version of a CW. There have thus been attempts to speed up the method (Rieman et al. 1991, Rowley and Rhoades 1992). The method is still under development and it can be assumed that more effective versions may emerge (Preece et al. 1994). Despite this general drawback, the CW has the advantage of already being applicable in early phases of development without the need for an available interactive prototype (Jeffries and Desurvire 1992).

CW was initially introduced as an evaluation technique to be applied within the scope of analytical reviews, with the exclusive goal of evaluating a design and communicating the results to designers. However, Polson et al. (1992) noted that a CW can also be performed more informally by the individual designer. Far from proposing a substitution for formal evaluation conducted by experienced usability specialists, other authors have pointed to the added advantages of having designers evaluate the usability of the design. Wright and Monk (1991), for example, proposed a procedure where users complete tasks and think aloud while working, and designers evaluate the design. In an empirical study, the authors found that designers evaluating their own systems detected more usability problems than similarly experienced evaluators who had not been involved in the design. Following this conception in the current paper, we investigate methods which can be used by designers themselves.

Apart from these efficiency-related aspects, CW can be characterized as a question-based method. It has been pointed out that the explicit posing of questions


concerning the usage of the to-be-designed artefact is an effective method for guiding design (Carroll 1996). Mack (1992) presents a framework of questions which, although following a broader and less theoretically rooted concept, covers most of the questions explicated in a CW. In general, the function of the questions is to relate assumed mental processes of the user to states and features of the system. In combination with the model of the user-system interaction, these questions serve as an external scaffold which not only specifies the steps of the evaluation process but also helps the evaluator to activate and focus on the relevant knowledge needed to assess the usability of the design (Kant and Newell 1984).

In sum, the most interesting cognitive support aspect of the CW method seems to be found in its systematization of a cognitive process which is regarded as central in design, that is, the mental simulation of usage on the basis of scenarios in the anticipation-feedback loop. In contrast to guideline-oriented methods, designers who use a walkthrough-based evaluation scheme seem to be prompted to represent users' cognition more explicitly. As a guiding conceptual framework for the designer, this method not only appears to lead to a more thorough perspective-taking process but may also evoke a more pronounced top-down, breadth-first problem-solving strategy. This strategy can be characterized by designers developing a solution at one level of abstraction for all subparts before moving to a new level of detail (Ball et al. 1994). This approach is generally considered to lead to better results because, for example, unjustifiable early commitments can thus be avoided. In the present context, a top-down, breadth-first strategy should be reflected in designers giving user goals a higher priority as design constraints than functional concepts.

4.3. Guidelines

Design guidelines aim to steer designers towards making sound decisions. Guidelines have their origin in psychological research and design experience. Methodologies to derive guidelines from their embodiment in the principles of cognitive psychology have been elaborated by Marshall et al. (1987). Moreover, frameworks such as ISO 9241 (1996) provide guidance for applying guidelines.

As opposed to scenario-based methods, guidelines 'describe general ergonomic principles which are independent of any specific dialogue technique' (ISO 1996). They are thus detached from specific user goals and therefore tend to be used only after a design has materialized. In other words, guidelines may promote a depth-first or even bottom-up strategy, because conformity to guidelines cannot be tested without a concrete solution at hand (Hewett and Adelson 1998). Table 2 summarizes the differences between cognitive walkthrough and guidelines.


Table 2. Features of evaluation methods (see text for an explanation).

Feature                                    Guidelines                       Walkthroughs
Initial information framework              general principles               scenarios
Type of cue                                decontextualized                 contextualized
Hypothetical dominance of design
  problem-solving category
  (goals, functions, solutions)            functional concepts, solutions   user goals
Type of measure                            intrinsic                        simulated payoff
Hypothetical reasoning strategy            bottom-up                        top-down


5. Experiment

To better understand the role of perspective taking in design problem solving under different usability evaluation frameworks, we conducted an experimental study. We hypothesized that walkthrough methods, which are based on the generation of scenarios, lead to a more explicit anticipation of the user perspective and a more thorough evaluation of design options than context-independent guidelines. Contrary to this hypothesis, it could be the case that, although guidelines are decontextualized, designers engage in simulations when confronted with the problem of applying a guideline in the AFL. If this were the case, we would expect no differences in the degree to which designers take the perspective of a user.

Furthermore, we hypothesized that the different methods will have an impact on the general solution strategy. More specifically, we expect that a walkthrough-based rationale of how to ensure the design for usability will more strongly endorse the tendency to adopt a top-down, breadth-first strategy than guidelines.

5.1. Method

To investigate the cognitive processes of designers, we decided to carry out a think-aloud protocol analysis. Because protocol analysis tends to be an extremely tedious and time-consuming form of data analysis, we chose an experimental task of narrowed scope which nevertheless allowed a full iteration cycle to be completed as a whole within an acceptable time frame. A task that fulfils these requirements is the interaction design of a pager. Pagers are small (matchbox-sized) mobile telecommunication devices that can only receive messages. Pagers come in two types: numerical and alphanumerical versions. Alphanumerical pagers can receive and display text messages, as opposed to numerical pagers, which typically just receive and display the telephone number of the person calling the pager. For simplicity, we decided to have participants design the interface of a numerical pager.

5.1.1. Participants. Ten persons participated in the experiment. All of them were master's students of Computer Science or Information Science with theoretical and practical education in technical aspects of user interface design. None of the participants had ever used a pager him- or herself, although seven participants were familiar with the general idea of a pager. Furthermore, none of the participants was familiar with the cognitive walkthrough method. All participants, however, had been confronted with guidelines before the experiment, although they had never systematically used guidelines for specification. Participants were paid $12 for participation.

5.1.2. Procedure and design. Participants received a design brief consisting of the following instruction packages: a cover story explaining the use of pagers in general, rough functional requirements of the device, technical constraints, and think-aloud instructions adopted from Ericsson and Simon (1993). These instructions included two training examples for thinking aloud.

In addition, it was pointed out in the instructions that special attention should be paid to the usability of the interface to be designed, i.e. an interface that is easy to learn and easy to use. Participants were told that, in order to reach this aim, they would be given a conceptual framework for how to ensure that the system will rate high on usability. Depending on random assignment to one of two conditions, participants


received either a checklist of nine standard context-independent guidelines (e.g. 'provide explicit feedback', 'design for error', 'put the user in control', 'minimize working memory load', etc.) or the rationale of the CW method. For a full list of guidelines, see table 3.

For each guideline, an example was presented. The rationale of the CW method consisted of a flowchart comprising the steps the user has to go through when interacting with the device and the set of four questions presented in Section 4.2. For each question type, an example was presented. It is important to note that participants were not instructed to follow a particularly structured procedure to apply the guidelines or the CW method. Instead, they were told to use the information presented in the way they wished. Our goal was to present a conceptual framework for informally evaluating the results of design decisions throughout the design session, not a formal procedure.

Participants were given paper and pencil to make sketches or use in any way they liked. The experiment was self-paced, with an upper limit of 3 hours. Participants received a personalized stopping rule: that is, they were instructed to stop whenever they thought (1) they had covered the complete functionality, (2) the design was specified to a degree sufficient for communicating the design in principle to the people responsible for implementation (participants were not required to produce nice drawings), and (3) a further specification would have required further information gathering or expertise in a domain they did not have (e.g. graphical design for icons). All instructions were presented on paper and left with the participant throughout the whole design session. After the instruction phase, the experimenter left the room. Participants were told, however, that they could ask at any point if they were unclear about the materials or their task. Participants were tested individually in single sessions, design sessions were videotaped, and the verbal reports were transcribed. Two participants, one from each experimental condition, had to be excluded from the analysis because they repeatedly fell into longer phases of silence despite being prompted for verbalization.

5.2. Data analysis

To measure potential effects of the different usability-supporting methods on design cognition in the AFL, two category systems were developed and used to code verbalizations according to the research questions of the present study: (1) depth of perspective taking and (2) direction of constraining. To reduce coding complexity,


Table 3. Checklist of guidelines.

Strive for consistency: interaction sequences should follow a few general rules.
Provide explicit feedback: the system should inform the user at any point about its state and the result of user actions.
Put the user in control: e.g. provide a consistent way to terminate a dialogue.
Avoid modes: a specific user action should lead to the same result independent of the system's state.
Design for error: minimize the damage produced by errors.
Error messages: make error messages comprehensible.
Minimalistic design: 'less is more'; avoid visual and functional clutter.
Language of the user: use terms and signs that are comprehensible for the user.
Memory load: minimize memory load; prefer recognition over recall.


verbalizations which were classified as monitoring and control of problem solving, deflections, and instruction reading were screened out from the transcripts before coding. Two independent raters coded the segments. Segmentation, i.e. separating the text into smallest meaningful units, was done separately for both category systems because they were based on different units of analysis.

5.2.1. Category system 1: depth of perspective taking. One general goal of this study was to show that different conceptual frameworks for usability evaluation have an impact on the problem-solving process of designers with respect to the effort taken to represent user cognition. There are different ways such a hypothesis could be tested. In our study, we distinguish three levels of depth of perspective taking (Clark 1996): (1) role, (2) assumption of mutual knowledge, and (3) simulated cognition. To illustrate the respective content, the following paragraphs present descriptions and examples of segments coded this way.

5.2.1.1. Role. This category contains segments describing that there is an action-performing role, an information-presenting role, or an information-perceiving role. In principle, the role can be assigned either to the user or to the device. Statements framed in this mode genuinely do not assume any problems in the interaction. This means the designer assumes that the information presented is accurate and understood, and that the actions to be taken are correct.

For illustration, consider the following statement that was coded as [role]: 'So the user has to do something with the pager, so we need buttons'.

5.2.1.2. Assumption of mutual knowledge. This category comprises segments where designers draw on information that could justify the assumption that the user will understand what to do next. In general, the basis for this assumption can be twofold: (1) the user knows because of universal knowledge, and (2) the user knows because there is enough evidence in the previous dialogue. (In principle, there is a third possibility, that is, the system tells the user what to do, but this possibility can be excluded for numerical pagers.) This differentiation is also reflected in a description of what an analyst has to keep in mind when conducting a CW (Wharton et al. 1994: 315). For illustration, consider the following examples.

[Mutual knowledge, based on universal knowledge]: 'Therefore always the first message should be displayed first, that is somehow logical'.

[Mutual knowledge, based on previous dialogue]: 'That shouldn't be a problem (for the user), if this has already happened'.

5.2.1.3. Simulated cognition. Finally, the most demanding and profound way for the designer to mentally represent the user is through specific representations of user cognition, either as concrete intentions or concrete assumptions. Examples of such concrete representations of user cognition are:

[Simulated cognition]: 'Let's say, he wants to know what time it is, at what day we are, . . .'

[Simulated cognition]: 'I want to start now, when I now get a message, then beep.'

The last examples have also been chosen to demonstrate that it does not make sense to distinguish cases where the designer speaks of him- or herself as the user by using


first-person pronouns (self-simulation) and cases where this is indicated linguistically by using third-person pronouns.

5.2.2. Category system 2: order of constraining. The category system presented above is useful to analyse the rigour with which the perspective of a user is taken. More challenging is the question of whether it can be shown that different conceptual frameworks for usability evaluation lead to a change in solution strategy.

In general, constraints imposed by assumed user goals and plans could be viewed as any other technical constraint. However, because these constraints are wanted per se in user interface design, they should have the highest priority. In contrast, functional constraints, including usability guidelines like 'be consistent', are not valued for their own sake; that is, they are determined by the overall system configuration (Smith and Browne 1993). Moreover, the order in which different constraints are taken into account has a marked effect on possible solutions because 'later decisions are constrained by earlier decisions in that they are taken within the context of an existing partial solution, and each solution further limits the range of possible alternatives' (Logan 1989: 189).

As described in previous sections, users' cognition is central to the Cognitive Walkthrough approach. For participants who were given this method as a conceptual framework, we therefore expected to find more verbalizations indicating that user goals lead to functional specifications, and functional specifications that are evaluated with respect to a certain user cognition. In contrast, participants in the guideline condition of the study were expected to be more oriented toward functional concepts and solutions. That is, we expected to find more verbalizations where functional specifications lead to detail solutions, and solutions which are evaluated from a functional viewpoint.

To examine this hypothesis, we coded segments of verbal protocols according to four classes: (1) anticipated user cognition leading to a functional specification or solution (u.cog→func,sol), (2) functional specification or solution leading to an evaluation with respect to a user cognition (func,sol→u.cog), (3) functional specification leading to a detail solution (func→sol), and (4) solution leading to an evaluation with respect to a functional concept (sol→func). Initially it was planned to distinguish between u.cog→func and u.cog→sol, and vice versa. However, the ratings for this level of detail were rather inconsistent. Raters noted that it is generally easy to distinguish a functional concept in the close neighbourhood of a detail solution, but that it is often rather arbitrary to distinguish a detail solution from a functional concept when analysed in the context of a user goal. We therefore decided not to make distinctions between solutions and functional concepts in the context of goals. For illustration, annotated examples of segments coded according to this scheme are presented in the following paragraph. (All examples of coded segments in this paper are rough translations of German statements which, in the original versions, are usually noisier, i.e. paraverbal expressions, pauses, exclamations and repetitions of sentence structures are not reflected in the English translation.):

(1) 'Then the user wants to read a new message [user cognition →], hm, it would be best, if this could be done somehow automatically [functional specification or solution]'.


(2) 'Do I need other types of signalling messages [functional specification or solution →], the user might be in a situation when he doesn't want an annoying loud beep [user cognition]'.

(3) 'We should build something like a memory-full thing [functional specification →]; we could do this with an unexpected behaviour, like messages blinking, instead of permanently showing them [solution]'.

(4) 'With "back" we get there [solution →], yes, we should keep that consistent [functional specification]'.
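
As an illustration of how such codes can be tallied, the following minimal Python sketch (ours; the example codes are invented, not taken from the study's protocols) computes the relative frequency of each transition class for one participant:

```python
# A minimal sketch (illustration only): tallying coded protocol segments
# into the four transition classes of category system 2.

from collections import Counter

CLASSES = ["u.cog->func,sol", "func,sol->u.cog", "func->sol", "sol->func"]

def relative_frequencies(coded_segments):
    """coded_segments: list of class labels, one per coded segment."""
    counts = Counter(coded_segments)
    total = sum(counts[c] for c in CLASSES)
    return {c: counts[c] / total for c in CLASSES}

# Hypothetical codes for one walkthrough-condition participant:
segments = ["u.cog->func,sol", "sol->func", "u.cog->func,sol", "func->sol"]
print(relative_frequencies(segments))
# {'u.cog->func,sol': 0.5, 'func,sol->u.cog': 0.0, 'func->sol': 0.25, 'sol->func': 0.25}
```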

5.3. Results and discussion

The inter-rater reliability, measured as the ratio of concordantly coded segments to the number of all segments, was in the range of [0.61, 0.87], with higher concordance generally found for category system 2. This level of concordance is judged acceptable (Ericsson and Simon 1993). The mean number of segments which could be coded with category system 1 was 40.9 for the guidelines method versus 57.5 for the walkthrough method, and 58.7 versus 67.92 for category system 2, respectively. This result already indicates that participants in the walkthrough group overall verbalized more relevant statements. The finding is supported by the difference in mean time to complete the task, which just failed to reach significance in a U-test with this sample size (53 min in the guideline condition versus 86 min in the walkthrough condition, p < 0.08). This result is compatible with the general finding that walkthrough methods are rather tedious if used according to the formal procedure specified in the initial papers (May and Barnard 1995). Moreover, the data indicate that even if used in a rather informal way, walkthroughs seem to slow down evaluators or designers, respectively. However, the mean absolute difference of 33 min is not necessarily critical, especially if it is taken into account that in design sometimes 'slower is better'.
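
For concreteness, here is a minimal Python sketch (ours; the rater codes and completion times below are invented placeholders, not the study's raw data) of the two statistics used in this section, percentage agreement between raters and a Mann-Whitney U-test:

```python
# A minimal sketch of the two statistics used here (hypothetical data).

from scipy.stats import mannwhitneyu

def percentage_agreement(codes_rater1, codes_rater2):
    """Ratio of concordantly coded segments to the number of all segments."""
    assert len(codes_rater1) == len(codes_rater2)
    agree = sum(a == b for a, b in zip(codes_rater1, codes_rater2))
    return agree / len(codes_rater1)

print(percentage_agreement(["role", "sim", "mutual", "sim"],
                           ["role", "sim", "role", "sim"]))  # 0.75

# Per-participant task completion times in minutes (hypothetical):
guideline_times = [48, 51, 55, 58]
walkthrough_times = [79, 84, 88, 93]
stat, p = mannwhitneyu(guideline_times, walkthrough_times,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```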

Table 4 presents the results for the category system 'depth of perspective taking'. As can be seen, the effect of the different methods used to guide the design for usability on the level of perspective taking of designers is distinctly present and in the direction expected.

Compared with the guideline condition, participants in the cognitive walkthrough condition clearly verbalized more statements that were coded as an anticipated user cognition. The difference reached statistical significance. However, this was not the case for the categories 'role' and 'mutual knowledge'. Table 4 also shows that, alongside category system 1, segments were counted which unambiguously referred to specific guidelines (e.g. 'be consistent'). Although the effect is in the expected direction, that is, participants in the guideline condition referred more often to specific guidelines, the difference did not reach significance. This finding may be attributed to the fact that all participants knew about basic usability guidelines in advance. On the other hand, note that even participants in the guideline condition quite often verbalized a simulated user cognition. This result underscores the finding of Guindon (1990) that simulation is a ubiquitous phenomenon in design. The pattern evolving for the relative frequencies (relative to the number of coded segments in each group separately) is almost identical to that of the absolute numbers, indicating that although the guideline group verbalized less, the frequencies within the categories were comparable between groups.

The results for category system 2, presented in table 5, confirm and extend these findings. Participants in the cognitive walkthrough condition clearly verbalized more statements in which an anticipated user cognition seems to drive design problem solving. That is, we find more instances of anticipated user cognition leading to a functional specification or detailed solution (u.cog→func,sol) and more functional specifications or solutions evaluated with respect to the constraint of an anticipated user cognition (func,sol→u.cog). This finding is significant for u.cog→func,sol and almost reaches the level of significance for func,sol→u.cog. The sharp difference in high-level design considerations is contrasted with the finding that on the functional and detail solution level of the problem no differences were found.

Table 5. Direction of constraining.

                     Cog. walkthrough    Guidelines
Type                 %     Mean          %     Mean     U-test
u.cog→func,sol       26    16.47         11    5.75     p < 0.03*
func,sol→u.cog       14    11            6     3.2      p < 0.08
func→sol             20    13.25         34    21.75    p < 0.66
sol→func             40    27.20         49    28       p < 1.00

Although we can observe a slight tendency for participants in the guideline group to produce a higher percentage of functional concepts leading to detail solutions (func→sol) and solutions that were evaluated in the light of a functional concept (sol→func), these tendencies could not be substantiated on the level of statistical significance. Again, the relative frequencies within the groups do not support the view that the differences found may be attributable to the fact that, in general, participants in the walkthrough group just verbalized more.
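Operationally, these directional relations can be counted as transitions between consecutively coded segments of a protocol. A minimal sketch, using the labels of table 5 on an invented code sequence:

```python
# Sketch: counting directed constraint relations (category system 2)
# as transitions between consecutive segment codes in one protocol.
# The protocol sequence below is invented for illustration.
from collections import Counter

protocol = ["u.cog", "func", "sol", "u.cog", "sol", "func", "sol"]
transitions = Counter(zip(protocol, protocol[1:]))
for (source, target), n in sorted(transitions.items()):
    print(f"{source:5s} -> {target:5s}: {n}")
```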

6. General discussion

The results of this study suggest that the anticipation of users' cognition and the simulation of user-system interaction play an important role in the design of user interfaces. The study thus confirms the central function of simulation in design problem solving which has been emphasized by other authors (Guindon 1990, Carroll and Rosson 1992). Furthermore, the results obtained by analysing the data on the basis of category system 1 indicate that the depth of perspective taking induced by different usability methods may differ substantially depending on the type of method used as a guiding instrument to ensure usability. Although the explicit representation and anticipation of specific user goals and assumptions can be found in both experimental conditions, participants in the walkthrough condition were clearly much more concerned with constraints imposed by anticipated goals and assumptions than participants who were supported by context-independent usability guidelines. Simply providing the rationale of a scenario-based walkthrough method seems to facilitate the consideration of specific user goals.


In contrast, the results do not support the possibility that, whatever the guiding method, designers will mentally simulate user-system interaction to the same extent because the design problem per se constrains the solution process. Designers in the guidelines group do not compensate for the lack of initial focus on user goals by generating scenarios and anticipating user goals themselves. It may even be argued that guidelines tempt designers to cut down on simulations because they wrongly suppose that generating a solution and applying the guideline will do the job.

It is noteworthy that the results obtained with the two category systems are quite compatible. In particular, the results from category system 2 show that the walkthrough method not only leads to a more elaborate perspective-taking process; they also demonstrate that these elaborations seem to regulate the problem-solving process beyond simulations, i.e. they influence solution development. Guidelines apparently lead to a focus on the functional and detail solution level of the problem structure, resulting in mental activities that are more remote from user cognition. The significant accumulation of detail- and functional-level statements suggests that participants in the usability guidelines condition bridge the gap between the requirements given in the instruction and the detail solution rather quickly and prefer a depth-first strategy. A depth-first expansion of design has recently been defended on the grounds that it does not represent an unsystematic, opportunistic strategy and may even be necessary to some extent because it helps to reduce complexity (Ball and Ormerod 1995). However, a depth-first manner of solution development has to be viewed as critical, because it will automatically make premature commitments more likely. The risk that successive partial solutions will be triggered by surface cues (provided by the detailed description already present at early phases when using a depth-first strategy) is much higher compared with a solution process characterized by extensive high-level considerations. This seems to be an important finding because it relates conceptions of what usability means for the individual designer and evaluator to the design problem-solving process in general.

Although the pattern of results obtained suggests that different frameworks for how to design usable interfaces have a rather strong effect on the AFL, some qualifications are in order.

Although participants in the present study were not novices, one has to be cautious when generalizing these results, because participants did not have the amount of experience that would qualify them as expert user interface designers. It could be argued that the results obtained are partly attributable to non-experts being generally more 'susceptible' to the experimental manipulation, and that the effects would disappear if the experiment were conducted with expert users. Although we doubt such a strong interpretation, based on the fact that most of the participants in the study were highly skilled, future research will have to answer whether the effects found scale up to real expert designers and evaluators.

We also recognize that phases in which designers work by themselves are just one part of practical design activity and that the majority of design decisions (at least in larger projects) are taken in group sessions (Olson et al. 1992). However, we would like to reply that a considerable amount of design work, especially problem decompositions and modules which need one to several hours for a first comprehensive solution proposal, is quite often delegated to individual designers. We propose that, particularly in this situation, supporting methods may be needed even more than in group sessions, where there obviously exist correctives in the form of colleagues who could limit idiosyncratic and biased decisions. For this purpose, CW and guideline-based methods can be combined, using the CW to ensure the overall task match and then providing the corresponding guidelines to accomplish the specification.

In sum, the present study shows that investigating the cognitive support function of usability methods is a worthwhile effort which provides useful information for the design of supporting usability methods that are tuned to the needs of designers.

Acknowledgements

The work was supported by the Fraunhofer-Institute for Nondestructive Testing, Saarbrücken, Germany.

References

AKIN, O. 1986, Psychology of Architectural Design (London: Pion).
ANDERSON, J. R. 1983, The Architecture of Cognition (Cambridge, MA: Harvard University Press).
ARCHER, L. B. 1969, The structure of the design process, in G. Broadbent and A. Ward (eds), Design Methods of Architecture (New York: Wittenborn).
BALL, L. J., EVANS, J. ST. B. T. and DENNIS, I. 1994, Cognitive processes in engineering design: a longitudinal study, Ergonomics, 37, 1753–1786.
BALL, L. J. and ORMEROD, T. C. 1995, Structured and opportunistic processing in design: a critical discussion, International Journal of Human–Computer Studies, 43, 131–151.
BARNARD, P. J. 1987, Cognitive resources and the learning of human–computer dialogs, in J. M. Carroll (ed.), Interfacing Thought: Cognitive Aspects of Human–Computer Interaction (Cambridge, MA: MIT Press), 112–158.
BARNARD, P. J. 1991, Bridging between basic theories and the artifacts of human–computer interaction, in J. M. Carroll (ed.), Designing Interaction: Psychology at the Human–Computer Interface (Cambridge, MA: MIT Press), 103–127.
BROWNE, D. C. and CHANDRASEKARAN, B. 1989, Design Problem Solving (San Mateo: Morgan Kaufman).
CARD, S. K., MORAN, T. P. and NEWELL, A. 1983, The Psychology of Human–Computer Interaction (Hillsdale: LEA).
CARROLL, J. M. 1996, Human–computer interaction: psychology as a science of design, Annual Review of Psychology, 48, 61–83.
CARROLL, J. M., MACK, R. L., ROBERTSON, S. P. and ROSSON, M. B. 1994, Binding objects to scenarios of use, International Journal of Human–Computer Studies, 41, 243–276.
CARROLL, J. M. and ROSSON, M. B. 1992, Getting around the task–artifact framework: how to make claims and design by scenario, ACM Transactions on Information Systems, 10, 181–212.
CHAN, C. S. 1990, Cognitive processes in architectural design problem solving, Design Studies, 11, 66–80.
CLARK, H. H. 1996, Using Language (Cambridge: Cambridge University Press).
CROSS, N., CHRISTIAANS, H. and DORST, K. 1996, Analysing Design Activity (New York: Wiley).
DE KLEER, J. and BROWN, J. S. 1983, Assumptions and ambiguities in mechanistic mental models, in D. Gentner and A. L. Stevens (eds), Mental Models (Hillsdale: Erlbaum), 1565–1590.
DESURVIRE, H. and THOMAS, J. C. 1993, Enhancing the performance of interface evaluation using non-empirical usability methods, Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting (Santa Monica: Human Factors and Ergonomics Society), 1132–1136.


DIAPER, D. 1989, Task Analysis for Human–Computer Interaction (Chichester: Ellis Horwood).
DUMAS, J. S. and REDISH, J. C. 1993, A Practical Guide to Usability Testing (Norwood: Ablex).
DUTKE, S. 1994, Mentale Modelle: Konstrukte des Wissens und Verstehens [Mental Models: Constructs of Knowledge and Understanding] (Göttingen: Angewandte Psychologie).
ERICSSON, K. A. and SIMON, H. A. 1993, Verbal Reports as Data, rev. edn (Cambridge, MA: MIT Press).
GERO, J. S. 1990, Design prototypes: a knowledge representation schema for design, AI Magazine, 11, 27–36.
GOEL, V. 1995, Sketches of Thought (Cambridge, MA: MIT Press).
GOEL, V. and PIROLLI, P. 1992, The structure of design problem spaces, Cognitive Science, 16, 395–429.
GOOD, M. D. 1988, Software usability engineering, Digital Technical Journal, 6, 125–133.
GOULD, J. D. and LEWIS, C. 1985, Designing for usability: key principles and what designers think of them, Communications of the ACM, 28, 300–311.
GRAY, W. D., JOHN, B., STUART, R., LAWRENCE, D. and ATWOOD, M. E. 1990, GOMS meets the phone company: analytic modeling applied to real-world problems, Proceedings of INTERACT '90 (Amsterdam: Elsevier), 634–639.
GRAY, W. D. and SALZMAN, M. C. 1998, Damaged merchandise? A review of experiments that compare usability evaluation methods, Human–Computer Interaction, 13, 203–263.
GUINDON, R. 1990, Knowledge exploited by experts during software system design, International Journal of Man–Machine Studies, 33, 279–304.
HACKOS, J. T. and REDISH, J. C. 1998, User and Task Analysis for Interface Design (New York: Wiley).
HASGODAN, G. 1996, The role of user models in product design for assessment of user needs, Design Studies, 17, 19–33.
HEWETT, T. T. and ADELSON, B. 1998, Psychological science and analogical reminding in the design of artifacts, Behavior Research Methods, Instruments and Computers, 30, 314–319.
HORTON, W. S. and KEYSAR, B. 1996, When do speakers take into account common ground?, Cognition, 59, 91–117.
ISO 9241, 1996, Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 10: Dialog Principles (Geneva: ISO).
JEFFRIES, R. and DESURVIRE, H. 1992, Usability testing vs. heuristic evaluation: was there a contest?, SIGCHI Bulletin, 24, 39–41.
JEFFRIES, R., MILLER, J. R., WHARTON, C. and UYEDA, K. M. 1991, User interface evaluation in the real world: a comparison of four techniques, Proceedings of the ACM CHI '91 Conference on Human Factors in Computing Systems (New York: ACM), 119–124.
KAHNEMAN, D. and TVERSKY, A. 1982, The simulation heuristic, in D. Kahneman and A. Tversky (eds), Judgement Under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press), 201–208.
KANT, E. and NEWELL, A. 1985, Problem-solving techniques for the design of algorithms, Information Processing and Management, 20, 97–118.
KARAT, C.-M., CAMPBELL, R. and FIEGEL, T. 1992, Comparison of empirical testing and walkthrough methods in user interface evaluation, Proceedings of the ACM CHI '92 Conference on Human Factors in Computing Systems (New York: ACM), 397–404.
LOGAN, B. S. 1989, Conceptualizing design knowledge, Design Studies, 10, 188–195.
MACK, R. 1992, Questioning design: toward methods for supporting user-centered software engineering, in T. Landauer (ed.), Questions and Information Systems (Hillsdale: Erlbaum), 101–130.
MARSHALL, C., NELSON, C. and GARDINER, M. M. 1987, Design guidelines, in M. M. Gardiner and B. Christie (eds), Applying Cognitive Psychology to User-Interface Design (New York: Wiley), 243–278.
MAY, J. and BARNARD, P. 1995, The case for supportive evaluation during design, Interacting with Computers, 7, 115–143.


MAYHEW, D. 1992, Principles and Guidelines in Software User Interface Design (Englewood Cliffs: Prentice-Hall).
MILLER, G. A. 1956, The magical number seven plus or minus two: some limits of our capacity for information processing, Psychological Review, 63, 81–87.
MOLICH, R. and NIELSEN, J. 1990, Improving the human–computer dialogue, Communications of the ACM, 33, 338–348.
NEWELL, A. and SIMON, H. A. 1972, Human Problem Solving (Englewood Cliffs: Prentice-Hall).
NEWELL, A. and SIMON, H. A. 1976, Computer science as empirical inquiry: symbols and search, Communications of the ACM, 19, 113–126.
NEWMAN, W. M. 1998, On simulation, measurement, and piecewise usability evaluation, Human–Computer Interaction, 13, 316–323.
NIELSEN, J. 1994, Heuristic evaluation, in J. Nielsen and R. L. Mack (eds), Usability Inspection Methods (New York: Wiley), 25–62.
OLSON, G. M., OLSON, J. S., CARTER, M. R. and STORROSTEN, M. 1992, Small group design meetings: an analysis of collaboration, Human–Computer Interaction, 7, 347–374.
PAYNE, S. J. and GREEN, T. R. G. 1986, Task–action grammars: a model of the mental representation of task languages, Human–Computer Interaction, 2, 93–133.
POLSON, P. G. and LEWIS, C. 1990, Theory based design for easily learned interfaces, Human–Computer Interaction, 5, 191–220.
POLSON, P., LEWIS, C., RIEMAN, J. and WHARTON, C. 1992, Cognitive walkthroughs: a method for theory-based evaluation of user interfaces, International Journal of Man–Machine Studies, 36, 741–773.
PREECE, J., ROGERS, Y., SHARP, H., BENYON, D., HOLLAND, S. and CAREY, T. 1994, Human–Computer Interaction (Harlow: Addison-Wesley).
RIEMAN, J., DAVIES, S., HAIR, D. C., ESEMPLARE, M., POLSON, P. and LEWIS, C. 1991, An automated cognitive walkthrough, Proceedings of ACM CHI '91 (New York: ACM), 417–428.
RIZZO, A., MARCHIGIANI, E. and ANDREADIS, A. 1997, The AVANTI project: prototyping and evaluation with a cognitive walkthrough based on Norman's model of action, Proceedings of ACM DIS '97 (New York: ACM), 305–310.
ROWLEY, D. E. and RHOADES, D. G. 1992, The cognitive jogthrough: a fast-paced user interface evaluation procedure, Proceedings of ACM CHI '92 (New York: ACM), 389–395.
SCRIVEN, M. 1977, The methodology of evaluation, in A. A. Bellack and H. M. Kliebard (eds), Curriculum and Evaluation (Berkeley: McCutchan), 334–371.
SHEPARD, R. N. and COOPER, L. A. 1982, Mental Images and their Transformations (Cambridge, MA: MIT Press).
SMITH, G. F. and BROWNE, G. J. 1993, Conceptual foundations of design problem solving, IEEE Transactions on Systems, Man, and Cybernetics, 23, 1209–1219.
SMITH, S. L. and MOSIER, J. N. 1986, Guidelines for Designing User Interface Software, ESD-TR-86-278 (Bedford, MA: MITRE Corporation).
STEVENS, A. L. and COLLINS, A. 1980, Multiple conceptual models of a complex system, in R. E. Snow, P. Federico and W. E. Montague (eds), Aptitude, Learning, and Instruction, vol. 2 (Hillsdale: Erlbaum), 177–198.
ULLMAN, D. G., DIETTERICH, T. G. and STAUFFER, L. A. 1988, A model of the mechanical design process based on empirical data, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 2, 33–52.
VISSER, W. 1990, More or less following a plan during design: opportunistic deviations in specification, International Journal of Man–Machine Studies, 33, 247–278.
WAHLSTER, W. 1991, User and discourse models for multimodal communication, in J. W. Sullivan and S. W. Tyler (eds), Intelligent User Interfaces (Reading, MA: Addison-Wesley), 45–67.
WALLACH, D. 1998, Komplexe Regelungsprozesse [Complex System Control] (Wiesbaden: DUV).
WALLACH, D. and PLACH, M. 1999, Cognitive architectures: a theoretical foundation for HCI, in H.-J. Bullinger and J. Ziegler (eds), Human–Computer Interaction: Ergonomics and User Interfaces (Mahwah: Erlbaum), 491–495.


WHARTON, C., RIEMAN, J., LEWIS, C. and POLSON, P. 1994, The cognitive walkthrough method: a practitioner's guide, in J. Nielsen and R. L. Mack (eds), Usability Inspection Methods (New York: Wiley), 105–140.
WHITESIDE, J. A. 1986, Usability engineering, UNIX Review, 4, 22–37.
WHITESIDE, J. A., BENNETT, J. and HOLTZBLATT, K. 1988, Usability engineering: our experience and evolution, in M. Helander (ed.), Handbook of Human–Computer Interaction (Amsterdam: Elsevier), 791–818.
WRIGHT, P. C. and MONK, A. F. 1991, A cost-effective evaluation method for use by designers, International Journal of Man–Machine Studies, 35, 891–912.

About the authors

Marcus Plach received his PhD in Cognitive Science from Saarland University (Germany) in 1997. He then led a project on user-centered system design at the Fraunhofer Institute for Non-destructive Testing. Since 1999 he has been an Assistant Professor at Saarland University. Together with Dieter Wallach he founded ERGOSIGN in 1999, a consultancy specializing in usability engineering and e-learning. Marcus Plach has been a lead consultant for a variety of international companies and institutions, e.g. Credit Suisse and the European Patent Office.

Dieter Wallach received his PhD in Cognitive Science from Saarland University (Germany) in 1996. In 1997 he was a Visiting Scholar at the Department of Psychology at Carnegie Mellon University, Pittsburgh, USA. From 1998 to 2000 he was a senior scientist at the University of Basle, Switzerland. Together with Marcus Plach he founded ERGOSIGN in 1999, a consultancy specializing in usability engineering and e-learning. In 2001 he was appointed to Professorships in Software Engineering and Human–Computer Interaction at the Universities of Applied Sciences at Heilbronn and Kaiserslautern (Germany).
