
Interacting with Computers 9 (1997) 73-104


Supporting user-adapted interface design: The USE-IT system

D. Akoumianakis, C. Stephanidis*

Institute of Computer Science, Foundation for Research and Technology-Hellas, Science and Technology Park of Crete, P.O. Box 1385, Heraklion, Crete, GR-71110 Greece

Abstract

This paper describes USE-IT, a knowledge-based tool for automating the design of interactions at the physical level, so as to ensure accessibility of the target user interface by different user groups, including people with disabilities. To achieve this, USE-IT elicits, manipulates and interprets representations of design knowledge in order to reason about, select and decide upon lexical adaptation constituents of a user interface. Adaptation constituents are attributes of abstract interaction object classes. USE-IT generates a collection of adaptation rules (i.e. a lexical specification scenario), based on design constraints generated from three basic knowledge sources: (a) the user model, (b) the task schema, and (c) a set of platform constraints (i.e. interaction objects, attributes, device availability, etc.). A data structure called the adaptability model tree has been designed to (i) facilitate the development of plausible semantics of adaptation at the lexical level of interaction, (ii) allow unification of design constraints, and (iii) enable selection of maximally preferred design options. The output of USE-IT can be subsequently interpreted by the run-time libraries of a high-level user interface development toolkit, which provides the required implementation support for realizing the user-adapted interface on a target platform. © 1997 Elsevier Science B.V.

Keywords: User interface adaptation; Design representation; Design assistance

1. Introduction

In recent years, user interface adaptation has been the subject of considerable attention in several research efforts aiming to advance User Interface Software and Technology (UIST) towards greater quality of use of the resulting interactive systems. The primary focal point of such activities lies within the realm of designing, developing and implementing interactive computer systems which can better accommodate the end-user abilities, requirements, preferences and interests.

* Corresponding author. Tel.: +30 81 91741; fax: +30 81 391740; e-mail: [email protected]

0953-5438/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved. PII S0953-5438(97)00007-6

User interface adaptations may be considered in the context of several different dimensions, such as the timing of the adaptation, the controlling agent, the level of the adaptation, etc. [1]. As a result, interactive computer systems may be distinguished into adaptable and adaptive systems. A system is called adaptable if it provides tools that make it possible for the end user to change the system's characteristics [2]. On the other hand, an adaptive system can change its own characteristics automatically on the basis of assumptions made about the current user at run-time [1-3].

Despite the substantial research efforts on user interface adaptation and the corresponding results, the relevant literature lacks universal definitions of what may be considered as a user interface adaptation constituent, the software architectures that are required to support adaptation of user interfaces during their design, development and implementation life cycles, as well as the underlying knowledge components that may provide the means to reason about and set the value of adaptation constituents. This paper addresses some of the above issues in the context of user-adapted interface design for disabled users.

1.1. Related work

Some of the early attempts to construct adaptable systems are OBJECTLENS [4], BUTTONS [5] and Xbuttons [6]. All these systems allow the user to modify certain aspects of their interactive behaviour while working with them. More recently, the AURA project (Adaptable User Interfaces for Reusable Applications) of the ESPRIT-II Programme of the European Commission has investigated thoroughly the issue of adaptability [7] and the underlying architectural abstractions for adaptable systems. Adaptability has also been the chief objective in the development of the PODIUM system [8].

In addition to the above research efforts towards adaptability, a number of systems have been developed to investigate the complementary technique of adaptivity. The state of the art in adaptive user interfaces includes OPADE [9], AIDA [10] and UIDE [11], as well as the results of several projects at national and international levels, such as AID [12] and FRIEND21 [13]. In addition to the above architectures for adaptable and/or adaptive user interfaces, there have been a few other proposals which, however, are narrower in scope. Zimek [14] described the design of an architecture for adaptable and adaptive UIMS in production. This architecture comprises four functional units, namely a user modelling component, a task modelling component, a strategy component and a UIMS. An alternative architecture has been described by Arcieri et al. [15]. Finally, there has been substantial work towards the development of dedicated tools and techniques driving and supporting adaptive behaviour, such as GUMS [16], UM [17], UMT [18], BGP-MS [19] and PROTUM [20,21].

1.2. Research questions and rationale

Despite the above research work, user interface adaptation techniques, as implemented today, suffer from several shortcomings which limit the capabilities and opportunities for intelligent and co-operative user interface behaviour. These shortcomings come under two main headings, namely:

1. Narrow and pre-determined set of adaptation constituents. Recent work has explored only a few cases of user interface adaptation (e.g. task simplification, prompting, help, menu order), but it neglects a wide range of basic design issues, such as, for example, the choice of input/output device, the interaction techniques used, feedback modalities, etc., which traditionally form the core of what is commonly referred to as physical (or lexical) design. Instead, the adaptations supported are tightly coupled to the user interface development platform (i.e. toolkit), and as a result, the range of adaptable and adaptive behaviour, as well as the type and style of interaction that can be supported, are limited.

2. Lack of comprehensive support of user interface adaptation in the context of user interface development environments. Adaptation is typically not embedded in user interface development systems; thus, adaptable and adaptive interfaces cannot be easily maintained. Moreover, recent research has not considered architectural abstractions for user interface adaptation; the relevant literature reports only on prototype systems and application-specific architectures [1].

To cater for some of the above shortcomings, we have devised an approach which allows the articulation of design knowledge towards the rational selection of maximally preferred adaptations of lexical attributes of abstract interaction objects. This paper describes the adopted approach and the supporting tool environment. The paper is organized as follows. The next section describes the primitives of user interface adaptation at the physical level. Then, the architecture of the USE-IT system is presented, detailing by means of an example the way in which the user interface adaptation primitives have been embedded in the various software components. Following this, we comment upon recent experience in the use of the system and the results of its evaluation. The paper concludes with a summary and identification of future work.

2. Designing user-adapted interactions at the physical level

In what follows, we will be concerned with adaptations that take place during the design and development phases of a user interface (as opposed to run-time) and which are initiated by the user interface development team (as opposed to the system or the end user). The appropriation of the benefits of this type of adaptation (i.e. adaptability) has only recently been practically realized (see for example [22,23]).

In the present context, user interface adaptability means that a user interface can be automatically adapted, during the initial design and development phases, to cater for the diverse abilities, requirements and preferences of different user groups, as well as the specificities (i.e. interaction elements) of a particular development platform, such as MS-Windows. This type of adaptability is initiated neither by the user nor by the system; it is identified and implemented by the developer of the user interface of the interactive application. This paper claims that such adaptability may be automated and supported by a tool in the form of design assistance. There are several reasons why user interface designers should be empowered with high-level tools supporting user interface adaptability during the early design and development phases of a user interface. First, providing support for user adaptability during interface design ensures that the resulting interface will be accessible by the target user group. Second, user interface adaptability is a prerequisite for adaptivity, as the latter assumes that adaptations of the user interface may be initiated and controlled at run-time by making assumptions about the user.

2.1. An example

In order to substantiate the value of this type of adaptability, a typical scenario is briefly elaborated and used as a reference example throughout the rest of the paper. Additionally, Appendix A summarizes the content, structure and scope of some of the resulting knowledge descriptions underpinning the example. The example involves the design of a user interface for a communication aid intended to be used by motor-impaired users. It is assumed that the target implementation platform is Windows'95 enhanced with automatic scanning of interaction objects. This entails that the Windows'95 object library is augmented at two levels: (a) the interaction level, by augmenting all basic controls, including top-level windows and window management operations, so that dialogue with binary switches is enabled; and (b) the programming level, by introducing new parameters to basic Windows'95 controls in order to allow programming access by interface developers to all aspects of the augmented interaction techniques. These interaction properties have been introduced in the MSTOOL toolkit [24], which provides the interface development facilities to account for scanning of all Windows'95 object classes (i.e. Button, Scrollbar, CheckBox, RadioButton, GroupBox, EditControl, ComboBox), as well as the supported scanning attributes (i.e. dm_borderstyle, dm_borderwidth, em_borderstyle, dm_bordercolor.red, dm_bordercolor.green, dm_bordercolor.blue, em_borderwidth, em_bordercolor.red, em_bordercolor.green, em_bordercolor.blue, timescan, scanmode).

In the above example, the "dm_" prefix indicates that scanning is in dialogue mode (i.e. the user has selected to engage in dialogue with an object), while the "em_" prefix indicates that scanning is in exit mode (i.e. when selecting, the scanning focus will move to the next object in the sequence). Moreover, the attribute scanmode may take one of the following values:

1 = one switch and time scanning
2 = two switches and time scanning
3 = two switches, no time scanning
4 = five switches in one hand, no time scanning

Finally, for the attribute colour, we assume that:

(red = 0, green = 255, blue = 0) is green
(red = 0, green = 0, blue = 0) is black
(red = 255, green = 0, blue = 0) is red
(red = 0, green = 0, blue = 255) is blue
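As an illustration, the following Python sketch models the augmented scanning attributes enumerated above; the class and field names are our own shorthand for exposition, not the actual MSTOOL programming interface.

# Hypothetical sketch of the augmented scanning attributes described above;
# the class and field names are illustrative, not the actual MSTOOL API.
from dataclasses import dataclass, field

# scanmode values, as enumerated in the text
SCAN_MODES = {
    1: "one switch and time scanning",
    2: "two switches and time scanning",
    3: "two switches, no time scanning",
    4: "five switches in one hand, no time scanning",
}

@dataclass
class RGB:
    red: int = 0
    green: int = 0
    blue: int = 0

@dataclass
class ScanningAttributes:
    # "dm_" attributes apply in dialogue mode, "em_" in exit mode
    dm_borderstyle: str = "solid"
    dm_borderwidth: int = 2
    dm_bordercolor: RGB = field(default_factory=lambda: RGB(red=255))    # red
    em_borderstyle: str = "solid"
    em_borderwidth: int = 2
    em_bordercolor: RGB = field(default_factory=lambda: RGB(green=255))  # green
    timescan: int = 100   # scanning interval (illustrative value)
    scanmode: int = 1     # key into SCAN_MODES

button_scan = ScanningAttributes(scanmode=2)
assert SCAN_MODES[button_scan.scanmode].startswith("two switches")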


Clearly, existing tools for interface development do not account for such enhanced physical-level interaction facilities. Thus, one plausible option to design such an interface would be to simply hard-code the desired adaptations into the system. However, this solution is clearly sub-optimal for all parties concerned. A more appropriate solution is to provide tools for the design, development, implementation and run-time support of adaptability of user interface constituents at the lexical, syntactic and semantic levels of human-computer interaction.

2.2. Supporting user-adapted interaction design

To achieve this goal, a unified user interface development platform [23,22] has been designed and implemented, comprising a number of design and development tools which, amongst other things, explicitly support the notion of developer-initiated adaptation.

Adaptability, during the design phase, may address different levels of interaction (see Fig. 1). At the semantic level, adaptability involves the adaptation of content that the user interface communicates to the user. At the syntactic level, adaptability concerns the determination of suitable task structures (i.e. decomposition of user tasks) and dialogue syntax (i.e. a delete-file function may be realized either through object-function or function-object syntax) through which the semantics of the information to be communicated to the user is conveyed. Finally, at the lexical level, adaptability entails the provision of alternative interactive behaviour of the physical layer (i.e. the user interface objects, their attributes and the way in which interaction is physically handled). Fig. 1 depicts two exemplar physical interaction alternatives, namely the conventional visual desktop and a direct manipulation 3D auditory environment. It follows, therefore, that adaptability may be relevant to all three levels of interaction, depending on the requirements posed to the design team (i.e. adaptation assignments for lexical attributes of abstract physical interaction object classes).


Fig. 1. Levels of user interface adaptability.


In the present work, we have focused our initial efforts on the lexical level, as this is of primary concern with regard to accessibility. The derivation of such assignments is accomplished by focusing on design knowledge and extending the range of user interface adaptation constituents, in an attempt to support lexical user interface design through rational selection of maximally preferred design solutions.

The approach reported is inherently different from conventional efforts directed towards the automation of the design of lexical and sometimes syntactic components of a user interface. In particular, whereas in the past automatic lexical design has been approached through a cycle requiring the rapid prototyping of lexical and syntactic components of a user interface and their subsequent refinement by the designer, the present work advocates the use of high-level tools to encapsulate, reuse and ultimately automate user interface design at the lexical level. This is to be accomplished through the separation of user interface design and development concerns and the construction of appropriate software tools to deal with each class of problems respectively.

The proposed method is based on the principle that designers of user interfaces are provided with tools to elicit, represent and articulate design constraints so as to produce a collection of maximally preferred assignments to lexical attributes of interaction. Such assignments, called lexical adaptability rules, can be subsequently interpreted and applied by the run-time libraries of user interface development toolkits for the realization of the derived rules on a target development platform (see Fig. 2).

The example depicted in Fig. 2 is based on the assumption that the user interface development toolkit, as part of its Application Programming Interface (API), offers the developer the facility to utilize the externally derived lexical adaptability decisions. It is also important to note that such decisions may relate to non-trivial attributes of interaction objects (i.e. initiation feedback, interim feedback, completion feedback, navigation and topology policy of container objects, input/output device, input/output techniques and their parameters, etc.), which enhance the range of decisions that designers need to undertake. User interface development toolkits that offer such capabilities to developers have been constructed as part of a European collaborative research and development project (see Acknowledgements). It therefore follows that lexical adaptability decisions may potentially cover a wide range of lexical attributes of interaction objects, beyond the conventional defaults of a particular system.

(Fig. 2 excerpt) Device = Mouse, menu.inpTechnique = MousePick, menu.outDevice = VDU, menu.outTechnique = On_screen_Popup, ...

Fig. 2. Communication between user interface development system and lexical adaptability tools.
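The communication sketched in Fig. 2 amounts to the toolkit reading attribute assignments from an external file. A minimal Python sketch of such a reader follows; the "lhs = rhs" line format is inferred from the excerpt above, and the function and file names are hypothetical.

# Hypothetical toolkit-side reader for externally derived lexical
# adaptability decisions; format inferred from the Fig. 2 excerpt.
def load_decisions(path):
    """Parse 'a.b.c = value' lines into a dictionary of decisions."""
    decisions = {}
    with open(path) as f:
        for line in f:
            line = line.strip().rstrip(",")
            if "=" in line:
                lhs, rhs = (s.strip() for s in line.split("=", 1))
                decisions[lhs] = rhs
    return decisions

# e.g. after loading the file behind Fig. 2:
#   decisions["menu.inpTechnique"] == "MousePick"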


2.3. User interface adaptation constituents

In order to provide high-level support for user interface adaptability during the early design and development phases, a tool called USE-IT has been designed and developed. USE-IT derives and subsequently unifies design constraints pertaining to the user, the task that is to be performed with the user interface, and the platform (i.e. availability of input/output devices, interaction objects and their attributes) to compile a collection of maximally preferred adaptation assignments for the lexical level of human-computer interaction. Consequently, the adaptation constituents are attributes of abstract physical interaction object classes. These are instances of abstract interaction object classes bound to a particular interaction technology. In the recent literature [25,26], the term abstract interaction object (AIO) has been associated with several properties, briefly summarized as follows: (a) AIOs are application domain independent; (b) they encapsulate all the necessary interaction properties (i.e. appearance, placement, behaviour, state, etc.) by means of attributes (i.e. size, width, colour) and methods (i.e. selection, activation, state change, etc.); (c) they preserve a degree of independence from particular windowing systems and environments (i.e. they are platform independent).

In the context of the present work, the term is used in a broader sense to include additional properties such as the following: (i) AIOs are adaptable to the end user (i.e. their attributes can be adapted through reasoning); and (ii) AIOs are metaphor independent (e.g. an AIO, such as the Button object, can be applicable to both the Desktop and Rooms metaphors, through perhaps different realizations). An abstract interaction object, when bound to a particular interaction metaphor or lexical technology, inherits additional attributes specific to that interaction metaphor.

An adaptable physical interaction object class AIO is a relation defined as the Cartesian product of a set of attributes A1, ..., An. An adapted physical interaction object class is a specific subset of the Cartesian product of A1, ..., An, derived through reasoning. It is important to note that these definitions are logical definitions and, as such, they are bound by the underlying user interface development tool. In other words, the range of attributes of an object, as well as the possible assignments to these attributes, may differ depending on the tool used to implement the interface. Consequently, the power of such a tool lies in the level of abstraction that it supports to facilitate access to non-trivial attributes (i.e. topology, accessPolicy, navigationPolicy, initiationFeedback, interimFeedback, completionFeedback, etc.).

Example. Consider an AIO called Button, such that Button = TP × AP × ID × OD, where TP, AP, ID and OD are attributes defined as follows:

TP (TopologyPolicy) = {horizontal, vertical}
AP (AccessPolicy) = {byKeyboard, bySpeech}
ID (InputDevice) = {1switch, 2switch, 5switch}
OD (OutputDevice) = {Crt, Braille}

An adaptation decision is a plausible assignment of a value to an attribute of the abstract interaction object class Button. As such, the range of plausible adaptations of an abstract interaction object is a subset of the Cartesian product of its attributes, i.e. for our example, the range of plausible adaptations is

A ⊆ TP × AP × ID × OD.
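A small Python sketch of this example follows; it enumerates the full product TP × AP × ID × OD and carves out one hypothetical subset A, purely to illustrate the definition.

# Minimal sketch of the Button AIO's plausible adaptation space as a
# subset of the Cartesian product TP x AP x ID x OD defined above.
from itertools import product

TP = {"horizontal", "vertical"}
AP = {"byKeyboard", "bySpeech"}
ID = {"1switch", "2switch", "5switch"}
OD = {"Crt", "Braille"}

# The full space of candidate adaptations for Button:
space = set(product(TP, AP, ID, OD))
assert len(space) == 2 * 2 * 3 * 2  # 24 candidates

# A plausible-adaptation set A is any subset of this product; here a
# hypothetical filter keeps only the Braille-output options.
A = {(tp, ap, id_, od) for (tp, ap, id_, od) in space if od == "Braille"}
assert A <= space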

A maximally preferred adaptation decision for an attribute of an AIO class is determined by rational selection of the maximally preferred option from the set of plausible alternatives for this attribute. The following section elaborates on the issue of maximal preference by considering the binding of adaptability decisions to a certain context of use.

2.4. Binding adaptation of abstract interaction objects to interaction metaphors and task contexts

Two important primitive concepts which determine the rational selection of a maximally preferred adaptation decision are the prevailing metaphor of interaction and the application-specific task context. In general, interaction metaphors may either be embedded in the user interface (i.e. menus as interaction objects follow the "restaurant" metaphor) or characterize the properties and attitude of the overall interaction environment (i.e. the desktop metaphor presents the user with an interaction environment based on sheets of paper called windows, folders, etc.). In the present work, it is assumed that each development platform (e.g. OSF/Motif, MS-Windows) provides an embodiment of a particular interaction metaphor (e.g. the visual desktop). Consequently, each of those platforms provides the implementational support that is required for the interactive environment of the metaphor. Different interaction metaphors may be facilitated either through the enhancement of existing development platforms or by developing new ones. An example of the latter case is CommonKit [27], which supports non-visual interaction based on the non-visual Rooms interaction metaphor. Consequently, depending on the interaction metaphor (e.g. visual desktop, non-visual Rooms), two things typically change, namely the associated interaction object classes and their attributes, and the attribute value range or attribute adaptability range (i.e. the set of constants from which the value of the attribute may be drawn). In the present work, we have experimented with one non-visual interaction metaphor [28] and the Windows'95 embodiment of the desktop enhanced with automatic scanning facilities [24].

Task awareness is another critical technical requirement that needs to be supported explicitly. The principle behind task awareness is based on the argument that the user interface may be required to exhibit different interactive behaviour during different interaction contexts. For instance, to practically support the development of an interface which combines Desktop and Book behaviour, the developer needs constructs which are sufficiently expressive to describe when and how the properties of each metaphor are to be applied. It follows, therefore, that in the context of the present work, task awareness reflects the requirement for differentiating lexical and syntactic properties of the dialogue, depending on what the user is trying to accomplish or what the interface is attempting to convey. The work to support task-aware interface design and development involves a number of issues, which can be briefly summarized as follows. Designers and developers need to be provided with explicit constructs to specify different (desirable) states in the dialogue. Additionally, there is a need for mechanisms and techniques for binding interaction objects to different dialogue states. Through such a localization of the interactive behaviour of an object, it is made possible to practically support fusion of metaphors, provided that the underlying toolkits preserve the principle of toolkit inter-operability [29]. Thus, it will be possible to derive and assign different interactive properties to interaction objects based on the requirements of the dialogue state and the metaphor that is deemed appropriate.

To support the contextual binding of adaptation decisions to specific interaction metaphors and contexts of use, USE-IT supports the explicit representation of metaphor- and task-oriented design knowledge. This means that the designer is able to declare the type of interaction objects that are available in a particular interaction metaphor, the attributes of these objects and the interaction techniques which are supported, as well as the interaction requirements that should prevail in any adaptation decisions in a particular task context. More specifically, the translation from abstract physical interaction objects to platform-specific objects is largely determined by the context assigned to the interaction task. Parameters of the context of an interaction include the range of objects which are available, their attributes, and the sequence of tasks that the designer has declared, as well as the metaphor of interaction during this task.

2.5. User interface adaptability rules

Adaptations of lexical attributes of abstract physical interaction objects are derived in a sequence of three phases: (i) reasoning about alternative adaptations in a given context, (ii) selecting the plausible adaptations, and (iii) deciding on the optimal ones.

In the present version of USE-IT, a typical lexical adaptation rule follows the format:

(M, TC, Obj, AttrName, AttrValue)

where M is the interaction metaphor, TC is the application-specific task context [30], Obj is the abstract physical object class [27], AttrName is the object's attribute being adapted, and AttrValue is the proposed adaptation assignment. USE-IT collects such rules in an ASCII file which is subsequently utilized by high-level user interface development toolkits in order to implement the lexical user interface design on a target platform.
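The rule format can be mirrored directly in code. The following Python sketch defines the five-field tuple and writes a small lexical specification scenario to an ASCII file; the serialization layout and file name are assumptions, as the paper does not document the exact file syntax.

# Sketch of the five-field lexical adaptation rule (M, TC, Obj, AttrName,
# AttrValue); the on-disk layout below is an assumption for illustration.
from typing import NamedTuple

class AdaptationRule(NamedTuple):
    metaphor: str       # M  - interaction metaphor
    task_context: str   # TC - application-specific task context
    obj: str            # Obj - abstract physical object class
    attr_name: str      # AttrName - attribute being adapted
    attr_value: str     # AttrValue - proposed assignment

rules = [
    AdaptationRule("desktop", "visual-keyboard", "Button", "scanmode", "2"),
    AdaptationRule("desktop", "message-editor", "Button", "timescan", "100"),
]

with open("lexical_scenario.txt", "w") as out:
    for r in rules:
        out.write("(%s,%s,%s,%s,%s)\n" % r)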

3. The architecture of the USE-IT system

In order for USE-IT to accomplish its task, it comprises: (i) a representation of design elements (i.e. models of the user, the task, and the target development platform); (ii) a representation of adaptation constituents (at the lexical level of interaction); and (iii) algorithms and an inference engine to reason about alternatives, select plausible ones, and decide on maximally preferred adaptations.

Fig. 3 depicts the architecture of USE-IT. The philosophy characterizing this architecture is based on the notion of deriving maximally preferred lexical adaptability rules by eliciting and representing design constraints (pertaining to the user, the task and platform characteristics), and reasoning towards an unambiguous statement depicting lexical adaptations for attributes of abstract physical interaction object classes.


Fig. 3. The architecture of USE-IT.

3.1. Platform constraints

Constraints related to the target development platform are mainly of two types. The first category relates to the range of interaction objects that the platform offers and their attributes. The second category relates to the input/output devices and interaction techniques which have been integrated in the platform. With respect to the first class of constraints, USE-IT provides the designer with tools to build a description of the interaction elements suitable for the target application. This entails full definition of the range of adaptable constituents (i.e. interaction objects and their lexical attributes) which should be considered by the adaptation engine. Thus, the designer may declare a new interaction metaphor, including the logical interactors and their attributes. This is achieved via a context-sensitive dynamic pop-up menu which is activated by a right mouse button press on a selected node and initiates the dialogue of Fig. 4(a). At the end of this iterative phase, the designer establishes a hierarchy of object classes and attributes, such as that depicted in Fig. 4(b).

Fig. 4. Representing interaction elements of a particular platform: (a) introducing new elements, (b) example of interaction object classes in the enhanced desktop metaphor.

It is important to note that this is a useful facility, as certain interaction objects and some of their attributes have been empirically found to be inappropriate for specific user groups. For instance, when designing for a user with cognitive impairment or learning difficulties, the use of menus and pop-up windows has been found to be confusing and inappropriate [31]. In such a case, the designer needs to notify the adaptation engine that the specific interaction objects or some of their attributes should either not be considered or be constrained in terms of interactive behaviour (i.e. the initiation feedback of a menu should always be in non-speech audio), irrespective of whether or not they are supported by the target development platform. In a different instance, the designer may wish to describe the platform constraints associated with the realization of a new interaction metaphor. For instance, the designer may wish to declare the availability of CommonKit [32] so that USE-IT derives lexical adaptability decisions for the objects offered in CommonKit. It is important to note that USE-IT does not implement interaction metaphors. It only models their realization in terms of interaction objects, attributes and parameters.

The second category of platform-specific constraints relates to the availability of input/output devices. USE-IT offers another tool which allows the designer to declare which devices are available, the integrity constraints that should be satisfied, and the construction of domain-specific models of a set of input/output devices. USE-IT supports a model-theoretic view of input/output devices whereby the designer simply defines, in a declarative notation, the operational requirements of these devices. This view of input/output devices is based on the assumption that a device may be defined in terms of (a) the range of control acts that it requires, (b) the contact sites through which the device can be accessed, and (c) the human (physical) actions that it presupposes.

Following [33], a device is considered to be anything that requires human action. Thus, a device can be a single hardware device, two devices manipulated by using two hands simultaneously, a virtual device or a graphical device. From the above, it follows that the suitability of a particular input/output device for a certain user depends on the availability of human resources to match the device resource requirements, as expressed by the required control acts, contact sites and physical actions. However, these requirements may vary depending on the context of use, as this may be influenced by the environment or other constraints. Consequently, the designer should also be allowed to develop a description of the available devices in the current context of use. USE-IT supports this activity and permits the designer to reuse an existing device description or develop a new one from scratch.

This is accomplished as follows. The designer develops a representation of a particular device through the assignment of the control acts, contact sites and physical actions required for the operation of the device. Thus, for example, the keyboard may be modelled as an input device requiring movement of one hand, contact through either the fingertips of a hand, a hand-held pointer or a mouth stick, as well as physical actions such as isolation of finger movement, the ability to initiate, control and reproduce movement, etc. This is illustrated in Fig. 5, which depicts the basic functionality offered by USE-IT for constructing such device models.

It is important to note that, for a particular device, the designer may assign more than one control act and contact site, thus giving rise to disjunction in the device model. A device description such as that depicted in the tree of Fig. 5 is subsequently translated into a declarative statement depicting all alternatives for using a particular device. The resulting disjunctive clauses are stored in an internal device frame which is continuously revised as the designer builds additional knowledge. In the case where the designer considers that an existing model will suffice for the problem at hand, this model can be reused.
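The disjunctive semantics of a device frame can be sketched in Python as follows: each slot (control acts, contact sites, physical actions) lists alternative requirement sets, any one of which the user's resources must cover. The keyboard entries paraphrase the example above; all names and structures are illustrative, not USE-IT's internal format.

# Sketch of a device frame in the model-theoretic view described above;
# within a slot, alternatives are disjunctive; across slots, conjunctive.
keyboard = {
    "control_acts": [{"move one hand"}],
    "contact_sites": [{"fingertips of hand"},
                      {"hand-held pointer"},
                      {"mouth stick"}],          # any one alternative suffices
    "physical_actions": [{"isolate finger movement",
                          "initiate, control and reproduce movement"}],
}

def usable(device, user_resources):
    """True if, for every slot, the user covers at least one alternative."""
    return all(
        any(alt <= user_resources for alt in alternatives)
        for alternatives in device.values()
    )

user = {"move one hand", "fingertips of hand", "isolate finger movement",
        "initiate, control and reproduce movement"}
assert usable(keyboard, user)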

3.2. User-centred design constraints

User-centred design constraints are elicited by a User Model Acquisition Tool which enables the user interface designer to interactively construct and manipulate a user model. A user model is a collection of declarative clauses in a User Modelling Language (UML) depicting the abilities of a user in relation to the operation of alternative devices (i.e. ability to read Braille, sufficiency of tactile discrimination, etc.), rather than actual input/output devices. In addition, the need for specific interaction techniques (i.e. scanning, abbreviation expansion, etc.) is identified. In order to build user awareness into the process of selecting appropriate input/output devices, the User Model Acquisition Module requires a description of the human resource availability. This is done by declaring the control acts, contact sites and physical actions possessed by the user.

Fig. 5. Definition of device-specific functions.


Formally, a user model is a collection of declarative clauses in UML, depicting the abilities of the user for whom the interface is being designed. User characteristics are represented in UML as collections of clauses of the form

evalue(Parameter, Value)

where the value of a parameter can be Boolean or scalar. Inferential facilities are provided through a rule base (editable by the designer) comprising condition-action rules (i.e. A → B), where A is a conjunctive clause A1 ∧ A2 ∧ ... ∧ An, with Ai ∈ {x : x = evalue(Parameter, Value) ∨ control_act(Scalar) ∨ contact_site(Scalar)}. Similarly, B is a conjunctive clause B1 ∧ B2 ∧ ... ∧ Bm, with Bi ∈ {y : y = constraint(user, Attribute, Value)}.

The collection of declarative clauses evalue(Pi, Vi), contact_site(Scalar) and constraint(user, Ai, Si), together with the set of parameters P, where Pi ∈ P, the set of constants C, where Vi, Si ∈ C, and the rule base, constitute the UML which is used to describe user characteristics. Consequently, UML is a logical representation language which may be formally defined by the triad UML = (C, Pr, ℜ), where:

• C is the set of UML constant symbols, defined as the union of the parameter constants, their values and adaptation values;
• Pr = {x : x = evalue(Parameter, Value) ∨ control_act(Scalar) ∨ contact_site(Scalar) ∨ constraint(user, Attr, Value)};
• ℜ is the rule set.

Fig. 6. Building a user model.
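A minimal Python sketch of UML clauses and one condition-action rule follows; the predicate spellings are taken from the text, while the tuple encoding, the sample rule and the evaluation strategy are our assumptions.

# Sketch of UML clauses and a condition-action rule A -> B: A conjoins
# evalue/control_act/contact_site literals, B asserts constraint clauses.
user_model = {
    ("evalue", "reads_braille", True),
    ("control_act", "move_one_hand"),
    ("contact_site", "fingertips"),
}

rules = [
    # illustrative rule: a Braille reader yields an output-device constraint
    ({("evalue", "reads_braille", True)},
     {("constraint", "user", "outputDevice", "Braille")}),
]

def fire(rules, facts):
    derived = set()
    for body, head in rules:
        if body <= facts:      # all conjuncts of A hold in the user model
            derived |= head    # assert the B clauses
    return derived

assert ("constraint", "user", "outputDevice", "Braille") in fire(rules, user_model)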

A user model is constructed interactively by declaring the abilities possessed by a particular user (see Fig. 6 and Fig. 7). There are six classes of abilities, namely motor, visual, hearing, tactile, communication and learning abilities. For each one of those classes the designer can allocate a range of specific ability parameters, each time listed in the list box of the Assessment Considerations dialogue (see Fig. 6 and Fig. 7), as required. In our example, the specific parameters of motor abilities contain the identification of the user's reliable control acts and contact sites (see Fig. 6), but also the possession of functional capabilities such as the ability to push and pull, the ability to produce a control act on demand, etc. To develop a user model, the designer describes a prospective user by instantiating each one of the ability classes (i.e. motor, vision, hearing, tactile, communication and learning) and, for each one, as many of their associated parameters as required. The window entitled Human Abilities Hierarchy in Fig. 7 contains the specific abilities possessed by the user being modelled.

The underlying representation of ability classes and parameters forms a network which can be refined, updated or developed from scratch by the designer, according to the requirements of a particular scenario of use. This means that USE-IT does not operate upon predefined ability classes and corresponding parameters. Instead, USE-IT provides a shell which allows the designer to build a suitable description of the characteristic abilities which influence the current design scenario. This was necessitated mainly for two reasons. The first relates to the broad range of user characteristics that are usually needed to describe users, and which are not always known or cannot be predicted in advance. Consequently, the designer should be allowed to modify and sometimes totally redefine the contents of the knowledge base and the inferencing facilities that have been used in a particular context.

Fig. 7. Building a user model (cont.).

The second and most important reason accounts for the fact that existing assessment manuals suggest clusters of context-independent abilities. Thus, they would recommend a scanning device if the user possessed the abilities of gross temporal control, visual tracking skills, and control movements and contact sites that allow the operation of a switch. However, the switch may be perfectly appropriate in a totally different scenario (e.g. a young and computer-illiterate child using an educational software application requiring a small number of selection targets). Towards this end, USE-IT offers facilities for designers to incrementally declare new user modelling parameters and thus to expand UML as necessary (see Fig. 8). In this figure, the designer has requested an update of the current set of user parameters. The system displays the existing hierarchy of parameters per ability class (see the Human Abilities window in the background). The designer then selects to update the currently defined set of motor abilities (via a dynamic pop-up menu) and the system displays the dialogue Insert a new parameter in Fig. 8. In this dialogue, the designer defines the new parameter (i.e. maximum number of selection targets) and declares its type (i.e. scalar). This brings up the dialogue for setting up the value set for the parameter. Once this is defined, the designer may select from the main dialogue to update the corresponding hierarchy of parameters as well as the internal databases of the system.

As already mentioned, user-centred design constraints are declared by a three-argument predicate constraint(user, Constituent, Assignment). Such constraints are derived automatically by interpreting the contents of the selected device model against the current user model.

Fig. 8. Declaring new user modelling parameters.


The interpreter is a routine which translates the disjunctive semantics of a device model into a set of rules and subsequently runs these rules against the current user model. Thus, a declarative statement such as constraint(user, input_device, joystick) depicts a typical design constraint pertaining to the current user. Such constraints may be elicited for all lexical attributes of abstract interaction object classes.
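The interpreter step can be sketched in Python as follows: each device model is matched against the user's declared resources, and a constraint(user, input_device, D) clause is emitted for every operable device. The device names and requirement sets here are illustrative, not USE-IT's actual knowledge base.

# Sketch of the interpreter that turns device models plus the current user
# model into user-centred design constraints; all entries are illustrative.
DEVICE_REQUIREMENTS = {
    # device -> alternative resource sets, any ONE of which must be covered
    "joystick": [{"grasp", "move one hand"}],
    "keyboard": [{"fingertips of hand"}, {"mouth stick"}],
}

def derive_constraints(user_resources):
    constraints = []
    for device, alternatives in DEVICE_REQUIREMENTS.items():
        if any(alt <= user_resources for alt in alternatives):
            constraints.append(("constraint", "user", "input_device", device))
    return constraints

assert ("constraint", "user", "input_device", "joystick") in \
    derive_constraints({"grasp", "move one hand"})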

3.3. Task-oriented design constraints

In addition to user-centred design constraints, the designer of the interface is provided with a tool which enables the declaration of syntactic knowledge and thereby the derivation of task-oriented design constraints. Syntactic knowledge is structured around the notion of a task context. Task contexts are application-domain-specific characterizations of "dialogue states"; they may be conceived as the context of a given task in which the user is engaged at a particular time. Knowledge about each task context of a given application is collected and represented in a task context schema language. This is a representation tool which combines three types of characterizations.

First, a set of attributes which: (a) identify the type of task context (i.e. input or output); (b) classify the task context according to the primary interaction task that is performed (i.e. selection, positioning, orientation, quantification, rotation, etc.); (c) declare the application requirements during the task context depending on the previous two characterizations.

Second, the designer is allowed to specify aggregation policies, which determine the adaptation of lexical user interface constituents during a particular task context, and a set of initial preference relations, which are primarily aimed at capturing intentions, specific ergonomic guidelines or experimentally justified results that a designer may wish to convey during a particular task context (see Fig. 9).

Fig. 9. Steps in preference-based aggregation.


An aggregation policy is defined by declaring a design objective (e.g. speed of cursor movement, precision in positioning, frequency, accuracy, effective target width, final positioning time or other statistically significant measures of ergonomic quality), the relevant adaptation constituent and the associated task context. A heuristic rule derives a partial ordering of design alternatives based on the assigned design objective (i.e. if speed_of_cursor_movement(true), then prefer continuous to discrete devices). A preference ordering is a ranking of equivalence classes of alternatives with respect to a criterion C which is suggested by the quality attribute. These equivalence classes are also called indifference classes, since an agent (i.e. the system, the designer or the user) prefers alternatives in one class to those in another. That is, one indifference class ranks above another if its members are preferred to those of the other class. We adopt the notation (x, ..., y) to represent indifference classes. Thus, (x, y) means that x and y are of the same equivalence class with respect to a criterion C.

Fig. 10. Task context building tools.


The general representation scheme which is used to declare an aggregation policy is as follows:

policy(taskContextName, lexicalAttribute, boolExpression, Criterion).

An example of a declaration of an aggregation policy is the following:

policy(selection, inputDevice, speed_of_cursor_movement(true), continuous, discrete).

Finally, the designer is allowed to explicitly declare a set of preference expressions which are to be used when aggregating towards task-context-oriented design constraints. A preference expression is a four-argument predicate such as

E(taskContextName, lexicalAttribute, boolExpression, Criterion).

The predicate E may denote either a preference or an indifference relation. It is important to note that the designer is free to declare as many (if any) preference and indifference relations as may be appropriate or desirable. Moreover, it is possible to declare aggregation policies and/or preference and indifference relations applicable to all task contexts.

Fig. 11. Elicitation of task context requirements.

Fig. 12. Aggregation policy and preference profile.

To facilitate the elicitation of task-oriented design constraints based on syntactic knowledge such as the above, an inference engine has been designed which consults the task context schema and computes indifference classes per adaptation constituent, based on the specified aggregation policy and the set of initial preference and indifference relations. In the present version of the prototype, the inference engine assumes equal voting power for all preference expressions (i.e. simple majority rule for aggregation). Task-oriented design constraints follow the format of their user-oriented counterparts. The inference engine comprises a set of preference constraints which take the form of general derivation rules or integrity constraints.

Task contexts can be interactively built by the designer, using the task context building tools of USE-IT (see Figs. 10-13). The example depicted in these figures refers to our hypothetical example scenario, where the task context hierarchy for a communication aid for speech-motor impaired users is being constructed. The designer declares the intention to differentiate lexical interaction in two task contexts, namely visual-keyboard and message-editor (see Fig. 10).¹

The associated meaning of these task contexts is as follows: the user interface should exhibit different lexical behaviour depending on the current task context. Having compiled the task context hierarchy, the designer proceeds to declare the application-specific task context requirements for each task context and to develop a preference-based representation of lexical user interface constituents during each task context.

¹ The task context Any-Other may be employed by the designer to capture default behaviour during incremental design steps [34].

Fig. 13. Building task context and interaction object specific default and conditional rules.

In Fig. 11, the designer specifies the task context requirements, which entails the identification of the type of the task context (i.e. input/output), as well as the characterization of what the interface is trying to accomplish (i.e. convey a message, etc.).

In Fig. 12, the designer assigns the aggregation policy speed_of_cursor_movement(true) to the lexical constituent input-device, and subsequently provides the initial preference expressions. The resulting task context schema is as follows:

1: policy(Visual-keyboard, inputDevice, precision_in_positioning, indirect, direct)
2: preference(Visual-keyboard, inputDevice, precision_in_positioning, trackball, data_tablet)
3: indifference(Visual-keyboard, inputDevice, precision_in_positioning(true), mouse, trackball)
4: indifference(Visual-keyboard, inputDevice, precision_in_positioning(true), data_tablet, joystick)
5: indifference(Visual-keyboard, inputDevice, precision_in_positioning(true), joystick, lightpen)


In a similar manner, the designer develops representations for additional lexical constituents, such as inputTechnique, outputDevice, outputTechnique, initiationFeedback, etc., for particular task contexts. For instance, the designer might choose that, while in the task context alert, non-visual interaction methods should be preferred to visual ones when setting the value of outputDevice. In such a case, the designer would need to assign the lexical user interface constituent outputDevice to the criterion:

metaphor(nonVisual)

A heuristic rule in the knowledge base would subsequently update the corresponding task context schema with the clause:

policy(alert, outputDevice, metaphor(nonVisual), nonVisual, visual)

If additional knowledge is available regarding the relative preference between the output devices supporting non-visual interaction (i.e. Braille, speech synthesizer), then the task context schema is updated accordingly. If such knowledge is not available, then the algorithm which compiles indifference classes will derive only one class containing all possible options satisfying the assigned criterion (i.e. both Braille and speech synthesizer would be included).

Following this phase, the system recursively attempts to set the value of all lexical constituents that have been declared, deriving any missing information through the preference constraints, and subsequently compiles the indifference classes for the current attribute. In the case of our example, the indifference classes for the two constituents are:

1. for inputDevice we derive two indifference classes: <trackball, mouse> and <data_tablet, joystick, lightpen>;
2. for outputDevice there is only one indifference class, which is <Braille, speech_synthesiser>.
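These classes can be reproduced mechanically from the schema above: indifference pairs are merged into equivalence classes with a union-find pass, and the preference pair orders the resulting classes. The following Python sketch is a simplification of the inference engine under the equal-voting assumption stated earlier; it is not USE-IT's actual algorithm.

# Sketch: compile indifference classes for inputDevice from the schema
# rules above, then rank them using the declared preference pair.
indifference = [("mouse", "trackball"), ("data_tablet", "joystick"),
                ("joystick", "lightpen")]
preference = [("trackball", "data_tablet")]  # trackball preferred

parent = {}
def find(x):
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])   # path compression
    return parent[x]

for a, b in indifference:             # merge indifferent alternatives
    parent[find(a)] = find(b)

classes = {}
for x in list(parent):
    classes.setdefault(find(x), set()).add(x)

# Classes containing a preferred alternative rank first.
ranked = sorted(classes.values(),
                key=lambda c: 0 if any(p in c for p, _ in preference) else 1)
print(ranked)  # [{'mouse', 'trackball'}, {'data_tablet', 'joystick', 'lightpen'}]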

Finally, platform-oriented constraints can be built into the system by explicitly naming the input/output devices available and their relationship with the high-level design criteria (i.e. Braille is a device that supports non-visual interaction, mouse is a continuous and relative device, etc.). The resulting representation is a semantic network.

3.4. Declaration of default and conditional rules

The designer may also declare default and conditional values for certain attributes of an interaction object which cannot be set through reasoning. Fig. 13 depicts the dialogue through which default values and conditional rules are declared. A default value or a conditional rule is usually associated with a specific attribute and may have a universal scope (i.e. for all interaction object classes and all task contexts) or a bound scope (i.e. associated with a subset of interaction object classes for a subset of the task context hierarchy), which is defined by selecting the appropriate options in the first two list boxes of the Conditional rules dialogue in Fig. 13. The designer then selects the adaptation constituent and the specific parameter to which the rule is to be assigned, and selects the desired value from the assignment list box.

Having defined the scope of the rule, the designer may subsequently decide to introduce this rule either as a default (in which case the defaults radio button is pressed) or as a conditional rule. Selecting the latter option activates the lower part of the dialogue, grouped by the Conditionals control (which was initially disabled), and the designer is allowed to define the conditions subject to which the rule will be triggered. During this phase, the window Definition of conditional expression is continuously updated, so that at the end it depicts the rule in terms of its implication (i.e. assignment) as well as its conditionals (i.e. rule body). Thus, in the case of our example depicted in Fig. 13, the conditional rule asserts that timescan is to be set to 100 if and only if the user possesses the abilities specified in the body of the rule.

Defaults and conditional rules are stored in a separate knowledge base which is consulted by the adaptation engine at run-time.
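A minimal Python sketch of this default/conditional machinery follows: a conditional rule fires its assignment iff its body holds in the current user model, and a default applies otherwise. The timescan value of 100 is from the example above; the default of 150, the rule scopes and the ability names are invented for illustration.

# Sketch of default and conditional rule evaluation as described above;
# structures are illustrative, not USE-IT's knowledge-base format.
user_abilities = {"gross_temporal_control", "visual_tracking"}

conditional_rules = [
    # (scope: object, task context), attribute, value, body of required abilities
    (("Button", "visual-keyboard"), "timescan", 100,
     {"gross_temporal_control", "visual_tracking"}),
]
defaults = {("Button", "visual-keyboard", "timescan"): 150}  # invented default

def resolve(obj, ctx, attr):
    for scope, a, value, body in conditional_rules:
        if scope == (obj, ctx) and a == attr and body <= user_abilities:
            return value          # conditional rule fires
    return defaults.get((obj, ctx, attr))  # otherwise fall back to default

assert resolve("Button", "visual-keyboard", "timescan") == 100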

3.5. Using model trees to represent plausible adaptability rules

From the above discussion, it follows that lexical attributes may be adapted either through a default or conditional rule (typically the case for trivial attributes of interaction object classes) or, alternatively, through reasoning based on the three sets of design constraints described in the previous section. The latter case is used to adapt non-trivial lexical attributes of interaction whose assignment may be derived by deduction from the current state of the knowledge bases. To facilitate adaptation decisions based on the three sets of design constraints identified above, a data structure has been developed which serves the purpose of consolidating the semantics of adaptation of a particular attribute into a formal representation which allows USE-IT to decide on the maximally preferred option. This data structure is referred to as the adaptability model tree of an adaptation constituent. An adaptability model tree is attribute-specific and, once compiled, it encapsulates all plausible adaptability decisions for a particular attribute of an abstract interaction object class. Formally, an adaptability model tree is defined as follows:

Definition. Let I1 be the set of Horn clauses depicting plausible adaptability rules for an attribute Ai, resulting from the user model (e.g. the devices which can be used by the user). Also, let I2 be another set of clauses comprising adaptability rules for Ai pertaining to the particular task context (e.g. the devices which are required by a given task context). Finally, let I3 be the set of plausible adaptability rules for Ai as imposed by the system's configuration (e.g. available devices). Then the adaptability model tree of the interpretation I = {I1, I2, I3} is a tree structure such that there is a one-to-one correspondence between the branches of the tree structure and the interpretations in I = {I1, I2, I3}. The formal properties of an adaptability model tree and the applicable operators have been reported in [30].

To demonstrate the details of this data structure, as well as the semantics that it can accommodate, let us consider a hypothetical scenario. Let us assume that the attribute to be adapted is inputDevice and that the user and task-oriented constraints are as follows:

U_constraints = {keyboard, data_tablet, joystick}
T_constraints = {mouse, trackball, keyboard, data_tablet}


Let us further assume that the device availability constraints are:

DA_constraints = {mouse, trackball, keyboard, data_tablet, joystick, lightpen}

Given the above sets of constraints, the adaptability model tree for attribute inputDevice is depicted in Fig. 14. From this figure, it follows that the total number of branches in an adaptability model tree equals the number of constraint sets. In other words, each branch in the tree corresponds to a constraint set. The intersection of the three branches defines the minimal model tree which satisfies all design constraints. Thus, for the situation described in Fig. 14, the minimal model tree is defined by a set of three alternatives, namely:

MIN_Model = { input_device(keyboard),
              input_device(data_tablet),
              input_device(keyboard) ∧ input_device(data_tablet) }

Any one of the elements of this set could be a plausible adaptation for the attribute inputDevice. However, for the purposes of the present work, USE-IT decides in favour of the solution which preserves maximal multi-modality. Thus, the maximally preferred option is defined by the expression:

input_device(keyboard) ∧ input_device(data_tablet)

Consequently, the adaptability decision which is compiled for this attribute is as follows:

Metaphor.taskContext.Object.input_device = [keyboard, data_tablet]

The above procedure is applied for the adaptation of all attributes for which there is no default or preference expression in the corresponding knowledge bases.
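The computation just described amounts to intersecting the three branches of the tree and then committing to the conjunction of all surviving devices. A minimal sketch in standard Prolog, using the constraint sets of the example above (the predicate names are illustrative, not the USE-IT source):

    u_constraints([keyboard, data_tablet, joystick]).
    t_constraints([mouse, trackball, keyboard, data_tablet]).
    da_constraints([mouse, trackball, keyboard, data_tablet, joystick, lightpen]).

    % A device survives if it appears on all three branches of the tree.
    minimal_model(Devices) :-
        u_constraints(U), t_constraints(T), da_constraints(DA),
        findall(D, (member(D, U), member(D, T), member(D, DA)), Devices).

    % ?- minimal_model(M).
    % M = [keyboard, data_tablet]
    % Maximal multi-modality then commits to the conjunction of both devices.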

Fig. 14. Adaptability model tree for inputDevice.


Sample output of the USE-IT tool for our example scenario is depicted in Fig. 15. Currently, USE-IT adapts all abstract interaction object classes of an interaction metaphor assigned by the designer, for each task context of a particular user interface. Additionally, the file depicted in Fig. 15 may contain decisions for more than one interaction metaphor, if this is desired. Finally, in the current version of the prototype, the system translates the derived adaptability decisions to a particular format which is required by the underlying user interface development toolkit. This format requires that, for a particular interaction metaphor, USE-IT counts the total number of adaptability decisions derived and generates a file of records in the format (see also Appendix A):

< Task Context, Object Class, Attribute, Assignment >
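For illustration, this translation step could be sketched as follows in standard Prolog; the decision/4 facts shown mirror the extract in Appendix A, but the predicate names are hypothetical:

    decision('Run_Training_Module', 'DeskTop_FrameWindow', input_device, '1 Switch/TimeScan').
    decision('Run_Training_Module', 'DeskTop_FrameWindow', timescan, 100).

    % Emit the header (metaphor name plus decision count) followed by one
    % <Task Context, Object Class, Attribute, Assignment> record per line.
    write_scenario(Metaphor) :-
        findall(d(T, O, A, V), decision(T, O, A, V), Ds),
        length(Ds, N),
        format("~w ~w~n", [Metaphor, N]),
        forall(member(d(T, O, A, V), Ds),
               format("~w ~w ~w ~w~n", [T, O, A, V])).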

4. Experience with USE-IT

The USE-IT system has been fully implemented using the PDC Visual Prolog environment (32-bit) and runs under Windows '95.

visual_desktop 430

Run_Training_Module   DeskTop_FrameWindow   input_device           1 Switch/TimeScan
Run_Training_Module   DeskTop_FrameWindow   inputTechnique         scanning2D
Run_Training_Module   DeskTop_FrameWindow   scanmode               1
Run_Training_Module   DeskTop_FrameWindow   timescan               100
Run_Training_Module   DeskTop_FrameWindow   dm_bordercolor.red     1
Run_Training_Module   DeskTop_FrameWindow   dm_bordercolor.green   1
Run_Training_Module   DeskTop_FrameWindow   dm_bordercolor.blue    1
Run_Training_Module   DeskTop_FrameWindow   dm_borderstyle         2
Run_Training_Module   DeskTop_FrameWindow   dm_borderwidth         1
Run_Training_Module   DeskTop_FrameWindow   em_borderwidth         3
Run_Training_Module   DeskTop_FrameWindow   em_bordercolor.red     1
Run_Training_Module   DeskTop_FrameWindow   em_bordercolor.green   10
Run_Training_Module   DeskTop_FrameWindow   em_bordercolor.blue    255
Run_Training_Module   DeskTop_FrameWindow   em_borderstyle         1
Run_Training_Module   DeskTop_FrameWindow   output_device          vdu
Run_Training_Module   DeskTop_FrameWindow   outputTechnique        displayTechnique

Fig. 15. Overview of the outcome of USE-IT.


USE-IT has been used in two practical application domains, namely the construction of accessible hypermedia applications for blind users and the development of communication aids for speech-motor and language-cognitive impaired users, in the context of the TIDE ACCESS project TP1001 (see Acknowledgements). In this context, the knowledge bases of the system were developed through a questionnaire-based knowledge acquisition exercise. Subsequently, the content of these knowledge bases and the corresponding inferencing facilities were validated through dedicated workshops. In particular, the user models (including preferences, defaults and conditional rules) were populated by consulting seven users of each target category, namely blind, speech-motor and language-cognitive impaired users, i.e. a total of 21 individuals [35]. The task context schemes for each application were compiled through intensive collaboration with the application developers at different sites. During this phase, the exact task contexts, the object classes in each task context and the desired adaptation constituents were identified and agreed upon. Then the acquired knowledge was encoded into USE-IT, giving rise to specific adaptability scenarios. Finally, the generated output (i.e. a collection of adaptability decisions) was delivered back to the application developers to be embedded into user interface implementations for the target applications [36].

As an illustration, we provide in Appendix A details of the design knowledge captured and the underlying representation formalisms, as well as an extract from the file generated by USE-IT for an interpersonal communication aid application for speech-motor impaired users (the example of Section 2.1). From this extract, it follows that USE-IT generated a total of 430 adaptability decisions for specific lexical attributes of the interaction object classes used (see Section A.1.6 in the Appendix). The attributes considered for adaptation in this context are those considered important to ensure the accessibility of the user interface by the intended users. It can also be seen that for some of the appearance attributes, such as font.name, font.pointSize, font.italics, etc., USE-IT did not derive any decisions, as they were not accounted for in the adaptability scenario by the design team. This, however, need not be the general case, as USE-IT provides the facilities for encoding adaptation rationale either as default, preference or conditional rules for all lexical attributes of a platform.

Finally, it is important to mention that the USE-IT inference engine is built in such a way that it facilitates dynamic updates of the current constraint set by considering previous decisions. Thus, the value of an attribute (or a parameter of an attribute) may be set by deduction based on a previously committed assignment. In the case of the example of Section 2.1 and the corresponding design recommendations depicted in Appendix A, the attributes outputTechnique and inputTechnique were set by deduction based on the assignments made (through compilation of the adaptability model tree) for the attributes outputDevice and inputDevice respectively. On the other hand, the scanning parameters were set by default and conditional rules. Thus, in the extract depicted in Appendix A, the design recommendations derived (i.e. adaptability decisions) correspond to the different inference strategies embedded in USE-IT.
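A minimal sketch of this deduction step follows; the predicate names are hypothetical, although the implies/3 pairings mirror those visible in Appendix A:

    % Previously committed assignments, e.g. from an adaptability model tree.
    committed(input_device, '1 Switch/TimeScan').
    committed(output_device, vdu).

    % implies(PriorValue, Attribute, Value): pairings licensing further deductions.
    implies('1 Switch/TimeScan', inputTechnique, scanning2D).
    implies(vdu, outputTechnique, displayTechnique).

    % An attribute may be set by deduction from any committed assignment.
    derived(Attr, Val) :-
        committed(_, Prior),
        implies(Prior, Attr, Val).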

5. Evaluation of USE-IT

This section briefly describes how USE-IT was evaluated and summarizes the key results.


Table 1
ASQ scores

Site          1                  2            3            4
Score         1st   2nd   3rd    1st   2nd    1st   2nd    1st   2nd
Scenario 1    6.5   6     6      3     2      3     3      3     3
Scenario 2    6     6.5   6.6    2     2      3     3      2     3
Scenario 3    6     6.5   1      3     2      3     3      3     3

The technique used is metric-based and is referred to as subjective evaluation using the IBM Usability Satisfaction Questionnaires [37]. This technique measures the subjective opinion of users based on the following metrics (a computation sketch follows the list):

• ASQ: the score for a participant's satisfaction with the system for a given scenario;
• OVERALL: an indication of the overall satisfaction score;
• SYSUSE: an indication of the system's usefulness;
• INFOQUAL: the score for information quality;
• INTERQUAL: the score for interface quality.
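The CSUQ metrics are means over fixed item ranges of the 19-item questionnaire; assuming the item groupings defined in [37] (SYSUSE over items 1-8, INFOQUAL over items 9-15, INTERQUAL over items 16-18, OVERALL over all 19 items), the computation can be sketched in standard Prolog as follows:

    % Mean of items From..To of a 19-element score list.
    range_mean(Scores, From, To, Mean) :-
        findall(S, (between(From, To, I), nth1(I, Scores, S)), Sub),
        sum_list(Sub, Sum),
        length(Sub, N),
        Mean is Sum / N.

    csuq_metrics(Scores, overall(O), sysuse(SU), infoqual(IQ), interqual(ITQ)) :-
        range_mean(Scores, 1, 19, O),
        range_mean(Scores, 1, 8, SU),
        range_mean(Scores, 9, 15, IQ),
        range_mean(Scores, 16, 18, ITQ).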

The evaluation of USE-IT was planned to take place in two stages. During the first stage, the usability evaluation scenarios were presented to the partners involved, together with detailed guidance regarding the operational details of the evaluation. It is important to mention that subjective usability evaluation using the IBM questionnaires requires a scenario-based procedure. To this effect, three scenarios were developed to facilitate the evaluation. After each scenario, the user was requested to fill in the ASQ questionnaire [37], while at the end of the three scenarios, users would fill in the CSUQ questionnaire [37].

The second stage of the evaluation involved the collection and analysis of results. The evaluation was carried out by a total of nine users at four different sites. Prior to the evaluation, all users had been provided with the same instruction materials, in which brief descriptions of the scenarios were included. Following the evaluations, all users responded to a short questionnaire detailing their background, experience and familiarity with the concepts of the ACCESS project. The data collected were subsequently analysed to calculate the metrics described above. The results are summarized in Table 1 and Table 2.

Table 2
CSUQ scores

Site          1                   2             3             4
Metrics       1st    2nd   3rd    1st    2nd    1st   2nd     1st    2nd
OVERALL       5.5    6.5   6.5    2.1    2.8    3.25  3.1     3.05   3.26
SYSUSE        6.25   7     6.5    2.75   3      3     3.23    3.5    3.5
INFOQUAL      5      6.5   6      2.6    2.6    2.4   3       2.1    3
INTERQUAL     3.5    7     6.5    3      3      3     2.6     3      3


The analysis of these metrics, together with the accompanying user reports, contributed to the identification of several usability problems, which were subsequently addressed.

6. Summary and future work

This paper has described an approach which facilitates reasoning about user interface design knowledge, selecting plausible lexical-level adaptability rules for attributes of abstract interaction objects and deciding on the maximally preferred ones. The proposed approach has provided the basis for the development of a knowledge-based user interface design assistant, called USE-IT, whose purpose is to automatically generate adaptability rules at the lexical level of a user interface, according to the requirements, abilities and preferences of the user. The tool is part of a powerful user interface development environment which facilitates the construction of unified interfaces for different user groups. The current version of USE-IT is utilized to provide lexical adaptability for user interfaces of interactive applications in the domains of interpersonal communication aids for speech-motor and language-cognitive impaired users, and hypermedia applications accessible by both blind and sighted users, in the context of the ACCESS (TP1001) project of the TIDE Programme of the European Commission. Extensions are already underway to support syntactic adaptability; this is important, as there are application domains in which the user tasks and dialogue sequence may differ depending on the target user groups.

Acknowledgements

Part of this work has been carried out in the context of the ACCESS project (TP1001) funded by the TIDE Programme of the European Commission. Partners in this consortium

are: CNR-IROE (Prime Contractor), Italy; ICS-FORTH, Greece; University of Athens, Greece; RNIB, UK; SELECO, Italy; MA Systems Ltd., UK; Hereward College, UK; National Research and Development Centre for Welfare and Health, Finland; VTT, Finland; PIKO Systems, Finland; University of Hertfordshire, UK.

Appendix A. Extract from the USE-IT output for an exemplar interpersonal communication aid application

The appendix provides an indicative example of design knowledge, underlying representation and derived recommendations for one of the application demonstrators, namely the development of an interpersonal communication aid for speech-motor impaired users.


Appendix A.1. Design knowledge

Appendix A.1.1. Task contexts

Any_other
Run_Training_Module
Run_Editor_Module
Goto_Configuration
Goto_Main_No_Distance
Goto_Main_Distance
Attention_Message
Editing
Scroll_left
Scroll_right
Delete_next
Clear_Editor

Appendix A.1.2. Examples of design heuristics

heuristic("input_device", "Scanning(true)", "switch access", "indirect")
heuristic("output_device", "interaction_metaphor(visual)", "visual", "non_visual")

Appendix A.1.3. Extract from design criteria

criterion("Run_Editor_Module", "input_device", "Scanning(true)", "switch access", "indirect")
criterion("Run_Training_Module", "input_device", "Scanning(true)", "switch access", "indirect")
criterion("Run_Training_Module", "output_device", "interaction_metaphor(visual)", "visual", "non_visual")
criterion("Run_Editor_Module", "output_device", "interaction_metaphor(visual)", "visual", "non_visual")
criterion("Clear_Editor", "input_device", "Scanning(true)", "switch access", "indirect")
criterion("Clear_Editor", "output_device", "interaction_metaphor(visual)", "visual", "non_visual")

Appendix A.1.4. Extract from task context preference profile

p("Run_Editor_Module", "input_device", "Scanning(true)", "1 Switch/TimeScan", "5 Switches/NoTimeScan")
i("Run_Editor_Module", "input_device", "Scanning(true)", "1 Switch/TimeScan", "2 Switches/NoTimeScan")
i("Run_Editor_Module", "input_device", "Scanning(true)", "1 Switch/TimeScan", "2 Switches/TimeScan")

The indifference classes compiled are:

["1 Switch/TimeScan", "2 Switches/NoTimeScan", "2 Switches/TimeScan"]   % 1st indifference class
["5 Switches/NoTimeScan"]                                               % 2nd indifference class

Appendix A.1.5. Extract from application requirements declarations

apl_req("uoa_demo", "Run_Editor_Module", "input", "selection", "size of selection set", "small")

Appendix A.1.6. Interaction object classes

metaphor("visual_desktop")
object_class("visual_desktop", "DeskTop_FrameWindow")
object_class("visual_desktop", "DeskTop_Button")
has_attributes("DeskTop_FrameWindow", "general")
has_attributes("DeskTop_Button", "general")
has_attributes("DeskTop_Button", "appearence")
has_general_attribute("general", "input_device")
has_general_attribute("general", "inputTechnique")
has_general_attribute("general", "output_device")
has_general_attribute("general", "outputTechnique")
has_appearence_attribute("DeskTop_Button", "font.name")
has_appearence_attribute("DeskTop_Button", "font.pointSize")
has_appearence_attribute("DeskTop_Button", "font.italics")
has_appearence_attribute("DeskTop_Button", "font.underline")
has_appearence_attribute("DeskTop_Button", "font.strikeout")
has_appearence_attribute("DeskTop_Button", "font.orient")

Appendix A.2. Design recommendations

Appendix A.2.1. Extract from the generated lexical adaptability rules

visual_desktop 430   % Counter indicating the metaphor and the total number of adaptability decisions derived


Appendix A.2.2. Listing of rules produced per lexical attribute of object classes per task context

Run_Training_Module                          % Task context identifier
DeskTop_FrameWindow                          % Object class identifier
input_device          1 Switch/TimeScan      % Adaptability model tree
inputTechnique        scanning2D             % Deduction based on previous assignment
scanmode              1                      % Conditional rule
timescan              100                    % Conditional rule
dm_bordercolor.red    1                      % Conditional rule
dm_bordercolor.green  1                      % Conditional rule
dm_bordercolor.blue   1                      % Conditional rule
dm_borderstyle        2                      % Conditional rule
dm_borderwidth        1                      % Conditional rule
em_borderwidth        3                      % Conditional rule
em_bordercolor.red    1                      % Conditional rule
em_bordercolor.green  10                     % Conditional rule
em_bordercolor.blue   255                    % Conditional rule
em_borderstyle        1                      % Conditional rule
output_device         vdu                    % Adaptability model tree
outputTechnique       displayTechnique       % Deduction based on previous assignment

References

[1] H. Dieterich, U. Malinowski, T. Kühme, M. Schneider-Hufschmidt, State of the art in adaptive user interfaces, in: M. Schneider-Hufschmidt, T. Kühme, U. Malinowski (Eds.), Adaptive User Interfaces: Principles and Practice, North-Holland, Amsterdam, 1993, pp. 13-48.
[2] R. Oppermann, Adaptively supported adaptability, International Journal of Human-Computer Studies 40 (1994) 455-472.

[3] G. Brajnik et al., A flexible tool for developing user modelling applications, in: Proc. 3rd Int. Workshop on User Modelling (UM'92), DFKI Document, Dagstuhl, Germany, 1992, pp. 42-66.
[4] K. Lai, T. Malone, Object Lens: A spreadsheet for cooperative work, in: Proc. of the Conference on CSCW, ACM, New York, 1988, pp. 115-124.
[5] A. MacLean, K. Carter, L. Lovstrand, T. Moran, User-tailorable systems: Pressing the issues with buttons, in: CHI '90, ACM, New York, 1990, pp. 175-182.
[6] G. Robertson, D. Henderson, S. Card, Buttons as first class objects on an X desktop, in: UIST '91, ACM, New York, 1991, pp. 35-44.
[7] F. Keller, A demonstrator based investigation of adaptability, in: M. Schneider-Hufschmidt, T. Kühme, U. Malinowski (Eds.), Adaptive User Interfaces, North-Holland, Amsterdam, 1993, pp. 183-196.
[8] H.E. Sherman, E.H. Shortliffe, A user-adaptable interface to predict users' needs, in: M. Schneider-Hufschmidt, T. Kühme, U. Malinowski (Eds.), Adaptive User Interfaces, North-Holland, Amsterdam, 1993, pp. 285-315.
[9] B. De Carolis, F. de Rosis, Modeling adaptive interaction of OPADE by Petri nets, SIGCHI Bulletin 26 (2) (1994) 48-52.

[10] A.H. Cote-Munoz, AIDA: An adaptive system for interactive drafting and CAD applications, in: M. Schneider-Hufschmidt, T. Kühme, U. Malinowski (Eds.), Adaptive User Interfaces, North-Holland, Amsterdam, 1993, pp. 225-240.

[11] P. Sukaviriya, J. Foley, Supporting adaptive interfaces in a knowledge-based user interface environment, in: W.D. Gray, W.E. Hefley, D. Murray (Eds.), Proceedings of the 1993 International Workshop on Intelligent User Interfaces, Orlando, FL, ACM Press, New York, 1993, pp. 107-114.
[12] P.D. Browne, Experiences from the AID project, in: M. Schneider-Hufschmidt, T. Kühme, U. Malinowski (Eds.), Adaptive User Interfaces, North-Holland, Amsterdam, 1993, pp. 69-78.
[13] K. Okada, Adaptation by task intention identification, in: FRIEND 21 Conference Proceedings, Japan, 1995.

[14] S. Zimek, Design of an adaptable/adaptive UIMS in production, in: H.-J. Bullinger (Ed.), Human Aspects in Computing: Design and Use of Interactive Systems and Work with Terminals, Elsevier, Amsterdam, 1991, pp. 748-752.
[15] F. Arcieri, P. Dell'Ommo, E. Nardelli, P. Vocca, A user modeling system, in: H.-J. Bullinger (Ed.), Human Aspects in Computing: Design and Use of Interactive Systems and Work with Terminals, Elsevier, Amsterdam, 1991, pp. 440-447.
[16] T. Finin, GUMS: A general user modeling shell, in: A. Kobsa, W. Wahlster (Eds.), User Models in Dialogue Systems, Springer, Berlin, 1989, pp. 411-430.
[17] J. Kay, The UM toolkit for reusable, long-term user models, User Modeling and User-Adapted Interaction 4 (3) (1995) 149-196.
[18] G. Brajnik, C. Tasso, A shell for developing non-monotonic user modeling systems, International Journal of Human-Computer Studies 40 (1994) 31-62.
[19] A. Kobsa, W. Pohl, The user modeling shell system BGP-MS, User Modeling and User-Adapted Interaction 4 (2) (1995) 59-106.
[20] H. Vergara, PROTUM: A Prolog based tool for user modeling, Bericht Nr. 55/94 (WIS-Memo 10), University of Konstanz, Germany, 1994.
[21] L.J. Orwant, Heterogeneous learning in the Doppelganger user modeling system, User Modeling and User-Adapted Interaction 4 (2) (1995) 107-130.
[22] C. Stephanidis, A. Savidis, D. Akoumianakis, Towards user interfaces for all, in: The European Context for Assistive Technology: Conference Proceedings of the 2nd TIDE Congress, 1995, pp. 167-170.
[23] C. Stephanidis, Towards user interfaces for all: Some critical issues, in: Proceedings of HCI International '95 Conference on Human Computer Interaction, Elsevier, Amsterdam, 1995, pp. 137-143.
[24] A. Savidis, G. Vernardos, C. Stephanidis, Embedding scanning techniques accessible to motor-impaired users in the Windows object library, to appear in: Conference Proceedings of HCI International '97, San Francisco, CA, 1997.
[25] B.A. Myers, A new model for handling input, ACM Transactions on Information Systems 8 (3) (1990) 289-320.
[26] J. Vanderdonckt, F. Bodart, Encapsulating knowledge for intelligent automatic interaction object selection, in: Proc. INTERCHI '93, Amsterdam, ACM, New York, 1993, pp. 424-429.
[27] A. Savidis, C. Stephanidis, Developing non-visual interaction on the basis of the Rooms metaphor, in: Companion of CHI '95 Conference on Human Factors in Computing Systems, ACM Press, New York, 1995, pp. 146-147.
[28] A. Savidis, C. Stephanidis, A. Korte, K. Crispien, K. Fellbaum, A generic direct manipulation 3D auditory environment for hierarchical navigation in non-visual interaction, in: Proceedings of the 2nd ACM Conference on Assistive Technologies, Vancouver, Canada, ACM Press, New York, 1996, pp. 117-123.
[29] C. Stephanidis, A. Savidis, Towards multimedia interfaces for all: A new generation of tools supporting integration of design-time and run-time adaptivity methods, in: NSF/MULTIMEDIA '95 Workshop on Adaptive Multimedia Technologies for People with Disabilities, San Francisco, CA, 10 November 1995.
[30] D. Akoumianakis, A. Savidis, C. Stephanidis, An expert user interface design assistant for deriving maximally preferred lexical adaptability rules, in: J.K. Lee, J. Liebowitz, Y.M. Chae (Eds.), Critical Technology: Conference Proceedings of the Third World Congress on Expert Systems (WCES-96), Seoul, Korea, 1996, vol. II, pp. 1298-1315.
[31] C. Stephanidis, A. Paramythis, A. Koumbis, Milestone 2.3.2: Design of appropriate interaction techniques for speech-motor and language-cognitive impaired users, ACCESS Consortium, ICS-FORTH, 1995.
[32] A. Savidis, C. Stephanidis, Developing dual user interfaces for integrating blind and sighted users: The HOMER UIMS, in: Proceedings of CHI '95 Conference on Human Factors in Computing Systems, Denver, CO, ACM Press, New York, 1995, pp. 106-113.

[33] T. Bleser, J. Sibert, Toto: A tool for selecting interaction techniques, in: UIST '90, ACM, New York, 1990, pp. 135-142.
[34] D. Akoumianakis, A. Savidis, C. Stephanidis, Design assistance for user-adapted interaction, in: F. Bodart, J. Vanderdonckt (Eds.), Conference Proceedings of the 3rd Eurographics Workshop on the Design, Specification and Validation of Interactive Systems (DSV-IS '96), Namur, Belgium, 1996, pp. 129-143.
[35] D. Akoumianakis, C. Stephanidis, Internal report on user knowledge acquisition, ACCESS Consortium, ICS-FORTH, Heraklion, Crete, 1996.
[36] A. Savidis, D. Akoumianakis, A. Paramythis, P. McNally, M. Koskinen, C. Stamatis, Internal report on the progress of embedding design into implementation for the ACCESS demonstrators, ACCESS Consortium, ICS-FORTH, Heraklion, Crete, 1996.
[37] J.R. Lewis, IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use, International Journal of Human-Computer Interaction 7 (1) (1995) 57-78.