
Determining evaluation criteria for digital libraries’ user interface: a review

Nadjla Hariri and Yaghoub Norouzi
Science and Research Branch, Department of Library and Information Science, Islamic Azad University, Tehran, Iran

Abstract

Purpose – The present study aims to review the literature concerning digital libraries (DLs) and user interfaces in order to identify, determine, and suggest evaluation criteria for a DL user interface. Accordingly, this study’s objectives are threefold: explore which criteria exert a significant relationship with the DLs user interface; identify a set of criteria that appears to be useful for evaluating the DLs user interface; and determine the evaluation criteria that occur most frequently in the texts reviewed.

Design/methodology/approach – To this end, it was first necessary to identify related texts. Consequently, keywords such as “DLs user interface evaluation”, “DLs user interfaces”, “DLs evaluation”, “DLs usability”, “user interface evaluation”, “DLs research”, “web sites user interface evaluation”, “user interface standards”, and the like were searched on the web as well as in some leading databases including Emerald, ProQuest, SagePub, ScienceDirect, LISA, ERIC, ACM, and Springer. After identifying and accessing more than 100 evaluative works and some related articles, theoretical and empirical, nearly 50 sources were chosen for final examination.

Findings – After reviewing the related texts, three major categories were identified: user interface and DLs; DLs and usability; and other studies related to user interface. Each of the three identified categories has its own subcategories. Additionally, 22 evaluation criteria for assessing the DL interface were identified.

Research limitations/implications – The review does not claim to be comprehensive.

Practical implications – Criteria such as feedback, ease of use, match between system and the real world, customization, user support, user workload, interaction, compatibility, visibility of system status, user experience, flexibility, and accessibility, which have so far received less attention, should be applied more in future, particularly user-oriented, studies. Furthermore, it is expected that the criteria mentioned here could help related bodies pay more attention to the evaluation of EISs, especially the DL interface.

Originality/value – It can be said that this study has contributed to the research into the evaluation of the DL interface.

Keywords Digital libraries, Evaluation criteria, User interfaces

Paper type Literature review

Acknowledgements: The authors offer special thanks to their colleagues, especially Dr Alireza Isfandyari-Moghaddam for his cooperation. The authors would also like to extend their appreciation to the anonymous reviewers of the article for their helpful comments.

Introduction

Digital libraries (DLs) have exploded onto the scene in the past few years. Numerous research and practical efforts and large resources are expended on DL research and practice (Saracevic, 2000, p. 366). In other words, interest in the research and development of DLs has grown rapidly, with the appearance of special journal issues, conferences and workshops, as well as new print and on-line journals (Borgman, 1999; quoted in Thong et al., 2002). On the other hand, the DL is considered a hot topic and an exciting area for research, whether empirical or theoretical, because of its relative novelty and explosive as well as phenomenal growth (Isfandyari-Moghaddam and Bayat, 2008). However, Xie (2006) declares that, while DL research has developed over the past decade, little has been done on the identification of evaluation criteria. Similarly, as emphasized by Mabe (2002), in DL research, evaluation is an essential requirement for answering the important questions “what is a good DL?” or “how can we make DLs better?” There is little agreement about the definition of a DL or about how such a DL might be evaluated. According to the related literature, this article contends that little has been done on the identification of evaluation criteria, especially concerning the DLs user interface. This point is also supported by Isfandyari-Moghaddam and Bayat (2008, p. 855), who highlighted that legal issues, social issues, standards, metadata, user interfaces, information retrieval, management of intellectual and digital rights, and interoperability are among the DL fields which need more attention and investigation.

With regard to the DL user interface, over ten years ago Arms (2002) maintained that during the past few years the quality of user interface design had improved dramatically. It is now assumed that new users can begin productive work without any training. Most importantly, there are now numerous examples of fine interfaces on the internet which others can use as models and inspiration. Standards of graphical design get better every year. Good support for users is more than a cosmetic flourish. Elegant design, appropriate functionality, and responsive systems make a measurable difference to the effectiveness of DLs. When a system is hard to use, users may fail to find important results, may misinterpret what they do find, or may give up in disgust believing that the system is unable to help them. A DL is only as good as the interface it provides to its users. In fact, from Arms’ perspective, user interfaces strongly affect the effectiveness of DLs.

On the importance of the user interface, Jeng (2005) posits that the interface is one of the most important aspects of usability, as it is the medium through which users communicate and interact with the system. Moreover, the importance of the user interface has been widely stressed in other studies, including Kim (2002) and Kani-Zabihi et al. (2006). In relation to evaluating user interfaces, researchers such as Nielsen (1993b) and Shneiderman (1998) have offered some criteria, which are used in the present article as a basis for evaluation of the DL interface. But, in the context of DLs, finding evaluation criteria suitable for citation here seems difficult. Hence, this study aims to consult the literature concerning DLs and user interfaces in order to identify, determine, and suggest evaluation criteria for the DLs user interface.

Accordingly, this study’s objectives are threefold:

(1) explore which criteria exert a significant relationship with the DLs user interface;

(2) identify a set of criteria that appears to be useful for evaluating the DLs user interface; and

(3) determine evaluation criteria that have more frequency and occurrence in the related texts reviewed.


As a matter of fact, this research intends to answer two main questions, posed below:

RQ1. Which criteria, identified in the literature, have been used for evaluating the DLs user interface?

RQ2. Which criteria, used in the literature, have greater frequency (importance)?

It can be said that this literature review attempts to meet some of Saracevic’s (2000, 2004) aims, mentioned in the form of a set of guidelines for evaluation of DLs, in which five dimensions were indicated: “construct for evaluation”, “context of evaluation”, “criteria reflecting performance as related to selected objectives”, “measures reflecting selected criteria to record the performance”, and “methodology for doing evaluation”.

Study methodology

This study is essentially a literature review done by means of the content analysis method. To this end, it was first necessary to identify related texts. Consequently, keywords such as “DLs user interface evaluation”, “DLs user interfaces”, “DLs evaluation”, “DLs usability”, “user interface evaluation”, “DLs research”, “web sites user interface evaluation”, “user interface standards”, and the like were searched on the web as well as in some leading databases including Emerald, ProQuest, SagePub, ScienceDirect, LISA, ERIC, ACM, and Springer. After identifying and accessing more than 100 evaluative works and some related articles, theoretical and empirical, nearly 50 sources were chosen for final examination. It should be mentioned that some of the 100 evaluative works were used to write parts of the article, especially its theoretical and background foundations, while the nearly 50 works concerning the user interface, particularly in the field of DLs, were kept for further review. Then, main themes were extracted and three major categories were identified to provide the focus of the article. These categories are as follows:

(1) user interface and DLs;

(2) DLs and usability; and

(3) other studies related to user interface.

In fact, this categorization is based on the authors’ perspective which, in turn, has been formed according to the frequency of DL resources studied in this article. It is notable that these three categories, in turn, are divided into some subcategories (Figure 1). In other words, the identified subcategories are indicative of the current situation and orientation of research topics and priorities regarding digital libraries.

Research findings

As illustrated in Figure 1, each of the three identified categories – “user interface and DLs”, “DLs and usability”, and “other studies related to user interface” – has its own subcategories.

In order to find an answer to RQ1, “which criteria, identified in the literature, have been used for evaluating DLs user interface?”, the content of the available texts was analyzed on the basis of the identified main and subcategories. It is important to note that the evaluation criteria extracted from the analysis are included in Tables I-III. Here, the sources used to develop the article are discussed according to the categorization given in Figure 1.
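
As a simple illustration of how the frequency counts behind RQ2 can be derived from such a content analysis, the following minimal sketch (in Python) tallies criteria across sources; the mapping shown is a small assumed subset of the roughly 50 reviewed works, not the full dataset.

    from collections import Counter

    # Criteria extracted per source during the content analysis; this is
    # an assumed subset of the reviewed works (see Tables I-III for the
    # full extraction), shown here only to illustrate the tally.
    extracted_criteria = {
        "Baldacci et al. (1999)": ["searching", "navigation", "language"],
        "Park (2000)": ["searching", "design", "ease of use",
                        "learnability", "feedback", "user control"],
        "Thong et al. (2002)": ["design", "navigation", "language"],
        "Xie (2006)": ["searching", "navigation", "guidance", "presentation"],
    }

    # Tally how often each criterion occurs across the sources (RQ2).
    frequency = Counter(
        criterion
        for criteria in extracted_criteria.values()
        for criterion in criteria
    )

    for criterion, count in frequency.most_common():
        print(f"{criterion}: {count}")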


User interface and DLs

In addition to the relationship between user interface and DLs and the importance of the user interface in the success or effectiveness of any DL, discussed above (see Introduction), we can also refer to Saracevic (2000), who proposes seven general classes or levels of evaluation of DLs: social level, institutional level, individual level, interface level, engineering level, processing level, and content level. He adds that at the interface level the objective is to assess how well a DL interface supports access, searching, navigation, browsing, and interactions with a DL.

Table I. Identified criteria relating to category “user interface and DLs”

Baldacci et al. (1999): searching, navigation, language
Park and Lim (1999): guidance, user control, consistency, error management, compatibility, feedback, suitability for the task, user workload
Park (2000): searching, design, ease of use, learnability, feedback, user control
Peng et al. (2004): visibility of system status, match between system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, error management, help and documentation
Thong et al. (2002): design, navigation, language
Dorner and Curtis (2003, 2004): searching, user interaction, library customization, authentication, design, database communication protocols
Ramayah (2006): terminology, screen design and navigation
Fox et al. (1993): representation (searching), architecture (design), and interfacing (user support)
Oliveira et al. (1999): searching, navigation, presentation
Marchionini et al. (1998): searching, navigation, guidance (help), presentation, consistency
Hill et al. (2000): searching, navigation, guidance (help), presentation

[Figure 1. Main and sub categories identified for evaluating DLs user interface]


Evaluation. Hill et al. (2000) evaluated the user interface of the Alexandria DL (ADL) in order to design the system, particularly its user interface, based on users’ viewpoints. In a word, they made use of the effect of user evaluation on the evolution of the ADL. Peng et al. (2004) reported the results of a heuristic-based user interface evaluation of the gateway to electronic media services (GEMS) system at Nanyang Technological University (NTU) in Singapore. Their research was based on Nielsen’s (1994) ten user interface heuristics (listed below; a sketch of how such an inspection can be recorded follows the list):

(1) visibility of system status;

(2) match between system and the real world;

(3) user control and freedom;

(4) consistency and standards;

(5) error prevention;

(6) recognition rather than recall;

(7) flexibility and efficiency of use;

(8) aesthetic and minimalist design;

(9) help users recognize, diagnose and recover from errors; and

(10) help and documentation.
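
As an illustration of how a heuristic-based inspection of this kind can be recorded, the sketch below (Python) represents the ten heuristics as a checklist with severity-rated findings; the 0-4 severity scale follows Nielsen’s convention, and the example finding is hypothetical rather than taken from the GEMS study.

    from dataclasses import dataclass

    NIELSEN_HEURISTICS = [
        "visibility of system status",
        "match between system and the real world",
        "user control and freedom",
        "consistency and standards",
        "error prevention",
        "recognition rather than recall",
        "flexibility and efficiency of use",
        "aesthetic and minimalist design",
        "help users recognize, diagnose and recover from errors",
        "help and documentation",
    ]

    @dataclass
    class Finding:
        heuristic: str    # one of NIELSEN_HEURISTICS
        description: str  # what the evaluator observed
        severity: int     # 0 (not a problem) to 4 (usability catastrophe)

    # Hypothetical finding of the kind such an evaluation might record.
    findings = [
        Finding("visibility of system status",
                "no progress indicator while a catalogue search runs", 3),
    ]

    # Group the findings by heuristic for the evaluation report.
    report = {h: [f for f in findings if f.heuristic == h]
              for h in NIELSEN_HEURISTICS}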

Design. Marchionini et al. (1998) carried out a collaborative effort to explore user needs in a DL, develop interface prototypes for a DL, and suggest and prototype tools for digital librarians and users at the Library of Congress (LC). The interfaces were guided by an assessment of user needs and aimed to maximize interaction with primary resources and to support both browsing and analytical search strategies.

Oliveira et al. (1999) proposed a framework for designing and implementing the user interface of a geographic DL. It allows the design and construction of customizable user interfaces for applications based on geographic DLs (GDLs). The framework, which relies on two main concepts (i.e. a geographic user interface architecture and a geographic DL model), is open, customizable, and takes into consideration the various aspects of user interaction with geographic data.

Ramayah (2006), emphasizing that interface characteristics – terminology, screen design and navigation – have a strong impact on perceived ease of use and its subsequent effect on the intention to use an online library, examined the interface characteristics of the online library of Universiti Sains Malaysia (USM). This study empirically proved the impact of interface characteristics on perceived ease of use and its subsequent effect on the intention to use an online library.

Interaction. Fox et al. (1993), aiming to build a user-centered prototype DL and covering issues of representation, architecture, and interfacing, carried out a survey. Finally, they declared that their investigations and those of others promise to move us closer to an era of comprehensive, integrated, worldwide DLs.

The study performed by Thong et al. (2002) contributes to understanding user acceptance of DLs by utilizing the technology acceptance model (TAM). Three system interface characteristics, three organizational context variables, and three individual differences are identified as critical external variables that affect adoption intention through perceived usefulness and perceived ease of use of the DL. Data were collected from 397 users of an award-winning DL. The findings show that both perceived usefulness and perceived ease of use are determinants of user acceptance of DLs. In addition, interface characteristics (in terms of terminology clarity, screen design, and navigation clarity) and individual differences positively affect perceived ease of use, while organizational context influences both perceived ease of use and perceived usefulness of DLs.

Common user interface. Baldacci et al. (1999) carried out a study on designing a common user interface for the European Consortium for Informatics and Mathematics. In order to address “how to support effective interaction of users with heterogeneous and distributed information resources”, Park (2000) compared usability, user preference, effectiveness, and searching behaviors in systems that implement interaction with multiple databases through a common interface and through an integrated interface. According to the findings, subjects were more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface.

In a study entitled “A comparative review of common user interface software products for libraries”, Dorner and Curtis (2003) comparatively evaluated some common user interface software products. The data for this research were obtained through two surveys:

(1) a survey sent to common user interface software vendors to collect information about their products; and

(2) a survey sent to a selection of New Zealand librarians who were asked to rate common user interface features.

As part of this research, 79 common user interface software features were identified and classified into groups of like features, which resulted in eight broader categories:

(1) searching;

(2) user interaction;

(3) customization;

(4) authentication;

(5) design;

(6) database communication protocols;

(7) after sale support; and

(8) software platforms supported.

Evaluation criteria based on these features were used in both surveys. In fact, the results of the survey of librarians provided insight into the level of need for common user interface software products in New Zealand libraries. Finally, they concluded that, as almost all of the evaluation criteria were rated highly by the librarians consulted, it can only be hoped that all of these criteria will become standard in the near future and that common user interface software products will be incorporated into more New Zealand libraries. It is notable that a report of their survey was published in 2004.


From what has already been said, we can compactly illustrate the important issues and criteria relating to “user interface and DLs” in the form of Table I.

As can be seen in Table I, the criteria “searching”, “design”, and “navigation” have been used more frequently than the other criteria identified in the related texts.

DLs and usability

Most DL evaluation studies are mainly usability studies. In other words, the majority of research on DL evaluation focuses on how users use a DL – essentially usability studies (Xie, 2006). Van House et al. (1996) suggest that the usability of DLs depends on three key components:

(1) content;

(2) functionality; and

(3) the user interface.

According to Jeng (2005, p. 96), usability is multidimensional; in the literature it has been used broadly and means different things to different people. Usability is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (ISO, 1994). Usability has its theoretical base in human-computer interaction; many usability studies focus on interface design (Jeng, 2005), and usability is thus affected by the user interface (Roy et al., 2001). Accordingly, this article discusses the usability of DLs and some related works. Because the criteria used to evaluate DL usability are generally linked to the user interface, the related issues extracted from the literature are indicated below.
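
As a rough illustration, the three components of the ISO (1994) definition are often operationalized from task-level test data – completion rate for effectiveness, time on task for efficiency, and a rating scale for satisfaction. The sketch below (Python) shows one such operationalization; the record fields and the 1-5 rating scale are assumptions for illustration, not measures taken from the reviewed studies.

    from statistics import mean

    # One record per (user, task) pair from a usability test; the field
    # names and the 1-5 satisfaction scale are illustrative assumptions.
    records = [
        {"completed": True,  "seconds": 95,  "satisfaction": 4},
        {"completed": True,  "seconds": 140, "satisfaction": 3},
        {"completed": False, "seconds": 300, "satisfaction": 2},
    ]

    # Effectiveness: share of tasks completed successfully.
    effectiveness = mean(r["completed"] for r in records)

    # Efficiency: mean time on task, counting completed tasks only.
    efficiency = mean(r["seconds"] for r in records if r["completed"])

    # Satisfaction: mean subjective rating across all participants.
    satisfaction = mean(r["satisfaction"] for r in records)

    print(f"effectiveness={effectiveness:.0%}, "
          f"efficiency={efficiency:.0f}s, satisfaction={satisfaction:.1f}/5")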

Evaluation. Shneiderman (1987; quoted in Reeves, 2003, in the book “Evaluating DLs: a user-friendly guide”) maintains that the usability of any type of computer program is determined by a combination of five user-oriented characteristics:

(1) ease of learning;

(2) high speed of user task performance;

(3) low user error rate;

(4) subjective user satisfaction; and

(5) user retention over time.

With reference to DLs, this means that:

. learning how to access the resources in a DL should be intuitive or easy to learn;

. finding a desirable resource should take minimal time;

. errors of omission (not finding what the user wants) or commission (finding the wrong things) should be rare;

. searching should be a pleasant and rewarding experience; and

. returning to the DL within a reasonable time should not require learning the user interface all over again.

He adds that usability is about much more than the “look and feel” of the DL. The interface of a DL should communicate its functions and navigational structure to new users with a minimum of “cognitive overload”.


Quijano-Solís and Novelo-Peña (2005) carried out a survey aimed at exploring, describing and explaining some of the usability characteristics in DL evaluation in the Mexican context. The study is framed in the evaluation of a multinational and monolingual DL: the Miguel de Cervantes Virtual Library, from the University of Alicante in Spain. The evaluators were Mexican “expert” users (i.e. Spanish-speaking professional university librarians specialized in electronic reference services), who were asked to apply an evaluation instrument based on usability criteria taken from models in developed countries. Results showed that most users refer to DLs on the basis of their own subject field.

Goh et al. (2006), noting that many open source software packages are available for organizations and individuals to create DLs but that a simple-to-use instrument to evaluate these DL software packages does not exist, tried to develop a checklist for DL evaluation and applied it to four DL software packages. To do so, features that characterize “good” open source DL software were determined from the literature. Essential categories of features that DL software should possess were identified first. These categories were then decomposed into supporting features. From these, a checklist consisting of 12 categories of items covering all such features was developed. The checklist was then used to evaluate four popular open source DL software packages (CDSware, EPrints, Fedora, and Greenstone) to assess their suitability for use in a DL project to be undertaken by the authors. Using the checklist, Greenstone was found to be the best performer, followed by CDSware, Fedora and EPrints. Greenstone was the only software package that consistently fulfilled the majority of the criteria in many of the checklist categories. In contrast, EPrints was the worst performer due to its poor support for certain features deemed important in the checklist, and a total absence of functionality in other categories.

Jose (2007) evaluated the DL at the Management Development Institute (India). The objective of the DL was to provide single-window access to the various online resources and services offered by the library. A formal evaluation was conducted in mid-2006, a year after the DL was established. Various methodologies were used to obtain both quantitative and qualitative data for the evaluation. The following methods were used for the different categories of users (a sketch of the kind of log tally used for the quantitative part follows the list):

. informal interviews with faculty;

. group sessions for researchers;

. questionnaire for students; and

. transaction log analysis for collecting quantitative data.
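
As a minimal sketch of the kind of transaction-log tally used for the quantitative part of such an evaluation, the Python fragment below counts resource use per user group; the simplified log rows, group labels and resource names are assumptions for illustration, not data from the study.

    from collections import Counter

    # Assumed simplified log rows: (user group, resource accessed).
    log = [
        ("faculty", "online journals"),
        ("students", "e-books"),
        ("faculty", "online journals"),
        ("researchers", "databases"),
    ]

    # Usage pattern per group rather than per individual, as in the study.
    usage_by_group = Counter(log)
    for (group, resource), hits in usage_by_group.most_common():
        print(f"{group:<12} {resource:<16} {hits}")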

Instead of individuals, the study was limited to studying the usage patterns of various groups, such as faculty and researchers, and the results of the evaluation provided useful insights into the usage patterns of these groups. A few interesting findings of the study were the low level of awareness of the various resources available in the DL, and the preference for using the same resources repeatedly instead of other resources of a similar nature. The results of the evaluation were used in:

. Collection development policy – it was found that the majority of users prefer using online journals instead of the print version. Hence, a decision was taken to subscribe to online journals wherever possible.

. DL user interface re-design – the majority of users preferred a simple and easy-to-access DL interface.


. User education – a large percentage of users were unable to use the various resources/databases available in the DL due to lack of training. Thus, many training programs were organized for such users to make them comfortable with the DL.

. Internal marketing – the evaluation confirmed that there was a low level of awareness of the DL in general, and of the various services/content available in it, amongst many sections of users. So, appropriate steps were taken to increase awareness of the DL.

Design. Kling and Elliott (1994), in a study entitled “DL design for usability”, discussed two forms of DL usability – interface and organizational. In their opinion, the interface usability dimensions are:

. Learnability – ease of learning such that the user can quickly begin using the system.

. Efficiency – ability of the user to use the system with a high level of productivity.

. Memorability – capability of the user to easily remember how to use the system after not using it for some period.

. Errors – the system should have a low error rate, with few user errors and easy recovery from them, and no catastrophic errors.

And the organizational usability dimensions include:

. Accessibility – ease with which people can locate specific computer systems and gain physical and electronic access to their electronic corpuses. This dimension refers to both physical proximity and administrative/social restrictions on using specific systems.

. Compatibility – level of compatibility of file transfers from system to system.

. Integrability into work practices – how smoothly the system fits into a person’s or group’s work practices.

. Social-organizational expertise – the extent to which people can obtain training and consulting to learn to use systems and can find help with problems in usage.

They believe that computer systems development communities, including the DL design community, usually have some consensus (cultural models) about the role of usability in their development process. Hence, they discussed five typical models of computer systems design that are treated as cultural models when they are taken for granted within a professional community as the way to design all systems. Accordingly, they proposed a new model that incorporates “design for usability” principles into system design.

Ferreira and Pithan (2005), considering areas such as human-computer interaction (HCI) (usability studies, in particular) and information science (IS) (especially studies about users’ needs and behavior in information search and use), tried to integrate concepts and techniques from these two areas, that is, to analyze the usability of the InfoHab DL, taking as a theoretical base the constructivist model of user study proposed by Kuhlthau (1991) and the usability criteria established by Nielsen (2003). In order to do so, a qualitative study was developed with six users with different levels of academic background and experience in the use of retrieval systems. Data were collected through personal interviews, a prototype of the library, direct observation, and image and sound records. The variables of this study included the following criteria: learnability, efficiency and effectiveness of the DL, management of errors, memorability, and the user’s satisfaction from the perspective of the cognitive and affective aspects and the actions taken by users during the information search process. Finally, they stated that the results are evidence of the possible synergy between the HCI and IS fields.

Madle et al. (2006), over a five-year period, evaluated the use of the National electronic Library of Infection (NeLI) – a freely available portal to key evidence and guidelines in the infectious disease field in the UK – to identify navigation patterns to improve the structure of NeLI; to determine user satisfaction with the quality and content of information on NeLI and the value placed on RAs to help promote their development; to identify gaps in content; and to find out information about users, such as clinical interests and demographic information. To do this, they combined qualitative and quantitative evaluation using the results of web access log analysis, free-text search query analysis and an online user survey (online questionnaire). In the end, they declared that five years of evaluating the use of the NeLI had helped redesign the DL and taught them some valuable lessons for future evaluation methods to be employed for NeLI or any other medical DL. They noted that they did not consider evaluation of NeLI a one-off activity but believed it was a continuous process to ensure that the library remains the key resource of information on infectious diseases.

Usability. DLs must reach out to users from all walks of life, serving information needs at all levels. To do this, they must attain high standards of usability for an extremely broad audience. Central to their evolution have been user studies, analysis of use patterns, and formative usability evaluation. Based on such a view, France et al. (1999) extrapolated that all three of these components are necessary in the production of successful DL systems. They continue that libraries have come to exist in response to needs in human communities, and DLs are no exception. DL components – tools, frameworks, and collections – may grow out of a conception by providers, but DLs as a whole will thrive or wither only as they serve or fail to serve their user communities. Their experience with MARIAN – an innovative search system suitable for DLs that has served as an alternative catalogue for the main Virginia Tech library – serves to validate this proposition. France et al. found that user behavior is not always predictable, but that tracking it carefully can contribute to a usable product. They also found that it matters not at all how good a search system you have if the users cannot find the “Do Search” button.

In a viewpoint paper, Blandford and Buchanan (2003) posit that if DLs are to achieve their full potential, they need to be usable and used – by people for whom information retrieval is not generally the main goal. Therefore, in the paper, they outlined various views of “usability” and how they apply specifically to DLs. Emphasizing that there are great challenges in integrating user perspectives with technical developments – in terms of understanding those user perspectives, developing design processes that adequately accommodate them, and ensuring adequate communication between all stakeholders in design – they concluded that, in the short term, checklists and guidelines may help in the development and implementation of systems that work with users rather than against them. In the longer term, a deeper understanding of user behaviors and user needs, and user-oriented design techniques, will be necessary.


Jeng (2005) developed and evaluated methods and instruments for assessing the usability of DLs. In her research, she discusses the dimensions of usability, the methods that have been applied in evaluating the usability of DLs, their applicability, and criteria. The study also found that there exists an interlocking relationship among effectiveness, efficiency, and satisfaction. Table IV demonstrates various perspectives on the attributes of usability, adapted from Jeng (2005).

Table IV. Attributes of usability

Booth (1989): usefulness, effectiveness, learnability, attitude
Brinck et al. (2002): functionally correct, efficient to use, easy to learn, easy to remember, error tolerant, and subjectively pleasing
Clairmont et al. (1999): successfully learn and use a product to achieve a goal
Dumas and Redish (1993): perform tasks quickly and easily
Furtado et al. (2003): ease of use and learning
Gluck (1997): useableness, usefulness
Guillemette (1995): effectively used by target users to perform tasks
Hix and Hartson (1993): initial performance, long-term performance, learnability, retainability, advanced feature usage, first impression, and long-term user satisfaction
ISO (1994): effectiveness, efficiency, satisfaction
Kengeri et al. (1999): effectiveness, likeability, learnability, usefulness
Kim (2002): interface effectiveness
Nielsen (1993b): learnability, efficiency, memorability, errors, satisfaction
Oulanov and Pajarillo (2002): affect, efficiency, control, helpfulness, adaptability
Shackel (1981): ease of use, effectiveness
Shackel (1986, 1991): effectiveness, learnability, flexibility, user attitude

Additionally, in a research paper by Chowdhury et al. (2006), work on the usability and impact of DLs is reviewed. Specific studies on the usability and impact of DLs in particular domains are also discussed in order to identify general and specific usability and impact measures. They found that the usability studies reviewed in their paper show that a number of approaches have been used to assess usability. In addition to the technical aspects of DL design (e.g. architecture, interfaces and search tools), there are a number of usability issues such as globalization, localization, language, culture issues, content, and human information behavior. DLs should, however, be evaluated primarily with respect to their target users, applications and contexts. Finally, they declared that “although there were relatively few evaluation studies during the first period of development of DLs, this area of research has attracted significant attention over the last five or so years” (Chowdhury et al., 2006, p. 671) and that “DLs will be an ubiquitous tool in our everyday life and activities in future” (Chowdhury et al., 2006, p. 673).

Users. Among the usability studies carried out, some, like Van House et al. (1996) – dealing with the application of user needs assessment and evaluation in the iterative, user-centered design of a component of the University of California, Berkeley Digital Libraries project named Cypress – attempted to understand users’ needs, find problems and desired features, and assess overall user satisfaction. In these works, observing and interviewing users about design elements through the use of focus groups and satisfaction surveys was applied. Van House et al.’s (1996) work demonstrates how a relatively straightforward, moderate level of effort involving users – not just for feedback on an interface, but for an understanding of how they do their work – resulted in a significant improvement in the design of a component of the UC Berkeley project.

With the aim of determining user suggestions for DL functionality and features, Kani-Zabihi et al. (2006) conducted a survey in which users’ suggestions for DLs were solicited, as well as their ranking opinions on a range of suggested DL features. The study revealed that, regardless of users’ information technology backgrounds, their expectations of DL functionality are the same. However, based on users’ previous experiences with DLs, their requirements with respect to specific features may change. Based on the research findings, DL designers must therefore consider that users’ being able to access reliable information easily and quickly is more important than the layout and interfaces of DLs.

Highlighting the view that DL research has developed over the past decade but that little has been done on the identification of evaluation criteria, especially from users’ perspectives, Xie (2006) identifies users’ criteria and applies them to the evaluation of existing DLs. In total, 48 subjects were instructed to develop and justify a set of essential criteria for the evaluation of DLs. At the same time, they were requested to evaluate existing DLs by applying the criteria they were developing. A compilation of the criteria developed by participants shows that usability and collection quality were the most important criteria for evaluating DLs. Service quality, system performance efficiency, and user opinion solicitation were also deemed essential criteria.

In another study, Xie (2008) declared that although millions of dollars have been invested in the development of DLs, there are still many unanswered questions regarding their evaluation, in particular from users’ perspectives. Hence, his study was intended to investigate users’ use, their criteria, and their evaluation of two selected DLs. A total of 19 subjects were recruited to participate in the study. They were instructed to keep a diary of their use of the two DLs, rate the importance of DL evaluation criteria, and evaluate the two DLs by applying their perceived important criteria. The results show patterns of users’ use of DLs, their perceived important evaluation criteria, and the positive and negative aspects of DLs.

Based on what has been said above, we can compactly summarize the important issues and criteria relating to “DLs and usability” in Table II.

Table II. Identified criteria relating to category “DLs and usability”

Shneiderman (1987; quoted in Reeves, 2003): learnability, error management, searching, ease of use
Quijano-Solís and Novelo-Peña (2005): navigation, presentation, consistency, searching, learnability, guidance
Goh et al. (2006): searching, design, navigation, language
Jose (2007): searching, navigation
Kling and Elliott (1994): learnability, efficiency, memorability, error management
Ferreira and Pithan (2005): learnability, efficiency, satisfaction, error management, feelings, thoughts, action
Madle et al. (2006): navigation, searching
France et al. (1999): searching, design, feedback, user control, ease of use, consistency
Blandford and Buchanan (2003): learnability, error management (recovery), context of use, user experience
Jeng (2005): design, error management, ease of use, effectiveness, efficiency, satisfaction, learnability, organization of information, terminology, attractiveness, overall reaction, lostness, click cost, demographic factors on performance
Chowdhury et al. (2006): design, searching, navigation, guidance, language, interaction
Van House et al. (1996): user workload, guidance, error management, user control, design, consistency, match
Kani-Zabihi et al. (2006): design, navigation
Xie (2006): searching, navigation, guidance, presentation
Xie (2008): searching, navigation, help, design, accessibility

As the table shows, the criteria “design”, “error management or recovery”, “searching”, and “navigation” have been used more frequently than the other criteria identified in the related resources consulted in the present study.

Other studies related to user interface

In this section, some studies are cited that are not directly related to evaluating the DL interface. Yet familiarity with them can help researchers who are interested in DLs generalize some relevant evaluation criteria to the evaluation of the DL interface (the criteria extracted from these studies are summarized in Table III, at the end of this section). Accordingly, they are discussed next.

Evaluation. Wenham and Zaphiris (2003), in a project, reviewed 27 available user interface evaluation methods (e.g. focus groups, questionnaires, journalled sessions, self-reporting logs, screen snapshots, formal usability inspection, pluralistic walkthrough, consistency inspection, standards inspection, thinking-aloud protocol, question-asking protocol, competitive analysis, affinity diagrams, blind sorting and card sorting) and selected a shortlist of methods appropriate for evaluating mature, post-implementation internet banking web sites. The selected methods were applied to two UK internet banking web sites (namely LTSB and HSBC). Based on the experience and the results of these evaluation exercises, the methods were evaluated for their usefulness. Finally, a set of heuristics was developed that can be used when evaluating internet banking web sites.

Vilar and Zumer (2005) compared and evaluated four user interfaces of web-based e-journals (Science Direct, ProQuest Direct, EBSCO Host and Emerald). To do this, the systems were assessed in an expert study according to accepted guidelines regarding user friendliness and functionality. The user friendliness features studied were language(s) and type(s) of interface, navigation options, personalization, and screen features. The functions inspected were database selection, query formulation and reformulation, results manipulation, and help. Based on the research findings, many similarities were found, but some differences among the systems were also discovered and analyzed in detail. The greatest differences were found in the area of query formulation, and between the interface languages and types.

Allen et al. (2006, p. 412) highlighted that online medical information – when presented to clinicians – must be well organized and intuitive to use so that clinicians can conduct their daily work efficiently and without error. This implies that it is essential to actively seek to produce good user interfaces that are acceptable to the user. Accordingly, they developed a simplified heuristic evaluation (HE) suitable for the evaluation of screen shots of web pages. The heuristics used were consistency, visibility, match, minimalist, memory, feedback, flexibility and efficiency, error message, prevent errors, closure, reversible actions, language, control, and documentation. Four usability experts reviewed 18 paper-based screen shots and made a total of 108 comments. Their study found that a heuristic evaluation using paper-based screen shots of a user interface was expeditious, inexpensive, and straightforward to implement. It should be noted that this study was done under the Infobutton project, which addresses the issue of information needs arising while using the web-based clinical information system (WebCIS) at Columbia University Medical Center and New York Presbyterian Hospital.

Yeung and Law (2006, p. 452) indicated that “in spite of the increasing popularity of internet applications to the hotel industry, and the large number of published internet-related articles in the hospitality and tourism literature, the topic of the usability of web sites has been largely overlooked by hospitality and tourism researchers. In other words, the ease of use of hotel web sites remains largely unknown to hotel customers, practitioners, researchers, and policy makers”. Hence, they carried out an exploratory study in which a modified heuristic technique for evaluation was developed to measure the usability of hotel web sites. The research evaluated the web sites of all members of the Hong Kong Hotels Association. To establish the attributes for evaluating hotel web sites, 24 criteria for usability were selected from the studies of Abeleto (2002) and Nielsen (1996, 1999). The selection was made based on the applicability of each criterion to the hotel industry. Empirical evidence showed that minor problems of usability existed on the web sites of Hong Kong hotels, and no significant difference was found among luxury, mid-priced, and economy hotels.

Tsakonas and Papatheodorou (2006) carried out research concerning electronic information services (EISs), including DLs, e-journal platforms, portals, e-prints and other web-based information systems which provide services supporting users in performing intense work tasks that require complex interaction activities. Their paper presents a model that analyzes the attributes of the EIS components that affect user interaction and correlates them in the usefulness and usability evaluation process. In particular, the research has two aims:

(1) to confirm that usability and usefulness are two related properties of interaction; and

(2) to investigate which features of system and content are most important and influence the successful completion of information and work tasks.

A questionnaire survey was conducted to achieve the research aims. A total of 43 participants, grouped in two classes, took part in the survey. The participants received, either personally or by e-mail, the questionnaire and a letter explaining the research aims and the remainder of the procedure. The analysis of the content and system attributes suggests that user interaction is affected equally by content and system characteristics. Finally, the study illustrates users’ preference for the attributes that constitute a useful system in contrast to those that support usability.

Hansen (1998), with the aim of investigating whether the current user interface of an on-line WWW-based IR system provided real users with real information needs sufficient support to conduct an information-seeking task, used a set of data collection and analysis methods from the areas of information science and human-computer interaction (HCI). He collected and analyzed cognitive and statistical data using a combination of qualitative and quantitative data collection methods such as questionnaires, open-ended questions and system log statistics. Variables and the correlations between them were measured, and requirement lists were elicited. Finally, the framework used identified and recognized several important factors that need to be supported in the design of a user interface. As a matter of fact, he observed several levels of work that must be understood in order to understand information seeking in context:

. the task environments (work task, information-seeking task and search task);

. the users’ specific goals and tasks;

. the users’ information-seeking behavior; and

. the use of an IR system and its components, including the user interface.

Additionally, iterations between evaluations, requirements review and redesign could be executed continuously until a satisfactory level of design has been reached. The framework also proves that an on-line evaluation setting with real users and with real information-seeking tasks is feasible.

Ahmed et al. (2006) adopted a user-centered design and evaluation methodology for ensuring the usability of IR interfaces. The methodology was based on sequentially performing a competitive analysis, user task analysis, heuristic evaluation, formative evaluation and a summative comparative evaluation. After each round of testing, the prototype was modified as needed. The user-centered methodology had a major impact in improving the interface. This study used the Web of Science interface (http://wos.mimas.ac.uk) as a test-bed to conduct the competitive analysis and user task analysis. This choice was made because the Web of Science is one of the best known and most widely used bibliographic services for the academic community in the UK. Results from the summative comparative evaluation suggest that users’ performance improved significantly with the prototype interface compared with a similar competitive system. Users were also more satisfied with the prototype design. The authors posited that the methodology provides a starting point for techniques that let IR researchers and practitioners design better IR interfaces that are both easy to learn to use and easy to remember.

In conclusion, based on their experience with the prototype design, they offered designers a few general principles for designing effective IR interfaces, as shown next:

. strive for consistency;

. support both novice and experienced users;

. make the interface actions visible to the users;

. assist users in refining the search query;

. offer informative feedback;

. offer simple error handling;

. permit easy reversal of actions;

. avoid complex navigation; and

. reduce short-term memory load.

Design. Nielsen (1993a), in a paper entitled “Iterative user interface design”, declared that redesigning user interfaces on the basis of user testing could substantially improve usability. This statement is based on four examples of iterative user interface design and measurement. On the basis of his review of the four case studies, the median improvement in overall usability was 165 percent from the first to the last iteration, and the median improvement per iteration was 38 percent. Iterating through at least three versions of the interface is recommended, since some usability measures often decrease in some versions if the usability engineering process has focused on improving other parameters. Nielsen used a conceptual graph of the relation between design iterations and interface usability to emphasize that, ideally, each iteration would be better than the previous version, but this is not always true in practice. Some changes in an interface may turn out not to be improvements after all; therefore, the true usability curve for a particular product would not be as smooth as in the conceptual graph.

Fetaji et al. (2007), maintaining that in designing interfaces to computer-based systems human-computer interaction is often left without consideration, reviewed the literature in human-computer interaction and analyzed the technology aspect of human-computer interaction. General design principles were also reviewed. According to all these issues, recommendations for designing a good human-computer interface for an e-learning programming environment were analyzed and proposed. These recommendations are: to investigate the advantages and disadvantages of the interaction styles and interface types that best support the activities and learning styles of the users the system is aimed at; to choose the type of interface and interaction styles that best support the system goals; and to choose the interaction styles that are compatible with user attributes and that support the users’ needs, which means choosing the styles that are more advantageous for the intended users and defining the user class (experts, intermediates or novices) that the system is designed for, where human factors must be taken into consideration.


Furthermore, in relation to designing good as well as usable interfaces, reading Ben Shneiderman’s (1998) book, “Designing the user interface: strategies for effective human-computer interaction”, is highly advised. This book provides a complete, current, and authoritative introduction to user interface design. Readers will learn the practical techniques and guidelines needed to develop good system designs – systems with interfaces the typical user can understand, predict, and control.

Usability. Chowdhury (2004), in a book chapter entitled “Access and usability issues of scholarly electronic publications”, looks at the various access and usability issues related to scholarly information resources. This work first examines the various channels through which a user can get access to scholarly electronic publications. It then discusses the issues and studies surrounding usability. Some important parameters for measuring the usability of information access systems are identified. Finally, the chapter studies some of the major problems facing users in getting access to scholarly information through today’s hybrid libraries, and mentions some possible measures to resolve these problems.

Shiri and Revie (2005) investigated the ways in which end-users perceive a thesaurus-enhanced search interface, in particular thesaurus and search interface usability. To do this, 30 academic users, split between staff and postgraduate students, were observed carrying out real search requests during this study. Users were asked to comment on a range of thesaurus and interface characteristics including ease of use, ease of learning, ease of browsing and navigation, problems and difficulties encountered while interacting with the system, and the effect of browsing on search term selection. The results suggest that interface usability is a factor affecting thesaurus browsing/navigation and other information-searching behaviors.

Hornbæk (2005), in a literature review paper, observing that "how to measure usability is an important question in HCI research and user interface evaluation", reviewed current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguished several problems with the measures, including whether they actually measure usability, if they cover usability broadly, how they are reasoned about, and if they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, he discussed challenges for studies of usability and for research into how to measure usability. The challenges are:

. to distinguish and empirically compare subjective and objective measures of usability;

. to focus on developing and employing measures of learning and retention;

. to study long-term use and usability;

. to extend measures of satisfaction beyond post-use questionnaires;

. to validate and standardize the host of subjective satisfaction questionnaires used;

. to study correlations between usability measures as a means for validation; and

. to use both micro and macro tasks and corresponding measures of usability.

In conclusion, it is argued that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research.
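One of these challenges, correlating usability measures as a means of validating them, can be made concrete with a small computation. The sketch below uses invented data and Pearson's r from Python's statistics module (available from Python 3.10) to check whether an objective measure (task time) and a subjective one (satisfaction) move together; it is an illustration of the idea, not a method proposed by Hornbæk.

```python
from statistics import correlation  # requires Python 3.10+

# Invented paired measurements for ten test participants:
# objective usability (task completion time, seconds) and
# subjective usability (satisfaction, 1-7 Likert scale).
task_time = [95, 120, 80, 150, 110, 70, 130, 100, 90, 140]
satisfaction = [6, 4, 7, 2, 5, 7, 3, 5, 6, 3]

# A strong negative correlation (faster tasks, happier users) would
# suggest both measures tap the same underlying usability construct;
# a weak one would caution against treating them as interchangeable.
r = correlation(task_time, satisfaction)
print(f"Pearson r between task time and satisfaction: {r:.2f}")
```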

Aitta et al. (2008) tried to define some usability heuristics for the evaluation of public library web services. Emphasizing that heuristics for library services are based on Nielsen's classical list of heuristics and the results of previous usability research on library web services, a total of 15 public library web sites were evaluated on the basis of these applied heuristics. One part of the study was supported through usability tests. The results of these studies were utilized to evaluate the applied heuristics. The applied heuristics are divided into three categories:

(1) heuristics critical from the usability viewpoint;

(2) heuristics concerning major problems; and

(3) heuristics connected to minor usability problems but still important and concerning conventions of web design.

The use of the heuristics and the results they give are evaluated to provide a basis for their use in future.

Standards. Dzida (1995), in a paper entitled "Standards for user-interfaces", provides some help in reading software-ergonomic standards and particularly in testing products for conformity. This paper is devoted to software-ergonomic ISO standards, in particular to the multipart standard ISO 9241, Parts 10-17. Other ISO standards for user-interfaces which do not rest on software-ergonomic research are mentioned but not commented on. He declared that standards for user-interfaces are aimed at establishing a minimum level of user-oriented quality. The concept of usability is also introduced as the general term for software-ergonomic quality.

Schumaker (2007) discusses the importance of user interface standards for both programmer and end-user productivity. He posits that for user interface standards to be useful to programmers, they needed to be set up in a way that made it easy for developers to find the relevant standard. To make it easy for developers to find the information that they need, he organized user interface standards into four major areas, namely "navigation", "forms", "reports", and "documentation". For some related information on human-computer interaction standards refer also to Bevan (1995).
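Schumaker's point is organizational rather than algorithmic, but a small lookup structure makes it tangible. In the hypothetical sketch below, only the four area names come from the article; the individual rules and the find_standards helper are invented for illustration.

```python
# Hypothetical registry grouping UI standards by area so that a
# developer can locate the relevant rule quickly. Area names follow
# Schumaker (2007); the example rules themselves are invented.
UI_STANDARDS: dict[str, list[str]] = {
    "navigation": ["Every page links back to the home screen",
                   "Menu items keep a fixed order across screens"],
    "forms": ["Required fields are marked with an asterisk",
              "Validation errors appear next to the offending field"],
    "reports": ["Column headers repeat on every printed page"],
    "documentation": ["Each screen has a context-sensitive help entry"],
}

def find_standards(area: str) -> list[str]:
    """Return the standards filed under one of the four areas."""
    return UI_STANDARDS.get(area.lower(), [])

print(find_standards("forms"))
```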

As shown in Table III, the criteria "navigation", "guidance", and "presentation" have been used more frequently compared to the other ones identified in the related resources consulted in the present study.

Considering the findings included in Tables I, III, and IV, RQ2, i.e. "which criteria, used in the literature, have more frequency (importance)?", can be answered by reference to Table V.

According to Table V, Navigation, Searching, Design, Guidance, Error management (recovery), Presentation, Learnability, User control, Consistency, and Language were the top ten user interface evaluation criteria most used or cited in the related works. Comparing this finding with Nielsen's (1994) ten user interface heuristics indicates that the related works discussed here have largely utilized, and to some extent followed, his suggested ten criteria. In other words, among the top ten user interface evaluation criteria identified in this article, eight criteria, namely Searching, Design, Guidance (help), Error management (recovery), Presentation, Learnability, User control, and Consistency, are objectively compatible with Nielsen's. One important value of our findings is that the criteria identified and thus suggested here can be used specifically to evaluate a digital library user interface.
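The ranking in Table V rests on a straightforward frequency count: each reviewed work is checked for the criteria it employs, and the criteria are ordered by the number of works mentioning them. The sketch below illustrates that tallying step only; the source-to-criteria mapping is invented and does not reproduce the study's actual data.

```python
from collections import Counter

# Hypothetical mapping of reviewed works to the evaluation criteria
# they employ -- illustrative only, not the study's real data set.
sources = {
    "Nielsen (1993a)": ["design", "consistency", "error management"],
    "Aitta et al. (2008)": ["navigation", "design", "guidance"],
    "Dzida (1995)": ["guidance", "user control", "navigation"],
    "Shiri and Revie (2005)": ["searching", "navigation", "learnability"],
}

# Count how many sources mention each criterion, then rank by frequency.
tally = Counter(c for criteria in sources.values() for c in criteria)
for rank, (criterion, freq) in enumerate(tally.most_common(), start=1):
    print(f"{rank:>2}. {criterion:<18} cited in {freq} source(s)")
```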

Based on these criteria, we have proposed a framework for the evaluation of the digital library user interface, grounded in the related literature. This framework can be seen in Figure 2.

Conclusion
The present study aimed to review the literature concerning DLs and user interfaces in order to identify, determine, and suggest evaluation criteria for the DLs user interface. It can be said that this study has contributed to the research into the evaluation of the DL interface. The contributions of this study are:

(1) explored which criteria exert a significant relationship with the DLs user interface;

(2) identified a set of criteria that appears to be useful for evaluating the DLs user interface;

(3) determined evaluation criteria that have more frequency and occurrence in the related texts reviewed; and

(4) proposed a framework for DLs user interface evaluation based on the related literature.

One important point that should be noted here is that the criteria ranked in Table V and Figure 2 have literary warrant. Moreover, according to the available findings, the criteria Feedback, Ease of use, Match between system and the real world, Customization, User support, User workload, Interaction, Compatibility, Visibility of system status, User experience, Flexibility, and Accessibility, which have been less considered, should be applied more in future studies, particularly user-oriented ones.

Table V. Highly used criteria to evaluate DLs interface

Criteria                                    Rank
Navigation                                     1
Searching                                      2
Design                                         3
Guidance                                       4
Error management (recovery)                    5
Presentation                                   6
Learnability                                   7
User control                                   8
Consistency                                    9
Language                                      10
Feedback                                      11
Ease of use                                   12
Match between system and the real world      13
Customization                                 14
User support                                  15
User workload                                 16
Interaction                                   17
Compatibility                                 18
Visibility of system status                   19
User experience                               20
Flexibility                                   21
Accessibility                                 22

Another notable value of the article is its relative comprehensiveness. In other words, because the 22 evaluation criteria identified here are extracted from a variety of components such as evaluation, design, interaction, common user interface, usability, standards, and so on, future researchers can utilize all or part of them consistent with their own studies. For instance, inspired by the implications of this literature review, the authors are currently conducting research on the Iranian DLs interface on the basis of ten evaluation criteria: searching, integrity, guidance, navigation, design, error correction, information presentation, user control, interface language, and simplicity.

In conclusion, it is expected that research on the issues debated here will provide a better understanding for professionals, students, users, managers, researchers, developers, and designers of the DLs user interface, enabling them to make optimal decisions when dealing with it.

Figure 2. Framework for using determined criteria to evaluate DL interface

As a closing remark, accepting that DLs play a vital role in offering distance services, their interfaces need more attention because they are treated as a gateway for entering the DLs information environment. Hopefully, the criteria as well as the framework mentioned in the present article will help related bodies pay more attention than at present to the evaluation of EISs, especially the DLs interface.

References

Abeleto (2002), "Objective evaluation of likely usability hazards – preliminaries for user testing", available at: www.abeleto.com/resources/articles/objective.html (accessed October 10, 2009).

Ahmed, S.M.Z., McKnight, C. and Oppenheim, C. (2006), "A user-centered design and evaluation of IR interfaces", Journal of Librarianship and Information Science, Vol. 38 No. 3, pp. 157-72.

Aitta, M., Kaleva, S. and Kortelainen, T. (2008), "Heuristic evaluation applied to library web services", New Library World, Vol. 109 Nos 1/2, pp. 25-45.

Allen, M., Currie, L.M., Bakken, S., Patel, V.L. and Cimino, J.J. (2006), "Heuristic evaluation of paper-based web pages: a simplified inspection usability methodology", Journal of Biomedical Informatics, Vol. 39, pp. 412-23.

Arms, W.Y. (2002), Digital Libraries, MIT Press, available at: www.cs.cornell.edu/wya/DigLib/MS1999/Chapter7.html (accessed October 10, 2009).

Baldacci, M.B. (1999), "Implementing the common user interface for a digital library: the ETRDL experience", available at: www.ercim.org/publication/ws-proceedings/DELOS8/baldacci.html (accessed October 10, 2009).

Bevan, N. (1995), "Human-computer interaction standards", in Anzai, M. and Ogawa, S. (Eds), Proceedings of the 6th International Conference on Human Computer Interaction, Yokohama, Japan, July 1995, Elsevier, pp. 885-90.

Blandford, A. and Buchanan, G. (2003), "Usability of digital libraries: a source of creative tensions with technical developments", TCDL Bulletin, available at: www.ieee-tcdl.org/Bulletin/current/blandford/blandford.html (accessed October 10, 2009).

Booth, P. (1989), An Introduction to Human-Computer Interaction, Lawrence Erlbaum Associates, London.

Brinck, T., Gergle, D. and Wood, S.D. (2002), Designing Web Sites that Work: Usability for the Web, Morgan Kaufmann, San Francisco, CA.

Chowdhury, G.G. (2004), "Access and usability issues of scholarly electronic publications", in Gorman, G.E. and Rowland, F. (Eds), Scholarly Publishing in an Electronic Era, International Yearbook of Library and Information Management, 2004/2005, Facet Publishing, London, pp. 77-98.

Chowdhury, S., Landoni, M. and Gibb, F. (2006), "Usability and impact of digital libraries: a review", Online Information Review, Vol. 30 No. 6, pp. 656-80.

Clairmont, M., Dickstein, D. and Mills, V. (2007), "Testing for usability in the design of a new information gateway", available at: www.library.arizona.edu/library/teams/access9798 (accessed October 10, 2009).

Dorner, D.G. and Curtis, A. (2003), "A comparative review of common user interface software products for libraries", available at: www.icesi.edu.co/biblioteca/contenido/pdfs/CUI_report_final.pdf (accessed October 10, 2009).

Dorner, D.G. and Curtis, A. (2004), "A comparative review of common user interface products", Library Hi Tech, Vol. 22 No. 2, pp. 182-97.

Dumas, J.S. and Redish, J.C. (1993), A Practical Guide to Usability Testing, Ablex, Norwood, NJ.

Dzida, W. (1995), "Standards for user-interfaces", Computer Standards & Interfaces, Vol. 17, pp. 89-97.

Ferreira, S.M. and Pithan, D.N. (2005), "Usability of digital libraries: a study based on the areas of information science and human-computer-interaction", OCLC Systems & Services, Vol. 21 No. 4, pp. 311-23.

Fetaji, M., Loskoska, S., Fetaji, B. and Ebibi, M. (2007), Investigating Human Computer Interaction Issues in Designing Efficient Virtual Learning Environment, BCI, Sofia.

Fox, E.A., Hix, D., Nowell, L.T., Brueni, D.J., Wake, W.C., Heath, L.S. and Rao, D. (1993), "Users, user interfaces, and objects: Envision, a digital library", Journal of the American Society for Information Science, Vol. 44 No. 8, pp. 480-91.

France, R.K. (1999), "Use and usability in a digital library search system", available at: http://arxiv.org/ftp/cs/papers/9902/9902013.pdf (accessed October 10, 2009).

Furtado, E., Furtado, V., Lincoln, F. and Vanderdonckt, J. (2003), "Improving usability of an online learning system by means of multimedia, collaboration and adaptation resources", in Ghaoui, C. (Ed.), Usability Evaluation of Online Learning Programs, Information Science Publishing, Hershey, PA, pp. 69-86.

Gluck, M. (1997), "A descriptive study of the usability of geospatial metadata", Annual Review of OCLC Research, available at: www.oclc.org/research/publication/arr/1997/gluck/gluck_frameset.htm (accessed October 10, 2009).

Goh, D.H.L., Chua, A., Khoo, D.A., Khoo, E.B.H., Mak, E.B.T. and Ng, M.W.M. (2006), "A checklist for evaluating open source digital library software", Online Information Review, Vol. 30 No. 4, pp. 360-79.

Guillemette, R.A. (1995), "The evaluation of usability in interactive information systems", in Carey, J.M. (Ed.), Human Factors in Information Systems: Emerging Theoretical Bases, Ablex, Norwood, NJ.

Hansen, P. (1998), "Evaluation of IR user interface – implications for user interface design", Human IT, Vol. 2, pp. 28-41.

Hill, L.L., Carver, L., Larsgaard, M., Dolin, R., Smith, T.R., Frew, J. and Rae, M.A. (2000), "Alexandria digital library: user evaluation studies and system design", Journal of the American Society for Information Science, Vol. 51 No. 3, pp. 246-59.

Hix, D. and Hartson, H.R. (1993), Developing User Interfaces: Ensuring Usability through Product & Process, Wiley, New York, NY.

Hornbæk, K. (2005), "Current practice in measuring usability: challenges to usability studies and research", International Journal of Human-Computer Studies, Vol. 64, pp. 79-102.

Isfandyari-Moghaddam, A. and Bayat, B. (2008), "Digital libraries in the mirror of the literature: issues and considerations", The Electronic Library, Vol. 26 No. 6, pp. 844-62.

ISO (1994), ISO DIS 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals, Part 11: Guidance on Usability, International Organization for Standardization, Geneva.

Jeng, J. (2005), "Usability assessment of academic digital libraries: effectiveness, efficiency, satisfaction, and learnability", Libri, Vol. 55, pp. 96-121.

Jose, A. (2007), "Evaluation of digital libraries: a case study", in Madalli, D.P. (Ed.), Proceedings of the ICSD-2007, Bangalore, 21-23 February, pp. 229-38.

Kani-Zabihi, E., Ghinea, G. and Chen, S.Y. (2006), "Digital libraries: what do users want?", Online Information Review, Vol. 30 No. 4, pp. 395-412.

Kengeri, R., Seals, C.D., Harley, H.D., Reddy, H.P. and Fox, E.A. (1999), "Usability study of digital libraries: ACM, IEEE-CS, NCSTRL, NDLTD", International Journal on Digital Libraries, Vol. 2, pp. 157-69.

Kim, K. (2002), "A model of digital library information-seeking process (DLISP model) as a frame for classifying usability problems", PhD dissertation, Rutgers University, New Brunswick, NJ.

Kling, R. and Elliott, M. (1994), "Digital library design for usability", available at: www.csdl.tamu.edu/DL94/paper/kling.html (accessed October 10, 2009).

Kuhlthau, C. (1991), "Inside the search process: information seeking from the user's perspective", Journal of the American Society for Information Science, Vol. 42 No. 5, pp. 361-71.

Mabe, M. (2002), "Digital library classification and evaluation: a publisher's view of the work of the DELOS evaluation forum", in Borgman, C., Solvberg, I. and Kovacs, L. (Eds), Proceedings of the 4th Delos Workshop on Evaluation of Digital Libraries: Test Beds, Measurements, and Metrics, ERCIM, Budapest, Hungary, available at: www.sztaki.hu/conferences/deval/presentations/html (accessed October 10, 2009).

Madle, G., Kostkova, P., Mani-Saada, J. and Roy, A. (2006), "Lessons learned from evaluation of the use of the National Electronic Library of Infection", Health Informatics Journal, Vol. 12 No. 2, pp. 137-51.

Marchionini, G., Plaisant, C. and Komlodi, A. (1998), "Interfaces and tools for the Library of Congress National Digital Library Program", Information Processing & Management, Vol. 34 No. 5, pp. 535-55.

Nielsen, J. (1993a), "Iterative user interface design", IEEE Computer, Vol. 26 No. 11, pp. 32-41.

Nielsen, J. (1993b), Usability Engineering, Academic Press, Cambridge, MA.

Nielsen, J. (1994), "Enhancing the explanatory power of usability heuristics", Proceedings of the ACM CHI'94 Conference on Human Factors in Computing Systems, Vol. 1, ACM Press, New York, NY, pp. 152-8.

Nielsen, J. (1996), "Top ten mistakes in web design", Jakob Nielsen's Alertbox, available at: www.useit.com/alertbox/9605.html (accessed October 10, 2009).

Nielsen, J. (1999), "Top ten new mistakes of web design", Jakob Nielsen's Alertbox, available at: www.useit.com/alertbox/990530.html (accessed October 10, 2009).

Nielsen, J. (2003), "Usability 101: introduction to usability", Jakob Nielsen's Alertbox, August, available at: www.useit.com/alertbox/20030825.html (accessed October 10, 2009).

Oliveira, J.L.D., Goncalves, M.A. and Medeiros, C.B. (1999), "A framework for designing and implementing the user interface of a geographic digital library", International Journal on Digital Libraries, Vol. 2, pp. 190-206.

Oulanov, A. and Pajarillo, E.F.Y. (2002), "CUNY+ Web: usability study of the web-based GUI version of the bibliographic database of the City University of New York (CUNY)", The Electronic Library, Vol. 20 No. 6, pp. 481-7.

Park, K.S. (2000), "Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases: implications for digital libraries", Journal of the American Society for Information Science, Vol. 51 No. 5, pp. 456-68.

Park, K.S. and Lim, C.H. (1999), "A structured methodology for comparative evaluation of user interface designs using usability criteria and measures", International Journal of Industrial Ergonomics, Vol. 23, pp. 379-89.

Peng, L.K., Ramaiah, C.K. and Foo, S. (2004), "Heuristic-based user interface evaluation at Nanyang Technological University in Singapore", Electronic Library and Information Systems, Vol. 38 No. 1, pp. 42-59.

Quijano-Solis, A. and Novelo-Pena, R. (2005), "Evaluating a monolingual multinational digital library by using usability: an exploratory approach from a developing country", The International Information & Library Review, Vol. 37, pp. 329-36.

Ramayah, T. (2006), "Interface characteristics, perceived ease of use and intention to use an online library in Malaysia", Information Development, Vol. 22 No. 2, pp. 123-33.

Reeves, T.C., Apedoe, X. and Woo, Y.H. (2003), "Evaluating digital libraries: a user-friendly guide", NSDL.ORG, University of Georgia, Athens, GA, available at: dlist.sir.arizona.edu/398/01/DLUserGuideOct20.doc (accessed October 10, 2009).

Roy, M.C., Dewit, O. and Aubert, B.A. (2001), "The impact of interface usability on trust in web retailers", Internet Research: Electronic Networking Applications and Policy, Vol. 11 No. 5, pp. 388-98.

Saracevic, T. (2000), "Digital library evaluation: toward an evolution of concepts", Library Trends, Vol. 49 No. 2, pp. 350-69.

Saracevic, T. (2004), "Evaluation of digital libraries: an overview", available at: www.scils.rutgers.edu/~tefko/DL_evaluation_Delos.pdf (accessed October 10, 2009).

Schumaker, D. (2007), "User interface standards", Smart Access, available at: http://msdn.microsoft.com/en-us/library/aa217660(office.11).aspx (accessed October 10, 2009).

Shackel, B. (1981), "The concept of usability", Proceedings of the IBM Software and Information Usability Symposium, Poughkeepsie, NY, September 15-18, pp. 1-30.

Shackel, B. (1986), "Ergonomics in design for usability", in Harrison, M.D. and Monk, A.F. (Eds), People & Computers: Designing for Usability, Proceedings of the 2nd Conference of the BCS HCI Specialist Group, Cambridge University Press, Cambridge.

Shackel, B. (1991), "Usability – context, framework, definition, design and evaluation", in Shackel, B. and Richardson, S.J. (Eds), Human Factors for Informatics Usability, Cambridge University Press, New York, NY, pp. 21-37.

Shiri, A. and Revie, C. (2005), "Usability and user perceptions of a thesaurus-enhanced search interface", Journal of Documentation, Vol. 61 No. 5, pp. 640-56.

Shneiderman, B. (1998), Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed., Addison-Wesley, Reading, MA.

Thong, Y.L., Hong, W. and Tam, K-Y. (2002), "Understanding user acceptance of digital libraries: what are the roles of interface characteristics, organizational context, and individual differences?", International Journal of Human-Computer Studies, Vol. 57, pp. 215-42.

Tsakonas, G. and Papatheodorou, C. (2006), "Analyzing and evaluating usefulness and usability in electronic information services", Journal of Information Science, Vol. 32 No. 5, pp. 400-19.

Van House, N.A., Butler, M.H., Ogle, V. and Schiff, L. (1996), "User-centered iterative design for digital libraries: the Cypress experience", D-Lib Magazine, February, available at: www.dlib.org/dlib/february96/02vanhouse.html (accessed October 10, 2009).

Vilar, P. and Zumer, M. (2005), "Comparison and evaluation of the user interfaces of e-journals", Journal of Documentation, Vol. 61 No. 2, pp. 203-27.

Wenham, D. and Zaphiris, P. (2003), "User interface evaluation methods for internet banking web sites: a review, evaluation and case study", Proceedings of the 10th International Conference on Human-Computer Interaction, Crete, 22-27 June, pp. 721-5.

Xie, H. (2006), "Evaluation of digital libraries: criteria and problems from users' perspectives", Library & Information Science Research, Vol. 28, pp. 433-52.

Xie, H. (2008), "Users' evaluation of digital libraries: their uses, their criteria, and their assessment", Information Processing and Management, Vol. 44, pp. 1346-73.

Yeung, T.A. and Law, R. (2006), "Evaluation of usability: a study of hotel web sites in Hong Kong", Journal of Hospitality & Tourism Research, Vol. 30 No. 4, pp. 452-73.

Further reading

Bennet, J.L., Case, D., Sandelin, J. and Smith, M. (Eds) (1984), Visual Display Terminals: Usability Issues and Health Concerns, Prentice Hall, Englewood Cliffs, NJ, pp. 45-88.

Marchionini, G. (2000), "Evaluating digital libraries: a longitudinal and multifaceted view", Library Trends, Vol. 49 No. 2, pp. 304-33.

About the authors
Nadjla Hariri holds a PhD in Library and Information Science and is Assistant Professor in the Department of Library and Information Science, Science and Research Branch, Islamic Azad University, Tehran, Iran. Her major research experiences and interests include information science, research methods, information retrieval systems, and performance evaluation of libraries, whether traditional or modern like digital libraries.

Yaghoub Norouzi holds an MA in Library and Information Science and currently is a faculty member in the Department of Library and Information Science, Islamic Azad University, Hamedan Branch, Hamedan, Iran. He is currently completing his PhD dissertation on the evaluation of the user interface of Iranian digital libraries in the Department of Library and Information Science, Science and Research Branch, Islamic Azad University, Tehran, Iran. His research interests are information storage and retrieval, digital libraries, information systems, and indexing. Yaghoub Norouzi is the corresponding author and can be contacted at: [email protected]
