Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ)



    To cite this article: Laura A. Miller, Kay M. Stanney & William Wooten (1997) Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ), International Journal of Human-Computer Interaction, 9:3, 201-212, DOI: 10.1207/s15327590ijhc0903_1

    To link to this article: http://dx.doi.org/10.1207/s15327590ijhc0903_1


  INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 9(3), 201-212. Copyright © 1997, Lawrence Erlbaum Associates, Inc.

    Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ)

    Laura A. Miller Psychology Department, University of Central Florida

    Kay M. Stanney Industrial Engineering and Management Systems Department,

    University of Central Florida

    William Wooten Psychology Department, University of Central Florida

    The software market has been inundated with Windows-based application programs claiming increased usability and convenience. Although this trend is indeed prolific, it has two important implications: (a) an increase in the need to select employees with high levels of Windows-based computer expertise or to identify current employees who require enhanced training, and (b) an increase in the need to measure user expertise to support human-computer interaction research. Despite these increasing demands, questionnaires used to determine general computer experience are scarce. Furthermore, questionnaires regarding computer experience in a Windows environment are seemingly nonexistent. A reliable means of measuring experience in a Windows environment could substantially facilitate both human-computer interaction research and training. This article describes the procedures used to develop and test the reliability of the Windows Computer Experience Questionnaire (WCEQ). A test-retest correlation revealed that the WCEQ is a reliable measure of computer experience. Furthermore, a subsequent factor analysis revealed that the WCEQ is composed of four main factors: general Windows experience, advanced Windows experience, formal instruction, and reliance on help functions.

    1. INTRODUCTION

    Researchers in the field of human-computer interaction (HCI) have long understood the value of testing for computer experience (Egan, 1988; Lee, 1986). Obtaining such measurements allows scientists to control for this variable, for example, by

    Requests for reprints should be sent to Kay M. Stanney, Industrial Engineering and Management Systems Department, University of Central Florida, Orlando, FL 32816-2450. E-mail: stanney@iems.engr.ucf.edu.



    assigning individuals to novice and expert groups, which results in a better gauge of the usability of software. With the influx of Windows-based applications in the workplace, reliable measures for testing computer experience levels could be particularly beneficial in this area. A questionnaire assessing computer experience in a Windows-based environment would be valuable as a career selection device or as a method of identifying individuals who are in need of training. Yet, despite the increasing demand for Windows-based computer experience measures, this area has received little attention.

    A review of the literature resulted in the identification of a few general (i.e., non-system-specific) computer literacy questionnaires (Cassel & Cassel, 1984; Montag, Simonson, & Maurer, 1984; Poplin, Drew, & Gable, 1984). Although the definition of computer literacy varied between these studies, Montag et al.'s (1984) definition captured the essence of what these questionnaires are trying to gauge: the "understanding of computer characteristics, capabilities, and applications, and the ability to implement this knowledge in the skillful, productive use of computer applications suitable to individual roles in society" (p. 7). Other system-specific questionnaires were also identified (e.g., the Computer Literacy Examination: Cognitive Aspect, designed specifically for Apple II computer users; Cheng, Flake, & Stevens, 1985). These questionnaires have little utility in measuring general, non-system-specific computer experience and thus are not reviewed further. The following section provides a brief review of existing general computer literacy questionnaires.

    1.1. General Computer Literacy Questionnaires

    The Cassel Computer Literacy Test (CMLRTC) is a 120-item multiple-choice test designed to measure a user's understanding of computer functionality (Cassel & Cassel, 1984). The test items represent subtopics in six areas, including computer development, technical understanding, computer structure, information processing, information retrieval, and communication systems. There was no report of the administration time; however, if users are given as few as 5 min to complete each of the six subsections of 20 questions each, it would take a minimum of 30 min to administer. The CMLRTC is generally given before and after computer training to assess training effectiveness. We were not able to identify any reliability or validity data for this questionnaire; however, normative data for a number of user types (i.e., computer engineers, computer hobbyists, teachers and educators, and high school and graduate students) have been determined. Although the CMLRTC provides a measure of overall computer literacy, it is not clear how a measure specific to computer experience could be abstracted from this literacy score.

    Montag et al. (1984) developed the Standardized Test of Computer Literacy (STCL) to determine a user's level of computer literacy. Their 80-item multiple-choice test is divided into three subsections: computer applications, computer systems, and computer programming. Each subsection takes approximately 30 min to administer. The items included in the STCL were based on an extensive literature review as well as survey results from subject-matter experts (SMEs) in computer education (Eberts, 1994). Overall reliability was reported to be 0.86; the applications



    subscale (which is somewhat relevant to general computer expertise) was reported to have a reliability of 0.75.

    The Computer Aptitude, Literacy, and Interest Profile (CALIP), developed by Poplin et al. (1984), was designed to measure an individual's level of computer literacy, aptitude, and interest in computer technology. Of the six subtests, four measure aptitude, one measures interest, and the remaining subtest measures literacy. It takes approximately 1 hr to complete. Items included in the CALIP were derived from an extensive literature review of computer-related abilities and later refined through item analysis. Reliability ranged from 0.75 to 0.95, depending on the age group. Specific validity information was not reported, although the authors claimed the questionnaire can differentiate among individuals of different computer expertise (LaLomia & Sidowski, 1990). These surveys were further discussed and compared in a review by LaLomia and Sidowski (1990).

    From a review of these questionnaires, it can be noted that none of these surveys was designed specifically to evaluate computer experience levels (i.e., they include other measures such as computer anxiety and aptitude). Due to this fact, all of these questionnaires consist of a large number of items and thus take a considerable amount of time to administer. Despite this deficiency, many studies have relied on computer experience as a screening method in their experimental designs (see, e.g., Kay & Black, 1990; Lee, 1986; Prumper, Zapf, Brodbeck, & Frese, 1992). These studies have typically resorted to using self-report measures or questionnaires developed particularly for their study. Reliance on an individual's self-reported level of expertise may be nonsystematic and unreliable. Questionnaires designed for a particular study are also problematic in that there is generally no theoretical basis for the selected questionnaire items and reliability or validity tests are typically not conducted (e.g., the Computer Experience Questionnaire developed by Lee, 1986). In addition, the lack of standardized experience questionnaires limits generalizability across HCI studies. Yet, in spite of the widespread use of Windows-based applications, no questionnaires were found pertaining specifically to a Windows environment. One would thus expect that if a general computer experience questionnaire that was reliable, valid, and able to be administered quickly were developed, it would be readily adopted by HCI researchers and practitioners.

    1.2. Questionnaire Reliability

    A fundamental requirement of measuring instruments, such as questionnaires, is their reliability, or the "extent of unsystematic variation in scores of an individual on some trait when that trait is measured a number of times" (Ghiselli, Campbell, & Zedeck, 1981, p. 91). More simply, reliability is a quantitative assessment of an instrument's consistency (Lewis, 1995). Measures that are more consistent produce a higher reliability coefficient. Although we can never identify all of the variables that affect the variance in a measuring device, it is important that the device account for a significant amount of this variance. Statistically, reliability is important because tools that do not measure consistently from one administration to another will result in high variability and, consequently, insignificant results. More important, tests



    with low reliability cannot be tested for validity (the extent to which the questionnaire measures what it claims to measure), resulting in a potentially useless measuring instrument. Thus, for any questionnaire to be effective, it is essential that it first be reliable.
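    The test-retest reliability referred to here is simply the Pearson correlation between the scores respondents obtain on two separate administrations of the same instrument. The following minimal Python sketch illustrates the computation; the scores are hypothetical and are not data from this study.

import numpy as np

# Hypothetical total questionnaire scores for the same eight respondents,
# collected at a first and a second administration.
time1 = np.array([42, 35, 50, 28, 46, 39, 31, 44])
time2 = np.array([40, 37, 48, 30, 45, 41, 29, 43])

# Test-retest reliability is the Pearson correlation between administrations;
# values closer to 1.0 indicate more consistent measurement.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")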

    1.3. Factor Analysis

    A factor analysis is a statistical procedure that examines correlations among variables in order to identify underlying factors (clusters of related variables). The procedure is often used to group a large number of variables into a smaller number of factors. To ensure stable factor estimates, at least 5 participants per item are recommended (Gorsuch, 1983; Nunnally, 1978; Stevens, 1986).

    When a questionnaire is developed to measure a particular skill, such as computer experience, that skill is often composed of several subfactors. A factor analysis can be used to assist in systematically determining which subfactors are being measured by the questionnaire. Then, any questions that are unrelated or not significantly related to the construct being measured can be eliminated, thereby minimizing administration time.
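    As a rough sketch of this item-screening step, the factor loadings can be inspected and items that load weakly on every factor flagged for removal. The response matrix, the four-factor solution, and the loading cutoff below are illustrative assumptions, not values taken from the article.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical Likert-style responses: 80 respondents x 16 items.
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(80, 16)).astype(float)

# Standardize items so the loadings are on a correlation-like scale.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(z)
loadings = fa.components_.T  # items x factors

# Items whose largest absolute loading falls below the cutoff contribute
# little to any factor and are candidates for removal, which shortens
# administration time.
cutoff = 0.30
weak_items = np.flatnonzero(np.max(np.abs(loadings), axis=1) < cutoff)
print("Candidate items to drop:", weak_items)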

    There are several ways to determine the number of factors underlying a ques- tionnaire. For instance, many statistics programs are capable of calculating eigen- values. Eigenvalues are the standardized variances associated with a particular factor. The number of eigenvalues greater than 1.0 represent the number of factors represented in a questionnaire (Weiss, 1971). A moreeffective means of determining factor loadings is through the use of a discontinuity analysis (Coovert & McNelis, 1988). A discontinuity analysis involves plotting the eigenvalues in order to get a graphical representation, known as a scree plot, of the factor data. The point at which the l i e becomes discontinuous indicates the appropriate number of factors for the analysis (Cliff, 1987). In the event that there is more than one place on the graph that qualifies as a point oI discontinuity, a person familiar with the constructs under investigation should choose between the qualifying points.
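    The two decision rules described above (counting eigenvalues greater than 1.0 and inspecting a scree plot for the point of discontinuity) can be sketched as follows, again assuming a hypothetical respondents-by-items array rather than the article's data.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical Likert-style responses: 80 respondents x 16 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(80, 16)).astype(float)

# Eigenvalues of the item correlation matrix are the standardized
# variances associated with each candidate factor.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted largest to smallest

# Count eigenvalues greater than 1.0 (the rule attributed to Weiss, 1971).
print("Factors with eigenvalue > 1.0:", int(np.sum(eigenvalues > 1.0)))

# Scree plot: the point where the line becomes discontinuous (the "elbow")
# suggests the number of factors to retain.
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()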

    1.4. Summary

    The objectives of this article are to (a) develop a Windows computer experience questionnaire that requires little administration time, (b) determine the reliability of the questionnaire, and (c) determine the subfactors measured by the questionnaire.

    2. METHOD

    2.1. Participants

    Eighty-two students (49 men and 33 women) recruited from introductory psychology and engineering courses participated in this study. Their ages ranged from 17



    to 50 years with a mean age of 26.07 years. The participants had various levels of computer experience. They received extra credit in their courses for participating in the study.

    2.2. Materials

    The WCEQ was developed with the aid of an SME. This individual was responsible for upgrading Windows systems and supporting applications, helping employees...
