Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ)


  • This article was downloaded by: [University North Carolina - Chapel Hill], on: 28 October 2014, at: 07:28. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

    International Journal of Human-Computer Interaction. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hihc20

    Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ). Laura A. Miller, Kay M. Stanney & William Wooten. Published online: 13 Nov 2009.

    To cite this article: Laura A. Miller, Kay M. Stanney & William Wooten (1997) Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ), International Journal of Human-Computer Interaction, 9:3, 201-212, DOI: 10.1207/s15327590ijhc0903_1

    To link to this article: http://dx.doi.org/10.1207/s15327590ijhc0903_1

    PLEASE SCROLL DOWN FOR ARTICLE

    Taylor & Francis makes every effort to ensure the accuracy of all the information (the Content) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

    This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions


  • INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 9(3), 201-212. Copyright © 1997, Lawrence Erlbaum Associates, Inc.

    Development and Evaluation of the Windows Computer Experience Questionnaire (WCEQ)

    Laura A. Miller Psychology Department, University of Central Florida

    Kay M. Stanney Industrial Engineering and Management Systems Department, University of Central Florida

    William Wooten Psychology Department, University of Central Florida

    The software market has been inundated with Windows-based application programs claiming increased usability and convenience. Although this trend is indeed prolific, it has resulted in two important implications: (a) an increase in the need to select employees with high levels of Windows-based computer expertise or to identify current employees who require enhanced training, and (b) an increase in the need to measure user expertise to support human-computer interaction research. Despite these increasing demands, questionnaires used to determine general computer experience are scarce. Furthermore, questionnaires regarding computer experience in a Windows environment are seemingly nonexistent. A reliable means of measuring experience in a Windows environment could substantially facilitate both human-computer interaction research and training. This article describes the procedures used to develop and test the reliability of the Windows Computer Experience Questionnaire (WCEQ). A test-retest correlation revealed that the WCEQ is a reliable measure of computer experience. Furthermore, a subsequent factor analysis revealed that the WCEQ is composed of four main factors: general Windows experience, advanced Windows experience, formal instruction, and reliance on help functions.

    1. INTRODUCTION

    Researchers in the field of human-computer interaction (HCI) have long understood the value of testing for computer experience (Egan, 1988; Lee, 1986). Obtaining such measurements allows scientists to control for this variable, for example, by

    Requests for reprints should be sent to Kay M. Stanney, Industrial Engineering and Management Systems Department, University of Central Florida, Orlando, FL 32816-2450. E-mail: stanney@iems.engr.ucf.edu.


    assigning individuals to novice and expert groups, which results in a better gauge of the usability of software. With the influx of Windows-based applications in the workplace, reliable measures for testing computer experience levels could be particularly beneficial in this area. A questionnaire assessing computer experience in a Windows-based environment would be valuable as a career selection device or as a method of identifying individuals who are in need of training. Yet, despite the increasing demand for Windows-based computer experience measures, this area has received little attention.

    A review of the literature resulted in the identification of a few general (i.e., non-system-specific) computer literacy questionnaires (Cassel & Cassel, 1984; Montag, Simonson, & Maurer, 1984; Poplin, Drew, & Gable, 1984). Although the definition of computer literacy varied between these studies, Montag et al.'s (1984) definition captured the essence of what these questionnaires are trying to gauge: the "understanding of computer characteristics, capabilities, and applications, and the ability to implement this knowledge in the skillful, productive use of computer applications suitable to individual roles in society" (p. 7). Other system-specific questionnaires were also identified (e.g., the Computer Literacy Examination: Cognitive Aspect, designed specifically for Apple II computer users; Cheng, Plake, & Stevens, 1985). These questionnaires have little utility in measuring general, non-system-specific computer experience and thus are not reviewed further. The following section provides a brief review of existing general computer literacy questionnaires.

    1.1. General Computer Literacy Questionnaires

    The Cassel Computer Literacy Test (CMLRTC) is a 120-item multiple-choice test designed to measure a user's understanding of computer functionality (Cassel & Cassel, 1984). The test items represent subtopics in six areas, including computer development, technical understanding, computer structure, information processing, information retrieval, and communication systems. There was no report of the administration time; however, if users are given as few as 5 min to complete each of the six subsections of 20 questions each, it would take a minimum of 30 min to administer. The CMLRTC is generally given before and after computer training to assess training effectiveness. We were not able to identify any reliability or validity data for this questionnaire; however, normative data for a number of user types (i.e., computer engineers, computer hobbyists, teachers and educators, and high school and graduate students) have been determined. Although the CMLRTC provides a measure of overall computer literacy, it is not clear how a measure specific to computer experience could be abstracted from this literacy score.

    Montag et al. (1984) developed the Standardized Test of Computer Literacy (STCL) to determine a user's level of computer literacy. Their 80-item multiple-choice test is divided into three subsections: computer applications, computer systems, and computer programming. Each subsection takes approximately 30 min to administer. The items included in the STCL were based on an extensive literature review as well as survey results from subject-matter experts (SMEs) in computer education (Eberts, 1994). Overall reliability was reported to be 0.86; the applications


    subscale (which is somewhat relevant to general computer expertise) was reported to have a reliability of 0.75.

    The Computer Aptitude, Literacy, and Interest Profile (CALIP), developed by Poplin et al. (1984), was designed to measure an individual's level of computer literacy, aptitude, and interest in computer technology. Of the six subtests, four measure aptitude, one measures interest, and the remaining subtest measures literacy. It takes approximately 1 hr to complete. Items included in the CALIP were derived from an extensive literature review of computer-related abilities and later refined through item analysis. Reliability ranged from 0.75 to 0.95, depending on the age group. Specific validity information was not reported, although the authors claimed the questionnaire can differentiate among individuals of different computer expertise (LaLomia & Sidowski, 1990). These surveys were further discussed and compared in a review by LaLomia and Sidowski (1990).

    From a review of these questionnaires, it can be noted that none of these surveys was designed specifically to evaluate computer experience levels (i.e., they include other measures such as computer anxiety and aptitude). Due to this fact, all of these questionnaires consist of a large number of items and thus take a considerable amount of time to administer. Despite this deficiency, many studies have relied on computer experience as a screening method in their experimental designs (see, e.g., Kay & Black, 1990; Lee, 1986; Prumper, Zapf, Brodbeck, & Frese, 1992). These studies have typically resorted to using self-report measures or questionnaires developed particularly for their study. Reliance on an individual's self-reported level of expertise may be nonsystematic and unreliable. Questionnaires designed for a particular study are also problematic in that there is generally no theoretical basis for the selected questionnaire items and reliability or validity tests are typically not conducted (e.g., the Computer Experience Questionnaire developed by Lee, 1986). In addition, the lack of standardized experience questionnaires limits generalizability across HCI studies. Yet, in spite of the widespread use of Windows-based applications, no questionnaires were found pertaining specifically to a Windows environment. One would thus expect that if a general computer experience questionnaire that was reliable, valid, and able to be administered quickly were developed, it would be readily adopted by HCI researchers and practitioners.

    1.2. Questionnaire Reliability

    A fundamental requirement of measuring instruments, such as questionnaires, is their reliability, or the "extent of unsystematic variation in scores of an individual on some trait when that trait is measured a number of times" (Ghiselli, Campbell, & Zedeck, 1981, p. 91). More simply, reliability is a quantitative assessment of an instrument's consistency (Lewis, 1995). Measures that are more consistent produce a higher reliability coefficient. Although we can never identify all of the variables that affect the variance in a measuring device, it is important that the device account for a significant amount of this variance. Statistically, reliability is important because tools that do not measure consistently from one administration to another will result in high variability and, consequently, insignificant results. More important, tests


    with low reliability cannot be tested for validity (the extent to which the questionnaire measures what it claims to measure), resulting in a potentially useless measuring instrument. Thus, for any questionnaire to be effective, it is essential that it first be reliable.

    1.3. Factor Analysis

    A factor analysis is a statistical procedure that examines correlations among variables in order to identify underlying factors (clusters of related variables). The procedure is often used to group a large number of variables into a smaller number of factors. To ensure stable factor estimates, at least 5 participants per item are recommended (Gorsuch, 1983; Nunnally, 1978; Stevens, 1986).

    When a questionnaire is developed to measure a particular skill, such as computer experience, that skill is often composed of several subfactors. A factor analysis can be used to assist in systematically determining which subfactors are being measured by the questionnaire. Then, any questions that are unrelated or not significantly related to the construct being measured can be eliminated, thereby minimizing administration time.

    There are several ways to determine the number of factors underlying a questionnaire. For instance, many statistics programs are capable of calculating eigenvalues. Eigenvalues are the standardized variances associated with a particular factor. The number of eigenvalues greater than 1.0 represents the number of factors represented in a questionnaire (Weiss, 1971). A more effective means of determining factor loadings is through the use of a discontinuity analysis (Coovert & McNelis, 1988). A discontinuity analysis involves plotting the eigenvalues in order to get a graphical representation, known as a scree plot, of the factor data. The point at which the line becomes discontinuous indicates the appropriate number of factors for the analysis (Cliff, 1987). In the event that there is more than one place on the graph that qualifies as a point of discontinuity, a person familiar with the constructs under investigation should choose between the qualifying points.
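The eigenvalue-counting step of the Kaiser criterion can be sketched numerically. The data below are synthetic, with two latent factors planted deliberately; NumPy is assumed to be available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 80 participants x 6 items, where items 0-2
# share one latent factor and items 3-5 share another (illustrative
# data only; no real questionnaire is reproduced here).
f1 = rng.normal(size=(80, 1))
f2 = rng.normal(size=(80, 1))
items = np.hstack([
    f1 + 0.5 * rng.normal(size=(80, 3)),
    f2 + 0.5 * rng.normal(size=(80, 3)),
])

# Eigenvalues of the item correlation matrix, sorted descending.
# Plotting these in order gives the scree plot described above.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]

# Kaiser criterion: retain factors with eigenvalues above 1.0.
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)  # → 2, matching the two planted factors
```

With well-separated factors the eigenvalues drop sharply after the true factor count, which is exactly the discontinuity the scree plot makes visible.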

    1.4. Summary

    The objectives of this article are to (a) develop a Windows computer experience questionnaire that requires little administration time, (b) determine the reliability of the questionnaire, and (c) determine the subfactors measured by the questionnaire.

    2. METHOD

    2.1. Participants

    Eighty-two students (49 men and 33 women) recruited from introductory psychology and engineering courses participated in this study. Their ages ranged from 17


    to 50 years with a mean age of 26.07 years. The participants had various levels of computer experience. They received extra credit in their courses for participating in the study.

    2.2. Materials

    The WCEQ was developed with the aid of an SME. This individual was responsible for upgrading Windows systems and supporting applications, helping employees solve Windows-based problems, identifying novice and expert computer users, and training employees in Windows and Windows-based software for a nationwide organization. As a result of a series of interviews with this SME, 14 Windows-related questions, specifically concerning the Windows 3.1 operating system, were submitted for the WCEQ evaluation. The questions gauged frequency of computer and manual use, subjective ease of use, variety of applications used, and knowledge of specific Windows functionality. Currently, scores on the WCEQ are dependent on question format. For instance, yes-no and fill-in-the-blank formats were awarded 1 point for each "yes" or correct answer, whereas questions relying on a "check all that apply" format awarded points incrementally (e.g., 1 point for each checked response). A total score is assessed by adding up all of the points awarded. Additional research is currently being performed to devise a scoring technique based on weighted factors.
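The unweighted scoring rule just described can be sketched as follows. The item formats and answers are hypothetical stand-ins, not the actual WCEQ items:

```python
# A minimal sketch of the unweighted WCEQ scoring rule: 1 point per
# "yes"/correct answer, and 1 point per checked option on
# check-all-that-apply items. Formats and responses are invented.
def score_wceq(responses):
    """responses: list of (format, answer) pairs."""
    total = 0
    for fmt, answer in responses:
        if fmt in ("yes-no", "fill-in"):
            total += 1 if answer else 0   # 1 point if yes / correct
        elif fmt == "check-all":
            total += len(answer)          # 1 point per checked box
    return total

# Example: one "yes", one missed fill-in, four checked applications.
print(score_wceq([
    ("yes-no", True),
    ("fill-in", False),
    ("check-all", ["word processor", "spreadsheet", "e-mail", "games"]),
]))  # → 5
```

A weighted variant, as the authors propose, would replace the per-item increments with factor-based weights.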

    A demographics sheet provided participants with information on the purpose of the study and asked for their age, sex, and occupation.

    2.3. Procedure

    Participants were instructed to complete the demographics questionnaire and a paper-and-pencil version of the WCEQ. Participants were instructed to work at their own pace and told that the WCEQ would take approximately 5 min to complete. The questionnaire administrator was available for questions at all times. One week later participants were asked to return to fill out the WCEQ questionnaire again.

    3. RESULTS

    3.1. Reliability

    First, the WCEQ total scores were determined for each administration of the questionnaire. Scores on the WCEQ ranged from 0 (indicative of no Windows computer experience) to 32, with a mean of 17.17 (SD = 7.58). The highest possible score was 39.

    Preliminary internal consistency analyses were then conducted on the WCEQ. This analysis indicated that Question 2, which was related to ease of use, was not significantly correlated with the total score; therefore, this test item was dropped


    from the scale, resulting in 13 items. The remaining items all correlated significantly (p < .05) with the final score (see Table 1). Item-total correlations ranged from 0.18 to 0.91, with an average item-total correlation of 0.52. These item-total correlations suggest that the instrument is homogeneous, thus measuring a single construct. This represents moderate evidence of construct validity (Guion, 1980). The test-retest reliability was significant (r = .97, p < .0001).

    The WCEQ demonstrated high internal consistency (Cronbach α = 0.74, p < .05). Item intercorrelations are presented in Table 2. An analysis of this matrix, which also includes the total scores, reveals that the interitem correlations ranged from -0.06 to 0.68. Of the 91 nonredundant intercorrelations, 4 were negative and nonsignificant, 35 were positive and nonsignificant, and 52 were positive and significant at or beyond the .05 level.
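Coefficient alpha of the kind reported here can be computed directly from an item-score table. A minimal sketch with invented binary item scores (not the study's response data):

```python
from statistics import pvariance

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
# of total scores). The item scores below are illustrative only.
def cronbach_alpha(items):
    """items: list of per-item score lists, one list per item,
    all covering the same respondents in the same order."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]       # per-respondent totals
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

scores = [
    [1, 0, 1, 1, 0, 1],   # item A across six respondents
    [1, 0, 1, 1, 1, 1],   # item B
    [0, 0, 1, 1, 0, 1],   # item C
]
print(round(cronbach_alpha(scores), 2))  # → 0.81
```

Items that covary strongly drive the total-score variance up relative to the summed item variances, which is what pushes alpha toward 1.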

    3.2. Factor Analysis

    The 13 WCEQ items were factor analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows. The factor structure was extracted using the principal-components procedure and rotated using the varimax procedure. The iteration process was stopped at four factors, applying the commonly used Kaiser criterion, which involves retaining all factors with eigenvalues of 1.0 or greater (Weiss, 1971). Figure 1 illustrates the scree plot (plot of eigenvalues for all possible factors). Parsimonious factor solutions are separated by negatively sloped sections of this curve (Thurstone, 1947). Although this is an imprecise measure, it is commonly used as a process for determining the number of factors constituting a parsimonious solution. The Kaiser criterion indicates a four-factor solution; however, the scree plot shows reasonable discontinuity points could be made at the third, fourth, or fifth factors. The four-factor solution was selected for further investigation.
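The varimax rotation step can be sketched with the standard SVD-based varimax algorithm. This is a generic sketch, not SPSS's implementation, and the loading matrix below is invented rather than the WCEQ solution:

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    """Varimax rotation of a factor-loading matrix: find an
    orthogonal rotation that maximizes the variance of the squared
    loadings within each column (the common SVD-based iteration)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (1.0 / p) * L @ np.diag(np.sum(L**2, axis=0))))
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

# Illustrative unrotated loadings: 4 items on 2 components
# (hypothetical numbers, not the values in Table 4).
A = np.array([[0.7, 0.3], [0.6, 0.4], [0.3, -0.7], [0.4, -0.6]])
rotated = varimax(A)

# Rotation is orthogonal, so each item's communality (row sum of
# squared loadings) is unchanged; only the axes move.
print(np.allclose((A**2).sum(axis=1), (rotated**2).sum(axis=1)))  # → True
```

This invariance of communalities under rotation is why Table 3's communality column is unaffected by the choice of rotation.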

    The four-factor solution accounted for 67.2% of the variance in the 13 items. The factor analysis also revealed that the item communalities, which indicate the portion of an item's variance accounted for by the factor solution, resulted in a range of .45

    Table 1: Means, Standard Deviations, and Item-Total Correlations for the WCEQ Items

    No.  WCEQ Item                              M      SD    Item-Total r
     1   Used Windows?                          .95    .22   .4817
     3   Taken Windows courses?                 .26    .44   .2938
     4   Taken Windows applications courses?    .37    .48   .1846
     5   Frequency of manual use                .22    .42   .3439
     6   Frequency of online help use           .34    .47   .3125
     7   Total hr of Windows use               1.77   1.17   .7747
     8   Hr per week Windows use               1.96    .95   .7833
     9   Types of applications used            5.57   3.27   .91
    10   Functions can perform                 4.90   2.43   --
    11   What is a computer icon?               --     .48   --
    12   What does Control-Esc do?              .12    .33   .4032
    13   What does Alt-Esc do?                  .07    .26   .3276
    14   What does Alt-Tab do?                  .21    .41   .5458

    Note. -- = value illegible in the source scan.


  • Table 2: Correlation Matrix of 13 WCEQ Items and Total Score

    Only the first column of the matrix (correlations with WCEQ1) is legible in the scan: WCEQ3 = 0.13, WCEQ4 = 0.17, WCEQ5 = 0.12, WCEQ6 = 0.16, WCEQ7 = 0.31*, WCEQ8 = 0.41*, WCEQ9 = 0.39*, WCEQ10 = 0.46*, WCEQ11 = 0.31*, WCEQ12 = 0.08, WCEQ13 = 0.06, WCEQ14 = 0.12; item-total r = 0.48*.


    to .84. The final factor analysis results, including item communalities, number of factors, eigenvalues, and percentage of variance accounted for, are reported in Table 3.

    The factor loadings are presented in Table 4. Six of the 13 questionnaire items (WCEQ1 and WCEQ7-WCEQ11) loaded on the first factor extracted. These items dealt with general Windows experience, including hours spent working in a Windows environment, number of Windows applications used, and knowledge of what an icon is. Three questionnaire items (WCEQ12-WCEQ14) loaded on the second factor extracted. These items involved advanced Windows knowledge, particularly applications of key shortcut functions (Ctrl-Esc, Alt-Esc, and Alt-Tab). Two questionnaire items (WCEQ3 and WCEQ4) loaded on the third factor extracted. These items were related to exposure to Windows courses or Windows applications

    FIGURE 1: Scree plot of eigenvalues for the WCEQ factors.

    Table 3: Final Factor Analysis Statistics for the Four WCEQ Components

    (Columns: WCEQ Item, Communality, Factor, Eigenvalue, % Cumulative Variance; the numeric entries are illegible in the source scan.) Items listed: Used Windows?; Taken Windows courses?; Taken Windows applications courses?; Frequency of manual use; Frequency of online help use; Total hr of Windows use; Hr per week Windows use; Types of applications used; Functions can perform; What is a computer icon?; What does Control-Esc do?; What does Alt-Esc do?; What does Alt-Tab do?


    original reliability evaluation, however, there was concern that the 1-week test-retest interval may have been too short, resulting in spuriously high reliability coefficients. Therefore, a split-half test-retest reliability analysis was conducted as a more robust test of the questionnaire's consistency. Eighteen undergraduate industrial engineering students completed each half of the WCEQ, which were administered 6 weeks apart. The split-half test-retest reliability was significant (r = .66, p < .005). These results compared favorably with the original reliability results, thus indicating that the WCEQ is a reliable measuring instrument.

    The item-total score correlations indicated that all questions except the question relating to ease of use were significantly correlated with the total score and thus were effective in reliably gauging Windows computer experience. The questionnaire had acceptable internal consistency reliability. The positive and significant interitem correlations indicate that the questions are not completely independent. The factor analysis further defined the underlying related variables. In general, items that were highly correlated loaded onto the same factor.

    The factor analysis data support the multidimensionality of the WCEQ. In its current form, the WCEQ is composed of four components that represent different sources of variance and different components of computer experience in a Windows environment. The four components focus on the following:

    1. General experience in a Windows environment.
    2. Advanced experience in a Windows environment, particularly familiarity with and the use of key shortcut functions.
    3. Formal instruction or course work in Windows applications.
    4. Reliance on help, using manuals and help hypertext.

    To the extent that these four components are compatible with the overall purpose of the WCEQ, these components should be retained. However, a closer, more intuitive look at the resultant factors showed that, as a construct, "reliance on help, using manuals and help hypertext" is not related to an individual's Windows computer experience level. Rather, it is more a reflection of an individual's learning style. Statistical support for the omission of this item is found in (a) the scree plot, which showed that legitimate cutoff points could be made at the third, fourth, or fifth factors; (b) its contribution to the total variance, which was only 8.2%; and (c) the low coefficient alpha of 0.38, reported in Table 5. The three remaining components, general experience, advanced experience, and formal instruction, contributed 59.0% of the variance. In addition, the high coefficient alphas (0.75, 0.73, and 0.79, respectively) and acceptable reliabilities (0.99, 0.54, and 0.29, respectively) for these factors indicate that all three of these factors contribute significantly to the objectives of the WCEQ. These factors should thus remain and be scored and reported as separate sections of the WCEQ. Furthermore, an improved version of the WCEQ would include additional items for the advanced experience and formal instruction factors. There are currently only three and two questions, respectively, to evaluate these factors. This study should be replicated with this revised version of the WCEQ to ensure its reliability.


    It should be noted that the current data are based on college students. Different participant pools, particularly job incumbents, should be used in future studies to ensure that the reliability holds across diverse user groups. We are currently performing validity studies of the WCEQ using a revised version of the questionnaire. In addition, at the time of development, Windows 95 was only recently announced. Therefore, our interviews with the SMEs targeted operations pertaining to a Windows 3.1 environment. Although many of the questions found on the WCEQ apply to both Windows 3.1 and Windows 95 operating systems, a questionnaire pertaining specifically to Windows 95 environments is needed. An updated WCEQ questionnaire is currently being developed to reflect the changes in Windows 95.

    5. CONCLUSIONS

    The three primary objectives of this study were met. A Windows computer experience questionnaire was developed that requires little administration time. The results show that the WCEQ has internal consistency and is a reliable instrument for measuring Windows-based computer experience. As initially designed, the WCEQ consists of four main subfactors. Three of these factors, general Windows experience, advanced Windows experience, and formal instruction, were considered to be directly related to the quantification of computer experience. Questions related to these factors should thus remain part of the WCEQ, and each factor should be scored separately. The fourth factor, reliance on help functions, was considered to be more indicative of learning style than actual user experience. Based on this subjective assessment and the low statistical significance of this factor, it was recommended that questions related to this factor be removed from the questionnaire. Additional reliability studies, development of a weighted scoring technique, further evidence of the content and criterion validity of the instrument, as well as a new questionnaire pertaining to the Windows 95 environment are called for and will be addressed in future studies.

    REFERENCES

    Cassel, R. N., & Cassel, S. L. (1984). Cassel Computer Literacy Test (CMLRTC). Journal of Instructional Psychology, 13, 3-9.
    Cheng, T. T., Plake, B., & Stevens, D. J. (1985). A validation study of the computer literacy examination: Cognitive aspect. AEDS Journal, 18, 139-152.
    Cliff, N. (1987). Analyzing multivariate data. San Diego, CA: Harcourt Brace Jovanovich.
    Coovert, M. D., & McNelis, K. (1988). Determining the number of common factors in factor analysis: A review and program. Educational and Psychological Measurement, 48, 687-693.
    Eberts, R. E. (1994). User interface design. Englewood Cliffs, NJ: Prentice Hall.
    Egan, D. E. (1988). Individual differences in human-computer interaction. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 543-568). Amsterdam: North-Holland.
    Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. New York: Freeman.
    Gorsuch, R. L. (1983). Factor analysis. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
    Guion, R. M. (1980). On trinitarian doctrines of validity. Professional Psychology, 11, 385-398.
    Kay, D. S., & Black, J. B. (1990). Knowledge transformations during the acquisition of computer expertise. In S. P. Robertson, W. Zachary, & J. B. Black (Eds.), Cognition, computing, and cooperation (pp. 268-303). Norwood, NJ: Ablex.
    LaLomia, M. J., & Sidowski, J. B. (1990). Measurements of computer satisfaction, literacy, and aptitudes: A review. International Journal of Human-Computer Interaction, 2, 231-253.
    Lee, J. A. (1986). The effects of past computer experience on computerized aptitude test performance. Educational and Psychological Measurement, 46, 727-733.
    Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7, 57-78.
    Montag, M., Simonson, M. R., & Maurer, M. M. (1984). Standardized Test of Computer Literacy (STCL). Ames: Iowa State University Research Foundation.
    Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
    Poplin, M. S., Drew, D. E., & Gable, R. S. (1984). Manual for the computer aptitude, literacy, and interest profile. Austin, TX: PRO-ED.
    Prumper, J., Zapf, D., Brodbeck, F. C., & Frese, M. (1992). Some surprising differences between novice and expert errors in computerized office work. Behaviour and Information Technology, 11, 319-328.
    Stevens, J. (1986). Applied multivariate statistics for the social sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
    Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.
    Weiss, D. J. (1971). Further considerations in applications of factor analysis. Journal of Counseling Psychology, 18, 85-92.

