
Clinical Method

The Staggers Nursing Computer Experience Questionnaire

Nancy Staggers

WHEN STUDYING nursing informatics concepts in the clinical setting, researchers may need to assess nurses' previous computer experience. It is a study variable that may confound or help explain research findings. For instance, an investigator may be interested in the amount of time nurses spend documenting in a clinical information system, the amount of learning transferred between two different information systems, the impact of various computer screen designs on nurses' information detection abilities, or nurses' competency and concomitant clinical decision making as they use a new, graphically based information system. One of the potential confounding variables for these studies is nurses' previous experience. Despite the need for an instrument that measures computer experience, a representative review of several literature databases produced no instruments with reported reliability and validity. This paper describes the development and psychometric testing for an instrument that measures nurses' perceived computer knowledge and experience.

BACKGROUND

Previous Computer Experience Measures

A number of researchers in the past measured computer experience using self-designed instruments (Table 1). These instruments were located from a bibliographic database search in human-computer interaction, screen design, nursing informatics, and educational areas. As shown in Table 1, there was little consistency in how computer experience was conceptualized and measured. Except for Greer (1986) and Shields (1984), authors did not explicitly list any categories that might suggest a conceptual definition for computer experience.


Some authors considered computer experience related to time; for example, how long subjects used computers (e.g., Coll & Wingertsman, 1990) or how frequently subjects used computers (e.g., Sullivan, 1989; Vincente, Hayes, & Williges, 1987). Other authors believed computer experience had to do with the kinds of applications subjects used (e.g., Borgman, 1984; Parks, Damrosch, Heller, & Romano, 1986). Others believed that knowledge gained in formal course work or owning/having access to computers defined the computer experience concept. Less frequently, computer experience was defined as reading computer magazines, working professionally with computers, or having programming skills. More frequently, formal education or knowledge, computer application use, owning a computer, and a time component were identified. Many authors combined items from several of these categories, but overall there was no agreement or rationale about which categories defined the concept domain.

There has been little discussion about the psychometric properties of computer experience instruments. Authors, for the most part, provided little information about instrument development. A few authors treated computer experience as a simple demographic variable (e.g., Loyd & Gressard, 1984; Vincente et al., 1987). Other authors developed more complex measures but gave little information about the psychometric properties for generated instruments (e.g., Schwirian, Malone, Stone, Nunley, & Francisco, 1989). None of the authors reported instrument validity assessments. Only Lee (1984) reported instrument reliability. Using Cronbach's alpha, she obtained a .35 reliability coefficient for her 5-item instrument. This is an extremely low level of internal consistency reliability according to Waltz, Strickland, and Lenz (1984). The majority of authors used dichotomous yes/no scaling formats. Another common format was a 3-to-5-point Likert-type scale, but the number of scale categories and word anchors for these Likert scales were nearly as variable as the number of authors. Less frequently, subjects were asked to write down specific information.


Table 1. Measures of Computer Experience

[Table 1 summarizes the measures reviewed (Borgman, 1984; Chin, Diehl, & Norman, 1988; Coll & Wingertsman, 1990; Chu & Spires, 1991; Gassert, 1990; Greer, 1986; Hayek & Stephens, 1989; Kacynski & Roy, 1984; Koohang, 1989; Lee, 1984; Loyd & Gressard, 1984, 1985; Mackowiak, 1989; Malaney & Thurman, 1989-90; Parks et al., 1986; Schwirian et al., 1989; Shields, 1984; Siann et al., 1990; Sullivan, 1989; Van Dover & Boblin, 1991; Vincente et al., 1987). For each measure, the table lists the categories used to define computer experience (e.g., owning a computer, access to a computer, application uses, paid computer work, frequency of use, length of time, formal education or knowledge, programming, and reading computer magazines), the number of items, and the scaling format (e.g., yes/no, none to extensive, number of hours per week, number of courses).]

Abbreviation: WP, word processing.


Researchers analyzed individual items separately during data analysis and did not discuss whether summed instrument scores might be appropriate. Clearly, more rigor is needed in defining and measuring computer experience.

Defining Computer Experience

Computer experience is not unidimensional (Potosnak, 1985) but is a combination of computer knowledge, computer use, and length of time using a particular application. As did Potosnak, many researchers might think of the length of time using an application as a subcategory of computer use. However, Shneiderman questioned its validity as an indicator of computer experience. Because computer users may repeat one activity over a long period, length of time demonstrates neither depth nor breadth of application use. For instance, a nurse may use one application, basic word processing, for 10 years. By assessing only this one dimension, length of time may be a superficial indicator for total computer experience (B. Shneiderman, personal communication, March 15, 1991). Van Muylwijk, Van Der Veer, and Waern (1983) believed computer experience was an "irreversible" phenomenon composed of five areas: (a) presystem knowledge such as typing skills; (b) computer programming languages, such as C++; (c) computer programming with theoretical and practical exercises; (d) system experience such as use frequency and the depth of understanding of the system; and (e) experience with particular tasks on a computer.

In the instrument described in this article, the focus is on computer experience rather than precursors to experience such as typing skills. Van Muylwijk et al. (1983) seemingly defined computer experience for computer science professionals. Nursing computer experience was conceptualized with less emphasis on programming experience and more emphasis on applied knowledge and use.

DEVELOPMENT OF A NEW COMPUTER EXPERIENCE MEASURE

The purpose of this new instrument, titled the Staggers Nursing Computer Experience Questionnaire (SNCEQ), is to determine nurses' computer experience. Because most nurses are female, the instrument was assessed using this population of nurses. Later assessments will include male nurses. The instrument was divided into a scored applications section and an unscored descriptive section.

Theoretical Basis of the Instrument

The instrument development and psychometric testing techniques described by Waltz et al. (1984) formed the theoretical basis for instrumentation in this work. Waltz and colleagues caution researchers to identify and employ a measurement framework during instrument development. Two frameworks are possible, norm-referenced and criterion-referenced. A norm-referenced framework is used when one needs to evaluate the performance of a subject compared with a norm or comparison group (Waltz et al., 1984). A criterion-referenced framework is used when one wants to evaluate whether a subject has acquired a set of predetermined behaviors. Because nurses' computer experience would be compared with others and there are no predetermined behaviors for computer experience, the norm-referenced framework is more appropriate.

The theoretical basis for the application section of the measure was innovation adoption (Rogers, 1983). Innovation adoption is the process through which an individual passes from first knowledge of a new idea, practice, or object to forming an attitude toward the innovation and ultimately deciding to use it. The outcomes of this process are attitudes toward, knowledge about, and use of an innovation. Attitudes toward computers have been explored in depth elsewhere and form a more subjective component of innovation adoption; knowledge and use better correspond to behavioral experience with an innovation. Consistent with the literature from human-computer interaction areas, these two categories form the basis for the application section of the instrument in that nurses are asked to rate their knowledge and use of various applications.

Computer experience categories for the scored portion of the instrument were identified through a literature review and by speaking with informatics faculty and students. Categories were computer knowledge, computer application use, participation in informatics role activities, and knowledge about informatics role activities (Table 2).

Nurses' computer knowledge may be described as perceived knowledge of general and hospital information systems (HIS) computer applications. Application use is nurses' self-reported use of general and HIS computer applications.


Table 2. Observable Indicators

Computer use: Self-reported use of 20 different computer and HIS applications.
Computer knowledge: Perceived knowledge about 20 different general computer and 12 HIS applications.
Informatics role participation: Self-reported participation in 5 types of role activities.
Informatics role knowledge: Perceived knowledge of 5 types of role activities.

Informatics role activities are nurses' self-reported knowledge about and participation in computer-related nursing activities typical of an information systems nurse employed in a hospital.

Nurses' computer experience was operationally defined as an individual's total score for four subscales measuring nurses' self-reported general and HIS computer knowledge, general and HIS computer use, and participation in and knowledge about informatics role activities. Higher scores on a subscale reflect greater computer experience in that activity, and lower scores represent less experience.
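To make this operational definition concrete, the sketch below (Python, with invented item lists and ratings rather than the actual SNCEQ items) shows how 0-to-4 item ratings might be summed into subscale scores and a total score.

# Illustrative scoring of SNCEQ-style subscales (hypothetical data, not the
# actual instrument items). Each item is rated 0 (none) to 4 (extensive).
from typing import Dict, List

def score_subscales(ratings: Dict[str, List[int]]) -> Dict[str, int]:
    """Sum item ratings within each subscale, then add a total score."""
    scores = {name: sum(items) for name, items in ratings.items()}
    scores["total"] = sum(scores.values())
    return scores

# One hypothetical respondent: four subscales, each a list of 0-4 ratings.
respondent = {
    "computer_use":       [4, 3, 0, 2, 1],   # general and HIS application use
    "computer_knowledge": [3, 3, 1, 2, 0],   # general and HIS application knowledge
    "role_participation": [0, 0, 1, 0, 0],   # informatics role activities
    "role_knowledge":     [1, 0, 2, 0, 0],
}

print(score_subscales(respondent))
# {'computer_use': 10, 'computer_knowledge': 9, 'role_participation': 1,
#  'role_knowledge': 3, 'total': 23}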

A self-report format was chosen for computer use and knowledge because behavioral observations of nurses' current and past computer use are not feasible for many researchers. Computer use in past instruments has been defined as frequency of use (e.g., number of hours per week), amount of use (none to extensive), and/or how long in the past computers were used (e.g., less than one week to greater than one year). Some authors asked respondents to name their experience within a specified period such as the past 6 months. As mentioned earlier, length of time using computers is not an adequate measure of experience. Subjects may use computers once a week over a year's time, and others may spend 8 hours a day using computers over the same year. If asked how long they had used computers, both would answer one year; yet, clearly these experience levels are not equivalent. Even among subjects who spend equivalent times using a computer, option use and knowledge levels differ.

Questionnaire

Observable indicators were explicated on a paper-and-pencil self-report instrument using a 5-point Likert-type scale ranging in value from 0 to 4. Zero represents no use, computer knowledge, role participation, and role knowledge, and 4 represents extensive use, computer knowledge, or participation (Table 3).

Frequency of use captures another dimension of computer use but can eliminate some aspects of past experience. For example, if a nurse used a spreadsheet extensively 2 years ago and a researcher only asked subjects to describe spreadsheet use, subjects would puzzle about how to answer the item. The resulting instrument score may not reflect past computer experience.

Table 3. Sample Questionnaire Items

Each computer application item is rated 0 to 4 on two scales: Past or Present Computer Use (0 = none, 4 = extensive) and Computer Knowledge (0 = none, 4 = considerable).

Computer Applications
1. Writing reports, papers, documents, or other text (word processing)
2. Sending messages to others (electronic mail)

Hospital Information System Applications
1. Looking up patients' test results (results reporting)
2. Charting patient data such as vital signs or medications (data entry)

Each role activity item is rated 0 to 4 on two scales: Past or Present Role Participation (0 = none, 4 = extensive) and Knowledge (0 = none, 4 = considerable).

Role Activity
1. Computer system design
2. Computer system selection


If one asks about amount of time, determining a cutoff point is difficult. Amount of use is certainly pertinent, but scaling issues are problematic. Is amount of use once per day, number of hours per week, or a perceived rating such as none to extensive? There is no clean conceptual answer to this question. However, computer users likely will not be able to estimate accurately the number of hours of application use per week, and application usage is extremely variable, depending on work requirements. Perhaps the best alternative is to have nurses rate their perceived amount of use on a Likert-type scale ranging from none to extensive with no time boundaries.

Computer knowledge is not a simple concept, and there are few discussions about its conceptualization in the literature. Logically, one might think of computer knowledge as formal computer education or course work and attained knowledge that might result from course work or independent reading. How to capture respondents' formal education and attained knowledge is another matter. Although respondents may attend formal courses, they may have little knowledge of various applications. Therefore, respondents are asked to rate their perceived knowledge of applications. One could ask respondents whether they have had formal course work for each application, but this would make no sense for HIS use. In clinical care settings, there may be informal or formal training sessions for HIS applications. For scaling and conceptual consistency, perceived HIS computer knowledge is represented by the same scale ranging from none to extensive. Formal course work is used for descriptive information only.

Item Construction

Questionnaire item construction was a synthesis of ideas from different sources. General computer applications were determined by examining computer applications in Ball and Hannah (1984), Mikuleky and Ledford (1987), Saba and McCormick (1986), Gassert (1990), computer experience measures, and speaking with informatics faculty and students. General applications were believed to cross the four areas authors commonly used to organize nursing and nursing computer applications: nursing education, administration, clinical practice, and research. Therefore, types of software applications such as word processing, bibliographic retrieval, and statistical analyses were listed instead. Sample items are listed in Table 3. Many nurses may use these applications but may be unaware of their general classification type, so both the classification and an example of how one might use the application were included: for instance, "Creating pictures, slides, or overhead displays (computer graphics)."

Typical HIS applications were determined by categories in Ball and Hannah (1984), Mikuleky and Ledford (1987), Saba and McCormick (1986), and by speaking with clinically based informatics students. General HIS applications were chosen, such as results retrieval, entering/modifying doctors' prescriptions, charting nursing assessments, entering/modifying patient acuities, and accessing unit/patient information. Very specific applications such as contacting poison control information in the emergency department were believed to be subsumed in more general data access categories applicable to a wider range of general and HIS computer applications. Sample items are listed in Table 3.

The items in the role activities category were derived primarily from an unpublished questionnaire developed by Gassert (1990) defining information system nurse activities. The categories include knowledge about and participation in system design, selection, implementation, evaluation, and teaching computer classes. Sample items are listed in Table 3.

Descriptive Section

A section was included in the questionnaire to delineate the number of formal computer-related courses taken by nurses, reasons they might not have used computers in the past, and an overall computer experience rating. In past computer experience measures, the number of computer-related courses was typically polled. Naming the specific courses, such as programming in "C," is not requested in order to reduce response burden and lessen the chance the information would be redundant with knowledge application categories. Last, nurses list possible reasons they might not use computers often, checking as many reasons as are applicable. Sample reasons include no computer at work, computers make me anxious, never had a computer course, and no need for computers at work.

To provide a visual assessment of subjects' overall computer experience, a scale was constructed to allow subjects to rate their perceived computer experience level from novice to expert on a 7-point modified visual analogue scale.


The scale is shown below:

Please rate your level of computer experience: novice . . . . . . . . . . . . . . . . . . . . . expert

PILOT TESTING

Subjects for the reliability and construct validity assessments were recruited from a graduate-level nursing course. During a regularly scheduled class meeting, the investigator asked for volunteers to participate in the study. Of 78 nurses in the course, 29 volunteered for the study. Of these, 25 were eligible to participate (the other four were male volunteers). Twenty-four completed both sessions of the pilot study. The mean age of subjects was 31, and they had an average of 11.6 years in nursing. The majority of subjects had a BSN (n = 21); three indicated they had already acquired master's degrees.

SNCEQ Reliability Estimate

Of the types of reliability assessments appropriate for norm-referenced instruments, test-retest reliability was deemed most suitable for the SNCEQ. The most common measure, coefficient alpha, was less appropriate because a score on one SNCEQ item may not necessarily predict a similar score on other items. Using computer graphics, for example, does not predict whether subjects use HIS results reporting applications. Test-retest reliability is used when one wants to elicit the consistency of the responses for one group of respondents during two different sessions (Waltz et al., 1984). The instruments were coded between pilot study sessions by asking respondents for their birthdates, and responses were matched for the two test administrations. Computer experience was believed to be stable during a 2-week period, but on the second administration, respondents were asked whether they had attended a computer class or learned a new computer application since they first completed the questionnaire. One student indicated that she attended a class and learned a new application. Her data were dropped from test-retest reliability testing. Consequently, Pearson r was computed for 24 respondents' scores for each of the subscales. A correlation above .70 was considered adequate (Polit & Hungler, 1983). The results for the SNCEQ test-retest reliability estimates are in Table 4.


Table 4. SNCEQ Reliability Estimates (N = 24)

Scale                 Pearson's r
Overall               .94
Use                   .93
Computer knowledge    .90
Role participation    .91
Role knowledge        .90
Courses               .97

All estimates are adequate, and item analysis was not necessary. In addition, paired t tests were calculated to determine whether respondents' scores significantly differed from the first test administration to the second. No significant differences were found.
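As an illustration of this test-retest procedure, a minimal sketch in Python follows, assuming hypothetical paired subscale scores from two administrations; scipy's pearsonr and ttest_rel are used here purely for convenience and were not part of the original analysis.

# Test-retest reliability sketch: correlate subscale scores from two
# administrations and check for a systematic shift with a paired t test.
# The score vectors below are invented for illustration only.
import numpy as np
from scipy import stats

time1 = np.array([12, 30, 18, 25, 40, 22, 15, 33, 27, 19], dtype=float)
time2 = np.array([14, 28, 17, 26, 41, 20, 16, 35, 25, 18], dtype=float)

r, r_p = stats.pearsonr(time1, time2)        # stability coefficient
t, t_p = stats.ttest_rel(time1, time2)       # paired t test for a mean shift

print(f"test-retest r = {r:.2f} (p = {r_p:.4f})")
print(f"paired t = {t:.2f} (p = {t_p:.4f})")
# A coefficient above .70 would be considered adequate (Polit & Hungler, 1983),
# and a nonsignificant t suggests no systematic change between sessions.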

SNCEQ Validity Estimates

Both content and construct validity were examined for the SNCEQ. Content validity assesses whether items on an instrument adequately represent the intended content domain (Waltz et al., 1984). Content validity was estimated by asking seven nursing informatics experts to rate the relevance of each category and item to the concept of nursing computer experience using a 4-point rating scale: (a) not relevant, (b) somewhat relevant, (c) moderately relevant, and (d) very relevant. The scale was adapted from one in Waltz et al. (1984); the original option "quite relevant" was changed to "moderately relevant" because distinguishing between "quite" and "very" relevant might be difficult. Experts were actively employed in academic/service nursing informatics roles, had nursing informatics publications, and had doctoral degrees. Five of the seven experts returned usable questionnaires. One expert did not return the questionnaire, and the other returned a questionnaire with incomplete data.

Coefficient alpha was used to determine the extent of item agreement among judges. Waltz et al. (1984) outlined a technique to measure content validity using dichotomous categories. There is no guidance about assessing a 4-point rating scale. Although typically used to estimate reliability, coefficient alpha measures the correlation or agreement among items and may be used to estimate the agreement among judges as well. An estimation equal to or greater than .80 was considered acceptable (Waltz et al., 1984). The overall coefficient alpha was .672. However, there was only one item that two judges believed was not relevant: knowledge about computer-assisted software engineering tools.


The low alpha coefficient resulted from disagreement among judges about the degree of item relevancy rather than the overall relevance or nonrelevance of items. Therefore, a content validity index was calculated for dichotomous (relevant/not relevant) agreement among the five judges using the techniques outlined in Waltz et al. The results were adequate (Table 5).
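For readers who want to reproduce this kind of inter-judge analysis, the sketch below (Python, with invented ratings) computes coefficient alpha treating the five judges as items and then a pairwise content validity index after dichotomizing the 4-point relevance ratings; treating ratings of 3 or 4 as "relevant" is an assumption here, since the article does not spell out the cut point.

# Inter-judge agreement sketch: coefficient alpha across judges' 4-point
# relevance ratings, then a pairwise content validity index (CVI) after
# dichotomizing ratings (3 or 4 = relevant). All ratings below are invented.
from itertools import combinations
import numpy as np

# rows = questionnaire items, columns = judges; 1 (not relevant) to 4 (very relevant)
ratings = np.array([
    [4, 4, 3, 4, 4],
    [3, 4, 4, 3, 4],
    [4, 3, 4, 4, 3],
    [2, 4, 3, 4, 4],
    [4, 4, 4, 4, 4],
    [3, 2, 4, 3, 4],
], dtype=float)

def cronbach_alpha(data: np.ndarray) -> float:
    """Coefficient alpha with judges treated as 'items' (columns)."""
    k = data.shape[1]
    sum_of_item_variances = data.var(axis=0, ddof=1).sum()
    variance_of_totals = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_of_item_variances / variance_of_totals)

def pairwise_cvi(data: np.ndarray, cutoff: float = 3) -> dict:
    """Proportion of items that both judges in a pair rate as relevant."""
    relevant = data >= cutoff
    return {
        (i + 1, j + 1): float(np.mean(relevant[:, i] & relevant[:, j]))
        for i, j in combinations(range(data.shape[1]), 2)
    }

print(f"alpha across judges = {cronbach_alpha(ratings):.2f}")
print(pairwise_cvi(ratings))   # {(judge_i, judge_j): proportion both rated relevant}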

Experts' and pilot subjects' comments about the instrument were evaluated carefully. New items were added to the reasons why nurses may not use computers category: not enough time to use, not in my job description, and afraid of file/information loss. Other comments were general assessments of the instrument or not consistent with the instrument's theoretical definition. For example, one expert recommended including familiarity with research and evaluation literature as a computer experience category, a pertinent category for computer curricula perhaps, but not consistent with the current definition for nurses' computer experience. No new categories were added. Pilot subjects suggested formatting changes, such as expanding the space between rating scales.

Construct Validity

Construct validity is the degree to which an individual possesses a trait or quality presumed to be measured by the instrument (Waltz et al., 1984). The contrasted groups approach was used to determine construct validity for the SNCEQ. That is, a group of female nurses who attend informatics meetings and are engaged in nursing informatics activities would use and know more about computers than a group of female graduate nursing students. The investigator hypothesized that the informatics nurses would score higher on all SNCEQ subscales and the SNCEQ total score than would the sample of graduate nursing students.

Informatics nurses were asked to fill out the SNCEQ during a routine meeting of a nursing informatics group. Their participation was voluntary, and responses were anonymous. Instructions to the students and informatics nurses were very similar. Twenty-five instruments were distributed. Of the 25, 10 were ineligible to participate: two were salesmen, six others were nonnurses, and two were male nurses. All 15 eligible female nurses returned usable questionnaires. To create groups with equal numbers, 15 students were randomly selected from the original pilot sample to comprise the second group.

The two groups were not equivalent in three demographic characteristics. The informatics group was nearly 6 years older on the average (39.4 v 33.9), had 6 years more time in nursing (17.5 years v 11.3), and was more educated (11 informatics nurses had master's degrees). However, these characteristics would not necessarily be related to higher SNCEQ scores. The older the nurse, the less likely she would have been exposed to computers in her original educational preparation, and logically, one might argue that younger nurses could have more computer experience. Because many nursing work settings are not computerized, more nursing experience would not be consistently related to more computer experience. In fact, one might expect older nurses with more nursing experience to have less computer experience because computers are a relatively new phenomenon. Last, many nursing curricula include little computer instruction except limited functions such as statistical procedures and word processing, which graduate students might also have, so scores should not be significantly affected across all subscales purely by nurses' educational level.

Table 5. Content Validity Index for the SNCEQ

[Table 5 reports the content validity index for each pair of the five judges (pairs 1&2 through 4&5) within each category of the instrument (use, knowledge, role participation, role knowledge, courses, and reasons). Most pairwise values were 1.00.]


The SNCEQ subscale and total scores for both groups are located in Table 6.

To evaluate construct validity, a one-way analysis of variance was calculated to determine if the two groups significantly differed on the subscales and the total score. The following were compared in the two groups: (a) computer use subscale, (b) computer knowledge subscale, (c) role participation subscale, (d) role knowledge subscale, (e) overall SNCEQ scores, (f) novice-expert rating, and (g) total number of formal courses. The significance level for each of these was set at .10. Because each mean was tested twice, the alpha level was adjusted to .05 using the Bonferroni technique (Keppel, 1982). The results are presented in Table 7. The informatics nurses scored significantly higher than the students for all subscales, the total score (F(1,28) = 58.97, p < .0001), novice-expert rating (F(1,26) = 11.09, p < .0026), and formal courses (F(1,28) = 26.88, p < .0001). These results suggest beginning construct validity evidence for the SNCEQ.
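A minimal sketch of this contrasted-groups comparison follows (Python, with invented scores for two groups of 15); scipy's f_oneway stands in for the one-way analysis of variance, and the Bonferroni-adjusted alpha is applied as described above.

# Contrasted-groups construct validity sketch: one-way ANOVA comparing two
# groups on one subscale, with a Bonferroni-adjusted alpha. Scores are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
students    = rng.normal(loc=27, scale=9, size=15)    # hypothetical subscale scores
informatics = rng.normal(loc=65, scale=26, size=15)

f, p = stats.f_oneway(students, informatics)          # with two groups, F equals t squared
alpha = 0.10 / 2                                      # overall .10, adjusted for two tests per mean

print(f"F(1, 28) = {f:.2f}, p = {p:.4f}, significant at alpha = {alpha}: {p < alpha}")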

Novice/Expert Scale

There might be differences in the novice-expert ratings according to whether the scale was placed at the top or bottom of the instrument. Accordingly, the two different formats of the instrument were randomly distributed to pilot subjects. During data analyses, correlations were calculated using the novice-expert score and the total SNCEQ score. Then the two correlations were tested for differences.

Table 6. SNCEQ Subscale and Total Scores for Graduate Students and Informatics Nurses (N = 15)

Scale (Mean, SD, Range, Total Possible)

Personal computer use
  Students: 19.33, 7.47, 8-35, 80
  Informatics nurses: 33.20, 18.59, 1-88, 80

Hospital information system use
  Students: 7.73, 7.47, 0-23, 48
  Informatics nurses: 32.27, 11.97, 6-48, 48

Overall computer use
  Students: 27.07, 8.58, 12-43, 128
  Informatics nurses: 65.47, 26.20, 25-114, 128

Personal computer knowledge
  Students: 17.80, 8.70, 8-40, 76
  Informatics nurses: 36.93, 1.13, 9-77, 76

Hospital information system knowledge
  Students: 8.57, 5.91, 0-18, 48
  Informatics nurses: 37.87, 7.98, 16-48, 48

Overall computer knowledge
  Students: 26.27, 11.76, 12-52, 124
  Informatics nurses: 74.80, 24.73, 25-125, 124

Informatics role participation
  Students: .73, 2.43, 1-3, 20
  Informatics nurses: 15.60, 3.18, 7-20, 20

Informatics role knowledge
  Students: 1.13, 1.55, 0-5, 20
  Informatics nurses: 15.73, 4.15, 7-20, 20

Overall SNCEQ score
  Students: 55.20, 20.05, 24-92, 292
  Informatics nurses: 171.60, 55.18, 214, 292

(Descriptive items)
Total formal course work
  Students: 2.07, 2.43, 0-7
  Informatics nurses: 8.53, 4.17, 3-16

Novice-expert rating
  Students: 2.54, 1.45, 1-5, 7
  Informatics nurses: 4.53, 1.69, 1-7, 7


Table 7. SNCEQ Subscale and Total Score Construct Validity Testing for Students and Informatics Nurses (N = 30)

Scale                    SS           df     MS           F        p
Computer use             11059.20     1,28   11059.20     29.10    <.0001
Computer knowledge       17666.13     1,28   17666.13     47.11    <.0001
Role participation       1657.63      1,28   1657.63      206.71   <.0001
Role knowledge           1598.70      1,28   1598.70      162.97   <.0001
Overall SNCEQ score      101617.20    1,28   101617.20    58.97    <.0001
Novice-expert rating     27.71        1,26   27.71        11.09    <.0026
Formal courses           313.63       1,28   313.63       26.88    <.0001

The results were not significant for Time 1 (z(10) = .614, p > .05) or for Time 2 (z(9) = .657, p > .05). There is no difference whether the scale is placed at the top or bottom of the instrument. Because the correlations were slightly higher when the scale was at the bottom of the instrument, it is located there.
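Comparisons of two correlations such as this one are commonly carried out with Fisher's r-to-z transformation; the sketch below (Python, with invented correlations and subgroup sizes) shows one way such a comparison might be computed, although the exact procedure used in the original analysis is not detailed in the article.

# Sketch: compare two independent correlations (e.g., novice-expert rating vs.
# total score for the scale-at-top form and the scale-at-bottom form) using
# Fisher's r-to-z transformation. The r values and group sizes are invented.
import math

def fisher_z_compare(r1: float, n1: int, r2: float, n2: int) -> tuple:
    """Return (z, two-tailed p) for the difference between two independent r's."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transformation
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))      # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-tailed normal p
    return z, p

z, p = fisher_z_compare(r1=0.62, n1=12, r2=0.74, n2=12)
print(f"z = {z:.3f}, p = {p:.3f}")   # nonsignificant here, so placement makes no difference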

SUMMARY

This report described the development and initial psychometric testing for an instrument measuring nurses' perceived computer experience. The instrument's theoretical basis was psychometric theory and innovation adoption, specifically innovation use and knowledge. The scored portion of the instrument asks nurses to rate their past or present computer use and knowledge for general computer and HIS applications as well as participation in and knowledge about informatics role activities. A descriptive section of the tool asks nurses to indicate the number of formal courses and college-level computer/informatics courses they have taken. A novice-to-expert scale allows nurses to indicate their overall computer experience. Additionally, nurses list reasons, if any, that they might not use computers often. Content and construct validity were adequate for the instrument. Test-retest reliability indicated beginning reliability for the tool. The SNCEQ provides a reliable and valid measure for assessing nurses' past and present computer experience.

The instrument is available from the author.

REFERENCES

Ball, M.J., & Hannah, K.J. (1984). Using computers in nursing. Reston, VA: Reston Publishing Company.

Borgman, C.L. (1984). The user's mental model of an information retrieval system: Effects on performance (Doctoral dissertation, Stanford University, 1984). Dissertation Abstracts International, 45(1), 4A. (University Microfilms No. DA8408258)

Chin, J.P., Diehl, V.A., & Norman, K.L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. In E. Soloway, D. Frye, & S.B. Sheppard (Eds.), CHI '88 Conference Proceedings: Human Factors in Computing Systems (pp. 213-218). New York, NY: Association for Computing Machinery.

Chu, P.C., & Spires, E.E. (1991). Validating the computer anxiety rating scale: Effects of cognitive style on computer anxiety. Computers in Human Behavior, 7, 7-21.

Coll, R., & Wingertsman, J.C. (1990). The effect of screen complexity on user preference and performance. International Journal of Human-Computer Interaction, 2, 255-265.

Gassert, C. (1990). Demographic information for nursing informatics students. Unpublished instrument, University of Maryland at Baltimore.

Greer, J. (1986). High school experience and university achievement in computer science. AEDS Journal, 19, 216-225.

Hayek, L.M., & Stephens, L. (1989). Factors affecting computer anxiety in high school computer science students. Journal of Computers in Mathematics and Science Teaching, 19(1), 53-68.

Kacynski, K.A., & Roy, K.D. (1984). Analysis of graduate nursing students' innovation-decision process. In G.S. Cohen (Ed.), Proceedings of the Eighth Annual Symposium on Computer Applications in Medical Care (pp. 605-609). New York, NY: IEEE Computer Society Press.

Keppel, G. (1982). Design and analysis: A researcher's handbook (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Koohang, A.A. (1989). A study of attitudes toward computers: Anxiety, confidence, liking, and perception of usefulness. Journal of Research on Computing in Education, 22, 137-150.

Lee, J.A. (1984). The effects of past computer experience on computerized aptitude test performance. Educational and Psychological Measurement, 46, 727-733.

Loyd, B.H., & Gressard, C. (1984). The effects of sex, age, and computer experience on computer attitudes. AEDS Journal, 18(2), 67-77.

Loyd, B.H., & Gressard, C. (1985). Effect of gender and computer experience on attitudes toward computers. Journal of Educational Computing Research, 5, 69-88.

Mackowiak, K. (1989). Deaf college students and computers: The beneficial effect of experience on attitudes. Journal of Educational Technology Systems, 17, 219-229.

Malaney, G.D., & Thurman, Q. (1989-90). Key factors in the use and frequency of use of microcomputers by college students. Journal of Educational Technology Systems, 18, 151-160.

Mikuleky, M.P., & Ledford, C. (1987). Computers in nursing: Hospital and clinical applications. Menlo Park, CA: Addison-Wesley.

Parks, P.L., Damrosch, S.P., Heller, B.R., & Romano, C.A. (1986). Student survey-part 1. Unpublished instrument, University of Maryland at Baltimore, Baltimore, MD.

Polit, D.F., & Hungler, B.P. (1983). Nursing research: Principles and methods. Philadelphia, PA: Lippincott.

Potosnak, K.M. (1985). Choice of interface modes by empirical groupings of computer users. In B. Shackel (Ed.), Human-computer interaction: INTERACT '84 (pp. 27-32). New York, NY: Elsevier.

Rogers, E.M. (1983). Diffusion of innovations. New York, NY: The Free Press.

Saba, V.K., & McCormick, K.A. (1986). Essentials of computers for nurses. Philadelphia, PA: Lippincott.

Schwirian, P.A., Malone, J.A., Stone, V.J., Nunley, B., & Francisco, T. (1989). Computers in nursing practice: A comparison of the attitudes of nurses and nursing students. Computers in Nursing, 7, 168-177.

Shields, M. (1984). Computing at Brown University: A survey research report (pp. 1-29). Unpublished instrument, Brown University, Providence, RI.

Siann, G., Macleod, H., Glissov, P., & Durndell, A. (1990). The effect of computer use on gender differences in attitudes to computers. Computers and Education, 14, 183-191.

Sullivan, P. (1989). What computer experience to expect of technical writing students entering a computer classroom: The case of Purdue students. Journal of Technical Writing and Communication, 19(1), 53-68.

Van Dover, L., & Boblin, S. (1991). Student nurse computer experience and preferences for learning. Computers in Nursing, 9, 75-79.

Van Muylwijk, B., Van Der Veer, G., & Waern, Y. (1983). On the implications of user variability in open systems: An overview of the little we know and the lot we have to find out. Behaviour and Information Technology, 2, 313-326.

Vincente, K.J., Hayes, B.C., & Williges, R.C. (1987). Assaying and isolating individual differences in searching a hierarchical file system. Human Factors, 29, 349-359.

Waltz, C.F., Strickland, O.L., & Lenz, E.R. (1984). Measurement in nursing research. Philadelphia, PA: Davis.

From the Office of the Assistant Secretary of Defense (Health Affairs, HSO), Pentagon, Washington, DC.

LTCP Nancy Staggers, PhD, RN: Army Nurse Corps, Deputy Director for Clinical Policy Support.

The opinions or assertions contained herein are the private views of the author and are not to be construed as official or as reflecting the views of the Army, Army Medical Department, or Department of Defense.

Address reprint requests to Nancy Staggers, PhD, RN, 11906 Yellow Rush Pass, Columbia, MD 21044.

Copyright © 1994 by W.B. Saunders Company 0897-1897/94/0702-0009$5.00/0