
www.collegeboard.com

Research Report No. 2002-3

Knowing What You Know and What You Don’t: Further Research on Metacognitive Knowledge Monitoring

Sigmund Tobias and Howard T. Everson


College Entrance Examination Board, New York, 2002


Sigmund Tobias is a distinguished scholar in the Division of Psychological and Educational Services in the Graduate School of Education at Fordham University at Lincoln Center.

Howard Everson is Vice President for Academic Initiatives and Chief Research Scientist at the College Board.

Researchers are encouraged to freely express their professional judgment. Therefore, points of view or opinions stated in College Board Reports do not necessarily represent official College Board position or policy.

The College Board: Expanding College Opportunity

The College Board is a national nonprofit membership association dedicated to preparing, inspiring, and connecting students to college and opportunity. Founded in 1900, the association is composed of more than 4,200 schools, colleges, universities, and other educational organizations. Each year, the College Board serves over three million students and their parents, 22,000 high schools, and 3,500 colleges, through major programs and services in college admissions, guidance, assessment, financial aid, enrollment, and teaching and learning. Among its best-known programs are the SAT®, the PSAT/NMSQT®, and the Advanced Placement Program® (AP®). The College Board is committed to the principles of equity and excellence, and that commitment is embodied in all of its programs, services, activities, and concerns.

For further information, visit www.collegeboard.com.

Additional copies of this report (item #993815) may be obtained from College Board Publications, Box 886, New York, NY 10101-0886, 800 323-7155. The price is $15. Please include $4 for postage and handling.

Copyright © 2002 by College Entrance Examination Board. All rights reserved. College Board, Advanced Placement Program, AP, SAT, and the acorn logo are registered trademarks of the College Entrance Examination Board. PSAT/NMSQT is a registered trademark jointly owned by both the College Entrance Examination Board and the National Merit Scholarship Corporation. Visit College Board on the Web: www.collegeboard.com.

Printed in the United States of America.

Acknowledgments

This work was funded by the College Board. Earlier versions of this paper, and the studies reported herein, were presented in April 1999 at the annual meeting of the American Educational Research Association in Montreal. The findings and opinions expressed in this article are our own and do not reflect the positions or policies of the College Board.

The research reported here was conducted in collaboration with our colleagues. Lourdes Fajar and Katherine Santos (Study I), Rhonda Romero (Study II), Edgar Feng (Study III), and Fred Dwamena (Study IV) conducted their studies in partial fulfillment of the requirements for the Seminar in Educational Research at the City College of New York. Study V was supported by a contract with the Battelle Corporation relying on resources made available by the Navy Personnel Research and Development Center. Harold Ford (Study VII) and Hyacinth Njoku (Study XI) conducted their studies in partial fulfillment of the requirements for the qualifying examination at Fordham University's Graduate School of Education, and Julie Nathan conducted Study X as her doctoral dissertation research at the same school.

We are grateful to Patrick C. Kyllonen and Irving Katz of Educational Testing Service for their helpful comments on earlier versions of this paper.


Contents

I. Introduction
   The Importance of Knowledge Monitoring

II. Assessing Knowledge Monitoring
   Analysis of Monitoring Accuracy

III. Knowledge Monitoring Accuracy and the Developing Reader
   Study I: Knowledge Monitoring Accuracy and Reading in Bilingual Elementary School Students
      Participants and Procedures
      Results and Discussion
   Study II: Reading, Help Seeking, and Knowledge Monitoring
      Participants and Procedures
      Results and Discussion
   Summary: Knowledge Monitoring and Reading
   Study III: Strategic Help Seeking in Mathematics
      Participants and Procedures
      Results and Discussion

IV. Knowledge Monitoring and Ability
   Study IV: Triarchic Intelligence and Knowledge Monitoring
      Participants and Procedures
      Results and Discussion
   Study V: The Impact of Knowledge Monitoring, Word Difficulty, Dynamic Assessment, and Ability on Training Outcomes
      Participants and Procedures
      Materials
      Results and Discussion
   Study VI: Knowledge Monitoring and Scholastic Aptitude
      Participants and Procedures
      Results and Discussion
   Study VII: Knowledge Monitoring, Scholastic Aptitude, and College Grades
      Participants and Procedures
      Results and Discussion
   Summary: Knowledge Monitoring and Academic Ability

V. Knowledge Monitoring: Is It a Domain General or Specific Ability?
   Study VIII: Math and Vocabulary KMAs, SAT® Tests, and Grades: Relationships
      Participants and Procedures
      Results and Discussion
   Study IX: Knowledge Monitoring, Reading Ability, and Prior Knowledge
      Participants and Procedures
      Results and Discussion
   Other Relevant Studies

VI. Self-Reports of Metacognition and Objective Measures of Knowledge Monitoring
   Summary: Relationships of KMA Scores and Self-Report Measures of Metacognition
   Study X: An Investigation of the Impact of Anxiety on Knowledge Monitoring
      Participants and Procedures
      Results and Discussion
   Study XI: Cross-Cultural Perspective on Knowledge Monitoring
      Participants and Procedures
      Results and Discussion

VII. Metacognitive Knowledge Monitoring and Strategic Studying Among Secondary Students

VIII. General Discussion

IX. Suggestions for Future Research

References

Tables
1. Two Prototypical KMA Item Score Patterns
2. Correlations Among ASVAB Measures and KMA Scores by Word Difficulty
3. Descriptive Statistics for KMA Scores and GPA
4. Correlations of Math and Verbal KMA Scores and GPA
5. Descriptive Statistics, Correlations, and Beta Weights with Posttest Score
6. Correlations Between KMA and Learning in Training Course
7. Correlations of Verbal and Math KMA Scores, Metacognitive Self-Report Scales, Teacher Ratings, and SAT I-V and SAT I-M Scores
8. Correlations of MSLQ, LASSI, KMA Scores, and GPA

Figures
1. A componential model of metacognition
2. The interaction between knowledge monitoring ability and help seeking reviews
3. The relationship between knowledge monitoring and help seeking reviews in Study III
4. Relationship of KMA scores with ASVAB performance
5. Relationship between test anxiety and help seeking
6. Math KMA means by type of problem reviewed for Nigerian and American male students

I. Introduction

For more than a decade our program of research has concentrated on furthering our understanding of one aspect of metacognition—knowledge monitoring. Our research has been animated by a desire to understand learners' ability to differentiate between what they know and do not know. In general, metacognition, perhaps the most intensively studied cognitive process in contemporary research in developmental and instructional psychology, is usually defined as the ability to monitor, evaluate, and make plans for one's learning (Brown, 1980; Flavell, 1979). Metacognitive processes may be divided into three components: knowledge about metacognition, monitoring of one's learning processes, and control of those processes (Pintrich, Wolters, and Baxter, 2000). We believe that monitoring of prior learning is a fundamental or prerequisite metacognitive process, as illustrated in Figure 1.

Figure 1. A componential model of metacognition.

If students cannot differentiate accurately between what they know and do not know, they can hardly be expected to engage in advanced metacognitive activities such as evaluating their learning realistically, or making plans for effective control of that learning.

To date we have completed 23 studies of knowledge monitoring and its relationship to learning from instruction. Our earlier work, 12 studies in all, is summarized and reported elsewhere (see Tobias and Everson, 1996; Tobias and Everson, 2000). In this paper we continue that line of research and summarize the results of 11 studies conducted over the past three years. The work reported here addresses a number of general issues, e.g., the domain specificity of knowledge monitoring, measurement concerns, and the relationship of knowledge monitoring to academic ability. In addition to suggesting new directions for further research, we also discuss the implications of this research for learning from instruction.

The Importance of Knowledge Monitoring

Our interest in the accuracy of monitoring prior knowledge stems from our belief that this ability is central to learning from instruction in school and in training settings in business, industry, and government (Tobias and Fletcher, 2000). Learners who accurately differentiate between what has been learned previously and what they have yet to learn are better able to focus attention and other cognitive resources on the material to be learned. Much of the research conducted to date supports this supposition.

Our earlier research, for example, indicated that knowledge monitoring ability was related to academic achievement in college (Everson, Smodlaka, and Tobias, 1994; Tobias, Hartman, Everson, and Gourgey, 1991). Moreover, the relationship between knowledge monitoring and academic achievement was documented in diverse student populations, including elementary school students, students attending academically oriented high schools, vocational high school students, college freshmen, and those attending college for some time. Again, details of these studies can be found in our earlier reports (Tobias and Everson, 1996; 2000). More recently, we have concentrated on the development of knowledge monitoring assessment methods that can be used across academic domains and that have measurement properties which allow for greater generalizability of results.

II. Assessing Knowledge Monitoring

Metacognitive processes are usually evaluated by making inferences from observations of students' performance, by interviewing students, or by self-report inventories. As Schraw (2000) noted, developing effective methods to assess metacognition has been difficult and time consuming. Such assessments usually require detailed observation and recording of students' learning, rating the observations for metacognition, obtaining "think aloud" protocols of students' work, and rating their introspective reports. Referring to this approach, Royer, Cisero, and Carlo (1993) noted that "The process of collecting, scoring, and analyzing protocol data is extremely labor intensive" (p. 203). Obviously, self-report scales would be the most convenient tools for measuring metacognition, and a number of such questionnaires have been developed (Jacobs and Paris, 1987; Pintrich, Smith, Garcia, and McKeachie, 1991; Schraw and Dennison, 1994; Tobias, Hartman, Everson, and Gourgey, 1991). Self-report instruments have the advantage of easy administration and scoring. However, their use raises a number of questions which have been detailed elsewhere (Tobias and Everson, 2000) and will not be summarized here.

In contrast, the knowledge monitoring assessment (KMA) technique developed for use in our research program evaluates the differences between learners' estimates of their procedural or declarative knowledge in a particular domain and their actual knowledge as determined by performance; the accuracy of these estimates is measured against test performance. This approach is similar to methods used in research on metamemory (Koriat, 1993; Nelson and Narens, 1990), reading comprehension (Glenberg, Sanocki, Epstein, and Morris, 1987), and psychophysics (Green and Swets, 1966). A review of research on existing metacognitive assessment instruments (Pintrich et al., 2000) found that scores on the KMA had the highest overall relationship with learning outcomes.

Analysis of Monitoring Accuracy

Clearly, with the KMA we are concerned with assessing knowledge monitoring ability, i.e., the accuracy of the learners' knowledge monitoring. In this measurement framework, the data conform to a 2 x 2 contingency table with knowledge estimates and test performance forming the columns and rows. In our earlier research we reported four KMA scores for each student, which provided a profile of their knowledge of the domain and whether they demonstrated that knowledge on a subsequent test. The (+ +) and (- -) scores were assumed to reflect accurate knowledge monitoring, and the (+ -) and (- +) scores inaccurate knowledge monitoring. A number of scholars, including Nelson (1984), Schraw (1995), and Wright (1996), have suggested that the optimal analysis of the discrepancies between estimated and demonstrated knowledge requires a probabilistic conceptualization of knowledge monitoring ability. They encourage the use of either the Gamma (G) coefficient, a measure of association (Goodman and Kruskal, 1954), or the Hamann coefficient (HC), a measure of agreement accuracy (Romesburg, 1984). These and similar methods have been used in metamemory research on the feeling of knowing and judgments of learning (Nelson, 1984).
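To make the 2 x 2 framework concrete, the minimal sketch below (in Python; the pairing logic, variable names, and sample responses are illustrative and not part of the KMA materials) tallies one learner's paired estimate/test responses into the four cells described above:

```python
from collections import Counter

# Each pair is (claimed to know the item?, answered it correctly on the test?).
# These six responses are invented purely for illustration.
responses = [(True, True), (True, False), (False, True),
             (False, False), (True, True), (False, False)]

cells = Counter(responses)
a = cells[(True, True)]    # (+ +): estimated known, passed   -> accurate
b = cells[(False, True)]   # (- +): estimated unknown, passed -> inaccurate
c = cells[(True, False)]   # (+ -): estimated known, failed   -> inaccurate
d = cells[(False, False)]  # (- -): estimated unknown, failed -> accurate

print(f"accurate: {a + d}, inaccurate: {b + c}")  # accurate: 4, inaccurate: 2
```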

Though there is some debate about which measure is more suitable (Wright, 1996), Schraw (1995) has argued that G is less appropriate when the accuracy of agreement is central, as it is in the KMA paradigm. Schraw (1995) demonstrates, and our work supports this assertion (Tobias, Everson, and Tobias, 1997), that calculating G may actually distort the data and lead to different inferences about ability. This can be seen, for example, in Table 1, below, which displays two hypothetical KMA score patterns where the accuracy of agreement is equivalent (i.e., 10 accurate and five inaccurate knowledge estimates in each) but the distributions across the 2 x 2 table differ. The G coefficients differ, .61 and .45, while the HCs are identical, .33 for each. In our earlier work (Tobias et al., 1997) we found identical Gs even though knowledge monitoring accuracy differed, whereas the HCs differed for these score distributions. A major disadvantage of G arises when any of the 2 x 2 cells is empty: G automatically becomes 1. HC estimates, on the other hand, are unaffected by empty cells in the score distributions. Since there are often a number of empty cells in the response patterns in our research, the utility of G as an estimator is questionable.

In view of these considerations, Schraw (1995) suggested using both G and HC. Wright (1996) and Nelson and Narens (1990) have pointed out that the HC is dependent on marginal values and can, therefore, lead to inaccurate assessments of the estimate–performance relationship. Such problems arise when all possible combinations of estimates and performance are considered, i.e., when all four cells of the 2 x 2 table are of equal interest. Since we are concerned only with the accuracy of estimates, that is, the agreement between estimates and test performance, the HC appears to be the most useful statistic for the analysis of these data. HC coefficients range from 1.00, signifying perfect accuracy, to -1.00, indicating complete lack of accuracy; zero coefficients signify a chance relationship between estimated and demonstrated knowledge. Further support for using HC comes from two studies reported below (Studies I and II), in which the correlations between HC and G averaged .85, suggesting that using HC would not provide biased estimates of knowledge monitoring accuracy. Thus, HC was used in the studies reported below as the estimate of knowledge monitoring accuracy.


TABLE 1
Two Prototypical KMA Item Score Patterns

                      Pattern 1                 Pattern 2
Test                  Knowledge Estimates       Knowledge Estimates
Performance           Know     Don't Know       Know     Don't Know
Pass                  5        3                8        3
Fail                  2        5                2        2
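Given the four cell counts, both coefficients reduce to one-line formulas. The sketch below, which assumes the (a, b, c, d) cell naming from the tally sketch above, reproduces the Table 1 values and the empty-cell behavior just described:

```python
def gamma(a, b, c, d):
    # Goodman-Kruskal gamma for a 2 x 2 table (equivalent to Yule's Q):
    # the concordant product a*d against the discordant product b*c.
    return (a * d - b * c) / (a * d + b * c)

def hamann(a, b, c, d):
    # Hamann coefficient: (agreements - disagreements) / total items,
    # ranging from 1.00 (perfect accuracy) through 0 (chance) to -1.00.
    return ((a + d) - (b + c)) / (a + b + c + d)

# Cell counts (a, b, c, d) for the two patterns in Table 1.
pattern_1 = (5, 3, 2, 5)
pattern_2 = (8, 3, 2, 2)

print(round(gamma(*pattern_1), 2), round(hamann(*pattern_1), 2))  # 0.61 0.33
print(round(gamma(*pattern_2), 2), round(hamann(*pattern_2), 2))  # 0.45 0.33
# Both patterns have 10 accurate and 5 inaccurate estimates, yet G differs
# while HC agrees. And if an off-diagonal cell is empty (b = 0 or c = 0),
# b*c = 0 and gamma degenerates to 1.0 regardless of the other counts.
```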


III. Knowledge Monitoring Accuracy and the Developing Reader

The substantial relationship between knowledge monitoring accuracy and reading comprehension measures of college students (Tobias et al., 1991; Everson et al., 1994) suggests that knowledge monitoring accuracy and reading comprehension ought to have similar relationships at all educational levels. Of course, good reading comprehension is especially important in elementary school, since students who fall behind early in school have a difficult time catching up. Therefore, the goal of the first two studies was to examine the knowledge monitoring–reading relationship among young elementary school students.

Study I: Knowledge Monitoring Accuracy and Reading in Bilingual Elementary School Students[1]

[1] This report is based on a paper by Fajar, Santos, and Tobias (1996) found in the references.

This study examined the differences in knowledge monitoring accuracy between mono- and bilingual students, as well as the relationship between this metacognitive ability and reading comprehension in relatively young school children. Jimenez, Garcia, and Pearson (1995) found that when bilingual students come upon an unfamiliar English word they often search for cognates in their native language. They also reported that bilingual students, when compared to their monolingual peers, monitored their comprehension more actively by asking questions when they faced difficulties or by rereading the text. This suggests that bilingual children attempting to comprehend text presented in English are likely to be more accurate knowledge monitors than their monolingual peers. This hypothesis was tested in Study I.

Participants and Procedures

Fifth- and sixth-grade students (n = 90) from two large, urban public schools participated in this study. Two-thirds of the participants were bilingual, reporting that Spanish was their first language. Knowledge monitoring accuracy was assessed using the standard KMA procedure, i.e., a 34-item word list was presented, and the students indicated the words they thought they knew and those they did not. A multiple-choice vocabulary test that included the words presented in the estimation phase followed. The vocabulary words were selected for grade-level appropriateness and were presented in order of increasing difficulty. The word list and vocabulary test were also translated into Spanish. The 60 bilingual participants were randomly assigned to one of two groups: a group tested with a Spanish language KMA, and a group tested with an English language KMA. The monolingual group, serving as a contrast group, also took the English version of the KMA. Archival measures of reading ability based on performance on the Degrees of Reading Power (DRP) test (Touchstone Applied Science Associates, 1991) were retrieved from the schools' files.

Results and Discussion

Participants were divided into high and low reading ability groups using a median split on the DRP. A 3 x 2 ANOVA, with three language groups and two reading ability groups, was conducted with the HC derived from the KMA as the dependent variable. Differences across the three language groups on the KMA were not significant (F(2, 84) < 1). Good and poor readers, however, did differ (F(1, 84) = 6.56, p < .01), with the better readers demonstrating higher metacognitive monitoring ability. The interaction between language group and reading ability was not significant. The finding that good readers were more accurate monitors than the poorer readers fits with earlier research (Tobias and Everson, 2000). However, the magnitude of the knowledge monitoring–reading relationship was somewhat lower for these school age students (r = .28) than for college students (r = .67).

The absence of knowledge monitoring differences between mono- and bilingual students may be attributed to the English language fluency of the bilingual group. Subsequent interviews with the bilingual students indicated that the majority had lived in the United States for four or more years and were fluent in English. Such fluency, apparently, made it unnecessary to monitor comprehension and search for cognates in their native language.

Study II: Reading, Help Seeking, and Knowledge Monitoring[2]

[2] This report is based on a paper by Romero and Tobias (1996) found in the references.

Research evidence suggests that good readers may be more metacognitively aware than poorer readers. Though our prior research has been concerned with knowledge monitoring accuracy and its relationship to learning outcomes, we have not examined the influence of accurate knowledge monitoring on the processes invoked while learning from instruction, i.e., the components of metacognition that control learning from instruction.

The intent of Study II was to examine one such process, help seeking—an important learning strategy when one is baffled, confused, or uncertain while trying to learn something new or when solving novel problems. Seeking help, we argue, signals a level of metacognitive awareness—a perceived gap in knowledge, perhaps—and an intent on the part of the learner to address the learning problem. Achieving that awareness suggests that learners can differentiate between what they know and do not know. Thus, we hypothesized that measures of knowledge monitoring ability should correlate with help seeking activities in the reading comprehension domain. Simply put, accurate knowledge monitors should seek help strategically, i.e., on material they do not know, because soliciting help on known content wastes time that could be spent more usefully seeking assistance on unknown content. Less accurate monitors, on the other hand, are unlikely to be strategic and were expected to seek more help on known materials.

Participants and ProceduresForty-one fourth-grade students (49 percent male) froman urban public school participated. They were ethnicallydiverse, and a number of the students reported they werefrom families with incomes below the poverty line. Theparticipants were, for the most part, selected from regularelementary school classes, though four (10 percent) weremainstreamed into the classes from special education.

As in our earlier studies, the KMA consisted of a 38-item word list and vocabulary test generated from fourth-grade curriculum materials. Participants' scores on the DRP (Touchstone Applied Science Associates, 1991) were obtained from school records. Help seeking was operationalized by asking participants to leaf through a deck of 3 x 5 index cards containing the 38 words appearing on the KMA and to select 19 for which they would like to receive additional information. The information, printed on the back of each index card, consisted of a definition of the word and a sentence using the word in context. Participants were tested individually, and the words selected for additional help were recorded.

Results and Discussion

As expected, the correlation between the KMA and DRP scores was .62 (p < .001), quite similar to the correlation (r = .67) found for college students (Tobias et al., 1991). While the DRP scores for this sample were somewhat more variable than in Study I, it is not clear why the correlations were substantially higher than in the earlier study. It is plausible that the archival DRP scores of the bilingual students in Study I were not representative of their developing reading abilities. Despite this variation in the correlations, the results of these two studies indicate that the metacognitive monitoring–reading relationship is similar at both the elementary and postsecondary levels.

To analyze help seeking behavior we split the participants into high and low knowledge monitoring groups and sorted the words into four categories: (1) words claimed as known and passed on the test (+ +); (2) words claimed as known but not passed (+ -); (3) words claimed as unknown and passed (- +); and (4) words claimed as unknown and not passed (- -). The dependent measures were derived by calculating the percentage of words each participant selected for further study from each of the four categories. A 2 x 4 ANOVA with repeated measures on the second factor was computed. As expected, a highly significant difference among the four word categories was found (Wilks F(3, 37) = 36.22, p < .001). More important, a significant interaction was found between knowledge monitoring accuracy and the word types studied (Wilks F(3, 37) = 15.34, p < .001). This interaction is displayed in Figure 2.

Figure 2. The interaction between knowledge monitoring ability and help seeking reviews.
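As an illustration of how such dependent measures can be derived, the sketch below (Python; the word records are invented and do not reproduce the study's data) computes one participant's percentage of words reviewed in each KMA category:

```python
# One illustrative participant: each word carries its KMA category
# ('++', '+-', '-+', or '--') and whether its card was picked for help.
words = [("++", True), ("++", False), ("+-", True), ("+-", True),
         ("-+", False), ("--", True), ("--", True), ("--", False)]

selected = {}   # category -> number of words chosen for review
total = {}      # category -> number of words in that category
for category, chosen in words:
    total[category] = total.get(category, 0) + 1
    selected[category] = selected.get(category, 0) + chosen

percent_reviewed = {cat: round(100 * selected[cat] / total[cat], 1)
                    for cat in total}
print(percent_reviewed)  # {'++': 50.0, '+-': 100.0, '-+': 0.0, '--': 66.7}
# These per-category percentages are the cell entries for the 2 x 4 ANOVA.
```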

The results indicate that participants with higher KMA scores asked for more help, by a small margin, on words estimated to be unknown and failed on the test (- -), whereas those with lower KMA scores asked for help more often with words estimated to be known and failed on the test (+ -). There was one exception to that trend: those with higher KMA scores also sought more help on the words they claimed they knew and did, in fact, know (+ +). This was clearly not a strategic use of help, and a waste of their time. Upon reflection, it is plausible that one reason for seeking help on known items was that students were asked to review a fixed number of words. Some participants, for example, commented that they would have reviewed fewer words had they been allowed. Thus, by requiring participants to review a fixed number (19) of words, we may have confounded the findings. In Study III, below, we attempted to remedy this design flaw.

Summary: Knowledge Monitoring and Reading

The results of the two studies of the reading–knowledge monitoring relationship among elementary school students indicate that this aspect of metacognition is important for school learning. The findings of both studies indicated that there were significant positive relationships between metacognitive knowledge monitoring and reading. These findings comport with the results of our earlier work (Tobias and Everson, 1996; 2000), even though there was some variability in the magnitude of the reading–knowledge monitoring relationship between the two studies reported here. The variability may be attributable to the small samples used in Studies I and II, or to the possibility that the reading–metacognition relationship is more variable for elementary school students (i.e., developing readers) than for more mature college students. Further research with larger, developmentally varied samples is needed to clarify this issue.

Study III: Strategic Help Seeking in Mathematics[3]

[3] This report is based on a paper by Tobias and Everson (1998) found in the references.

This investigation extended Study II by examining help seeking behavior in mathematics. We expected that relatively accurate knowledge monitors would seek more help on math problems they thought they could not solve than their less accurate peers, who were expected to be less strategic and less metacognitively able.

Participants and Procedures

A total of 64 tenth-grade minority students (33 females) participated in the study. They were administered 26 mathematics problems, divided evenly between computation and problem solving, selected from prior New York State Regents Competency Tests. Participants had to estimate whether they could, or could not, solve each problem and then received a test asking them to solve the same problems. After the test, participants selected problems on which they wished to receive additional help. The help consisted of the correct answer and the steps required to solve the problem, printed next to the problem and covered with stickers so that it could not be seen. Participants were asked to remove the stickers when they wished additional help on a problem.

Results and Discussion

Participants were divided into high and low knowledge monitoring groups, and the mean percentage of help sought by each group on four types of problems was determined. The item types were (1) those estimated as solvable and solved correctly (+ +); (2) those estimated as solvable and solved incorrectly (+ -); (3) those estimated as unsolvable and solved correctly (- +); and (4) those estimated as unsolvable and failed on the test (- -). These data are shown in Figure 3.

Figure 3. The relationship between knowledge monitoring and help seeking reviews in Study III.

The data were analyzed using multivariate analysis of variance (MANOVA). There were no differences in the mean number of problems on which the two groups chose additional help, but differences emerged in the types of problems for which help was sought (Wilks F(4, 59) = 3.71, p < .01). As Figure 3 indicates, the largest differences between accurate and less accurate knowledge monitors occurred in the help sought on problems participants originally estimated as solvable and then failed on the test (F(1, 62) = 10.36, p < .01). Seeking help on these items is of strategic value: having failed the items, participants are likely to have realized that their estimates were wrong, so it made good sense to seek help in order to determine the reasons for the wrong estimates. Another difference was on problems perceived as unsolvable that were subsequently failed on the test (F(1, 62) = 8.62, p < .03). This finding is similar to Study II and indicates that the metacognitively accurate students reviewed more strategically than their less accurate peers. None of the other differences were significant.

In this study help was sought on far fewer problems (36 percent overall) than in Study II, where students were required to review half the items. It therefore seems clear that students in the earlier study were required to seek more help than they needed, which may have affected those results. Indeed, the differences between the two studies may be greater than the percentages suggest, since there are two reasons for expecting students to seek more help on mathematics than on vocabulary items. First, solving mathematics problems is usually considered a more difficult task than finding vocabulary synonyms. Second, the assistance offered in Study III was greater than in the earlier study, since students had more control over the number of items to review and could then see the steps required to solve the problem.

There were also differences in the types of items on which help was sought in the two studies. A comparison of Figures 2 and 3 indicates that in Study III far less help was sought on problems students estimated knowing and passed on the test, as well as on those estimated as unknown and passed on the test. Apparently, not requiring participants to seek help on a specific number of items led to a reduction in the number of problems reviewed and also allowed participants to be more selective in their help seeking.

It is surprising that no correlations were found between KMA scores and grades on in-class mathematics examinations. One reason may be that the participants were permitted to use calculators during in-class mathematics tests but not on the study materials; many complained about being denied the use of calculators. Nevertheless, the findings of Study III extend prior research on the relationship of knowledge monitoring to students' strategic choices in seeking additional assistance for learning. Further research with larger samples is clearly needed to examine the different types of strategic behavior of the two knowledge monitoring groups. Study III also confirms the relationship between knowledge monitoring in mathematics and learning in that domain. Strategic help seeking among secondary school students was examined in Studies X and XI and will be discussed later in this report.

IV. Knowledge Monitoring and Ability

Metacognition is described by a number of researchers as an executive process (Borkowski, 2000) that directs the functioning of other cognitive processes. In Sternberg's (1988) theory of intelligence a critical role is played by general cognitive processes, or metacomponents, that are analogous to metacognition and that coordinate the functions of lower order cognitive processes used in planning, monitoring, and evaluating learning. These formulations clearly suggest that metacognition in general, and knowledge monitoring in particular, should be related to measures of intelligence and learning ability. Prior research (Tobias and Everson, 2000), and the studies described above, indicate that accurate monitoring is related to reading, achievement in mathematics, and learning as reflected in school grades. These findings suggest that knowledge monitoring should also be related to different measures of learning ability or intelligence, though none of our earlier studies dealt directly with that question. One purpose of the four studies described below was to investigate more directly the knowledge monitoring–ability relationship.

Study IV: Triarchic Intelligence and Knowledge Monitoring[4]

[4] We are grateful to Robert Sternberg for making these tests available to us.

Sternberg (1995) has maintained that tests of intelligence or scholastic aptitude measure analytic abilities related to success at most levels of schooling. However, he maintains that analytic ability tells us little about the types of processes needed to solve problems in everyday life, such as practical intelligence or creative abilities. Sternberg's triarchic theory of intelligence (1995) contends that measures of both practical and creative abilities should be incorporated into intelligence tests in order to predict success in a range of contexts beyond school. Sternberg suggests that intelligence is demonstrated in figural content areas, and not only in the verbal and mathematical arenas frequently assessed by traditional measures of intelligence, and he has designed tests to assess each of these domains. In view of the prior relationships between knowledge monitoring and school learning, it was expected that knowledge monitoring should be related to verbal analytic intelligence, especially when the KMA is composed of vocabulary tasks. Further, we were interested in the KMA's relationships to measures of practical and creative intelligence.

Participants and Procedures

Sixty participants were recruited from a high school in a low socioeconomic community attended largely by minority students. A word list was developed from tenth-grade curriculum materials, and participants were asked to check whether they knew or did not know each of the words and then complete a multiple-choice vocabulary test based on the same words. Participants also received nine tests from Sternberg's (1995) triarchic intelligence battery, covering (1) analytic ability, (2) practical intelligence, and (3) creative intelligence. A test using verbal, quantitative, and figural stimuli was administered for each type of intelligence posited by Sternberg, yielding a total of nine tests that could be group administered and objectively scored.

Results and Discussion

Knowledge monitoring accuracy was correlated with students' scores on the nine subtests from Sternberg's battery. Of the nine correlation coefficients only two were significant: the analytic verbal subtest had a correlation of .40 (p < .01) with the KMA, while the practical quantitative test correlated .29 (p < .05). The significant correlation with analytic verbal ability was expected, since prior research (Tobias and Everson, 1998; 2000) had shown that vocabulary based KMAs were related to school learning, the primary indicator of analytic ability. However, it is difficult to account for the relationship with practical quantitative ability, especially since the KMA was based on a vocabulary test rather than on mathematics problems. Since the remaining eight correlations were computed on an exploratory basis, the possibility that one of them was significant by chance cannot be ignored, and replication is needed in order to establish further the relationship between knowledge monitoring and diverse measures of intelligence.

Study V: The Impact of Knowledge Monitoring, Word Difficulty, Dynamic Assessment, and Ability on Training Outcomes[5]

[5] This study was supported by a grant from the Navy Personnel Research and Development Center. This summary is based on a paper by S. Tobias, H.T. Everson, and L. Tobias (1997).

Our prior research used students from educational institutions, elementary through college, as participants. Study V investigated the knowledge monitoring–ability relationship of adults in a training context. The ability measures used in this study were developed by the U.S. armed forces and were not designed for school settings. Two other issues were also addressed in this study: the impact of word difficulty and of dynamic assessment procedures on knowledge monitoring. It was expected (Tobias and Everson, 2000) that medium difficulty vocabulary words would provide the most accurate index of knowledge monitoring. Responses to easy words were assumed to be relatively automatic, requiring minimal reflective processing by participants; difficult words, on the other hand, were likely to be dismissed as too complex and to receive little thoughtful attention. Words of medium difficulty, in contrast, were expected to invoke the most intense processing, since a majority of trainees could perceive themselves as identifying the correct definitions of these words. Therefore, moderately difficult words were expected to produce the most useful indices of the discrepancy between estimated and demonstrated knowledge.

In previous studies (Tobias and Everson, 2000), a text passage was used in which all of the words on the KMA were defined. In this study we expected that enabling trainees to infer word meaning from the text increased the similarity to dynamic assessment approaches (see Carlson and Wiedl, 1992; Guthke, 1992; Lidz, 1992), in which learners have the opportunity to learn before and during testing. Previous results indicated that relationships with reading comprehension were substantially higher for those given the chance to infer the meanings of words (Tobias et al., 1991). However, reading a text covering the KMA words had little impact on the relationship between knowledge monitoring and college grades (Everson and Tobias, 1998). Study V examined whether a text passage, in which some of the words used in the KMA were defined, might be more closely related to training outcomes, where the objectives of instruction are more explicit than in most educational contexts (Tobias and Fletcher, 2000). Higher relationships with learning outcomes were expected when the monitoring assessment and the instruction shared a similar content and context.

Participants and Procedures

The 71 participants (20 percent female) were examined in groups of 12-18 at a U.S. Navy training school in San Diego, California. The dependent variables consisted of participants' mean scores, provided by the school, on six multiple-choice tests taken during the six-week course. During the course participants were trained to apply concepts from the physics of underwater sound transmission to the task of locating and tracking submarines using sophisticated sonar equipment.

Materials

All participants received an 80-item word list and had to estimate whether they knew or did not know each word. The list contained 40 general words obtained from prior versions of the Vocabulary subtest of the Gates-MacGinitie Achievement battery[6] for secondary school students. Item difficulty data from this test indicated that there were 13 easy items, 14 medium items, and 13 difficult items. Another 40 words dealt with the field of oceanography and were composed of two subgroups: 15 technical words (e.g., benthic, guyot, nektons) and 25 more common words that were generally related to the domain in which students were receiving training (e.g., flora, habitats, marine ecology). A multiple-choice vocabulary test containing the same words was then given before students read a three-page text passage on oceanography, methods of exploring the ocean, and the characteristics of underwater environments. All the oceanography words were defined either explicitly or implicitly in the text. The word list and vocabulary test dealing with oceanography were then readministered in order to obtain an index of students' ability to update their subject specific knowledge and monitoring ability.

[6] We are grateful to the Riverside Publishing Company for providing these data for our research.

Results and Discussion

The results dealing with KMA–ability relationships and the impact of knowledge updating are discussed here; data dealing with the domain generality/specificity issue are discussed later in the report. Scores on the Armed Services Vocational Aptitude Battery (ASVAB), including the Armed Forces Qualification Test (AFQT) composite and selected subtests, were available for a significant number of students. These scores were correlated with the KMA for each type of word; the results, and the number of cases for each correlation, are shown in Table 2. As expected, there were significant correlations between the KMA and those subtests of the ASVAB measuring verbal ability.

TABLE 2
Correlations Among ASVAB Measures and KMA Scores by Word Difficulty

                      KMA Scores by Word Category
ASVAB Measure (n)    Easy    Medium   Difficult   Common   Technical   Common 2   Technical 2
AFQT (67)            .59**   .40**    .33*        .44**    .20         .55**      .35**
Word Know (43)       .68**   .40**    .23         .36*     -.02        .52**      .26
Para. Comp. (38)     .59**   .34*     .23         .12      .22         .28        .28
Verbal (33)          .77**   .48**    .33         .33      .15         .59*       .43*
Gen. Science (49)    .44**   .29*     .33*        .42*     .17         .25        .40*

*p < .05. **p < .01.

Contrary to expectations, the highest knowledge monitoring–ability relationships were found for easy words; the correlations decreased for more difficult and more technical words. These findings suggest that the influence of general ability on knowledge monitoring decreased as more difficult and more specialized knowledge was needed to make accurate knowledge monitoring estimates. It is also important to note that correlations between the KMA and ability measures were higher after students read the oceanography text passage. Apparently, the ability to learn new words by inferring their meaning from the text was more heavily associated with general ability than was prior familiarity with the technical words.
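In outline, the category-level analysis scores each trainee's monitoring accuracy separately within each word subset and then correlates those scores with ability. A sketch under synthetic data (Python with NumPy; the array names and generated scores are stand-ins, not the study's records, and only the easy/medium/difficult category sizes are taken from the text):

```python
import numpy as np

def hamann(est, perf):
    """HC over paired boolean vectors of estimates and test performance."""
    agree = np.sum(est == perf)
    return (2 * agree - est.size) / est.size  # (agree - disagree) / N

rng = np.random.default_rng(0)
n_trainees = 71
afqt = rng.normal(50, 10, n_trainees)  # stand-in ability scores

# For each word category, simulate boolean (trainee x item) estimate and
# test-performance arrays, score each trainee's HC, and correlate with AFQT.
for name, n_items in {"easy": 13, "medium": 14, "difficult": 13}.items():
    est = rng.random((n_trainees, n_items)) < 0.6
    perf = rng.random((n_trainees, n_items)) < 0.6
    hc = np.array([hamann(e, p) for e, p in zip(est, perf)])
    r = np.corrcoef(hc, afqt)[0, 1]  # one cell of a Table 2-style summary
    print(f"{name}: r(HC, AFQT) = {r:.2f}")
```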

The means and standard deviations of the KMA scores for all types of words are shown in Table 3, as are the descriptive data for the mean scores on all tests given in the course. A within-subjects ANOVA indicated that the differences among the KMA scores were significant (F(6, 420) = 39.66, p < .001).

Again, monitoring accuracy declined as the difficulty and technical content of the words increased. The results also indicated that KMA scores increased on the second administration, after the text was read, presumably because the trainees recalled, or were able to infer, the definitions of the KMA words from the passage.

The impact of monitoring accuracy on learning was examined further by splitting the ASVAB test scores at the median and computing a 2 (low/high test score) x 7 (word types) ANOVA with repeated measures on the second factor; the KMA scores served as the dependent variable. As expected, higher ability students were more accurate knowledge monitors (F(1, 69) = 5.28, p < .05); see Figure 4. The interaction of word type and ability test score was not significant.

Figure 4. Relationship of KMA scores with ASVAB performance.

In sum, the results suggest that knowledge monitoring was related to the trainees' ability to learn from the passage and to their learning in the Navy training course. These results were similar to earlier findings (Everson and Tobias, 1998; Tobias and Everson, 1996; 2000) indicating that the KMA was related to learning among college, high school, and vocational high school students. Since the interaction between word types and outcomes was not significant, presumably because of the relatively small number of cases, selected comparisons on specific word types were not computed. We expected that words dealing with oceanography would be most highly related to instructional outcomes. The pattern of results tends to confirm these expectations, though the differences did not reach statistical significance. The largest differences between high and low achievers occurred on the common oceanography words before and after reading the passage, and on the technical words once the text was read, apparently reflecting the importance of accurate knowledge monitoring in learning this new material. We also expected that the largest monitoring differences for words not dealing with oceanography would be for items of medium difficulty. The pattern of results suggests that differences on the easy, medium, and difficult words were relatively similar, failing to confirm our earlier predictions about word difficulty.

The data in Table 3 indicate that the trainees' test scores were relatively homogeneous, perhaps attributable to the mastery based organization of the course (i.e., students who failed a test by obtaining scores less than 70 percent had to restudy the materials and take another test, restricting the achievement range and the potential relationships of training outcomes to external criteria). In mastery based courses the time devoted to learning may be related more strongly to external measures, such as the KMA, than are grades, and one recommendation for further research is that learning time data be collected in studies using mastery learning. Nonetheless, these results suggest that the highest relationships between the KMA and learning outcomes occurred on all oceanography words after students had the opportunity to update their word knowledge. This is contrary to prior research (Tobias and Everson, 2000) that found the highest relationships between the KMA and grade-point averages in college before students had the chance to update their word knowledge. Presumably, the similarity of the oceanography passage to the instructional domain, underwater acoustics, may account for the facilitating effect, and it may be that metacognitive knowledge monitoring is most useful in instructional situations where the content is less familiar to the learner.

Giving students the opportunity to update their word knowledge by reading the text passage approximated dynamic assessment procedures (see Carlson and Wiedl, 1992; Guthke, 1992; Lidz, 1992). Dynamic assessment usually also includes supportive intervention in students' learning, observations of their reactions to the interventions, and evaluations of students' responses to the assistance. These additional components were not present in this study; instead, students had the opportunity only to update their knowledge in a domain similar to the one that was the subject of instruction. It remains for further research to clarify whether a more active, dynamic intervention might demand more accurate, ongoing knowledge monitoring.

TABLE 3
Descriptive Statistics for KMA Scores and GPA

Variables             Mean    SD     Minimum   Maximum
Easy                  .51     .12    -.10      .62
Medium                .36     .14    .00       .55
Difficult             .28     .18    -.24      .62
Common                .21     .17    -.21      .58
Technical             .21     .15    -.30      .55
Common 2              .30     .18    -.09      .64
Technical 2           .37     .17    -.04      .65
GPA (M of 6 tests)    89.23   4.59   78        99

N = 71 for all cells


Study VI: Knowledge Monitoring and Scholastic Aptitude[7]

[7] Description of this study is based on a report by Tobias, Everson, and Laitusis (1999).

Much of our knowledge monitoring research has been conducted with relatively typical school populations. Therefore, little is known about the role of knowledge monitoring in academically talented groups. In Study VI we examined a large group of students with academic talent either in mathematics or in language arts and social studies. Traditionally, academic talent has been defined almost exclusively on the basis of prior school achievement and/or scholastic aptitude. This study addressed the question of establishing a broader profile of gifted and talented students by considering the range of their metacognitive abilities generally, and their ability to accurately assess what they know and do not know in particular, in addition to examining the relationship of metacognitive ability to scholastic aptitude.

Participants and Procedures

The participants were 462 students from grades 7 through 11 attending a voluntary summer program for academically talented students. In a two-hour session students received a list of 30 verbal analogies selected from prior versions of the SAT® I: Reasoning Test and were asked to estimate whether they could or could not solve each analogy question. The analogy items were projected onto a screen for 8 seconds each. Then 25 math items were projected in the same way for 20 seconds each, and students again were instructed to estimate whether they could or could not solve the problems. The math items were taken from prior versions of the SAT I: Reasoning Test and the SAT II: Subject Tests. After the estimates were completed, the analogy and math items were presented as multiple-choice tests, and students were given as much time as needed to answer the questions. Participants' recorded SAT I verbal and math scores were obtained from program archives.

Results and Discussion

As expected, SAT mathematical scores correlated .50 (p < .001) with math KMA scores, and SAT verbal scores correlated .29 (p < .001) with verbal KMA scores, confirming the expected relationships between knowledge monitoring accuracy and aptitude. These results are not surprising, since the verbal and math tasks used in the KMA were adapted from prior versions of the SAT I and SAT II. The correlations found in this study are similar to those in Studies IV and V.

Study VII: Knowledge Monitoring, Scholastic Aptitude, and College Grades[8]

[8] This study was conducted by Harold Ford, Graduate School of Education, Fordham University.

The preceding studies found significant positive correlations between knowledge monitoring ability and scholastic aptitude. Earlier research (Everson and Tobias, 1998; Tobias and Everson, 2000) found significant relationships between college and high school grades and knowledge monitoring ability. In yet another study (Everson and Tobias, 1998), we found that KMA scores also predicted first year college grades. To bring these lines of research together, Study VII was designed to investigate whether KMA scores add incremental validity to the SAT I by increasing the prediction of college grades.
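The incremental validity question comes down to a hierarchical regression: enter SAT I scores in a first step, add KMA scores in a second, and test the gain in explained variance. A minimal sketch with synthetic data (Python with NumPy; the sample values and coefficients are invented, and an ordinary least-squares fit stands in for the usual regression software):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
sat = rng.normal(500, 100, n)                           # stand-in SAT I scores
kma = rng.normal(0.30, 0.15, n)                         # stand-in HC (KMA) scores
gpa = 0.002 * sat + 0.8 * kma + rng.normal(0, 0.4, n)   # synthetic criterion

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])           # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_sat = r_squared(sat.reshape(-1, 1), gpa)             # step 1: SAT alone
r2_both = r_squared(np.column_stack([sat, kma]), gpa)   # step 2: SAT + KMA
print(f"delta R^2 for KMA over SAT I: {r2_both - r2_sat:.3f}")
# A reliably positive delta R^2 would be the incremental validity evidence
# Study VII sought; an F test of the change would formalize it.
```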

Participants and Procedures

Participants (n = 120) were first-year students attending a private college in a large urban area. The participants were enrolled in an academic skills development program designed to prepare them for college. Participants were administered two KMAs: (1) a vocabulary KMA consisting of a 40-item word list and vocabulary test, and (2) a 21-item mathematics KMA consisting of problems taken from prior versions of the PSAT/NMSQT® and SAT I. Participants were given 10 seconds to estimate whether they could or could not solve each problem; a multiple-choice math test on the same problems was then administered. Participants' grade-point averages (GPA) and SAT I scores were obtained from their college records.

Results and Discussion
In contrast to prior findings, there were no significant relationships between students' monitoring accuracy and either their GPA or their SAT I scores. Possibly, students in this sample, who were attending an academic skills preparation program, were graded unreliably, with an emphasis on effort rather than achievement; that would account for this unique and rather surprising finding. It also appears that the KMAs used in this study, which had previously been employed with college freshmen, were too difficult for students lacking the academic skills expected of entering freshmen. This interpretation is supported by the KMA correlations with the SAT I verbal scores, which were .28 (p < .01) for the easy words and .38 (p < .01) for the words of medium difficulty, but fell to a nonsignificant .09 for the difficult words. Apparently students made random guesses on the difficult words, reducing any systematic variance for the overall KMA. The finding that none of the correlations, either between the SAT I and GPA or between the KMA and GPA, was significant indicated that there was something very unusual about this group of participants. Therefore, it is clear that the question of whether the KMA adds unique variance to the prediction of college grades should be examined with more heterogeneous student groups, rather than those composed solely of freshmen enrolled in college preparatory programs.

7 Description of this study is based on a report by Tobias, Everson, and Laitusis (1999).
8 This study was conducted by Harold Ford, Graduate School of Education, Fordham University.

Summary: Knowledge Monitoring and Academic Ability
Collectively, these studies, with the exception of Study VII, provide evidence of an important relationship between intellectual ability and knowledge monitoring accuracy. As was the case for the reading–knowledge monitoring relationship, there was some variability among the studies in the strength of the relationships. The positive ability–knowledge monitoring relationship is quite reasonable, since both monitoring and scholastic aptitude have been found to be associated with learning (Tobias and Everson, 2000) at secondary and postsecondary school levels. A remaining question is whether knowledge monitoring contributes unique variance to the prediction of school learning, above the variance accounted for by scholastic aptitude. Our attempt at answering that question was thwarted by the sampling problem in Study VII. It remains for further research to provide satisfactory answers to that question.

V. Knowledge Monitoring: Is It a Domain General or Specific Ability?

A major issue in cognitive psychology has been whether cognitive processes are domain general or domain specific, i.e., whether results dealing with cognitive processes apply largely to the content area in which they were studied or generalize to other content domains. This question is of some importance for processes such as metacognition, since it raises the issue of whether students' metacognitive processes in one content area are likely to be similar in other fields, and whether training metacognition in one domain is likely to lead to improvements in others.

Two studies (Everson and Tobias, 1998) found significant relationships between vocabulary-based KMAs and GPAs in humanities and English courses but not in science or social science classes. It was reasoned that in those studies the KMA contained a sample of general words used in discourse within the larger culture. Findings of significant relationships with English and humanities grades probably reflected the fact that general familiarity with these disciplines does not require a technical vocabulary, whereas success in the sciences and social sciences does require mastery of a technical vocabulary that is unfamiliar to the average person. Therefore, the absence of KMA relationships with science and social science grades may have been attributable to the absence of technical words from those domains in the KMA. In turn, these findings suggest that metacognitive knowledge monitoring processes may be domain specific. Schraw, Dunkle, Bendixen, and Roedel (1995), however, reported that students' confidence judgments correlated across domains, irrespective of their performance, providing qualified support for the domain generality of monitoring skills. The studies reviewed below re-examined the domain generality/specificity question.

Study VIII: Math and Vocabulary KMAs, SAT® Tests, and Grades: Relationships⁹

One way of investigating the domain generality–specificity issue was to study the relationship between knowledge monitoring in a specific domain and achievement in that area. Prior research (Everson and Tobias, 1998) revealed low but significant relationships between GPA and a vocabulary-based version of the KMA. However, the correlation between a mathematics version of the KMA and math grades had not been previously investigated. We examined this issue in Study VIII.

Participants and Procedures
A total of 120 undergraduates (73 females) from an urban college participated in the study. Participants were recruited from an introductory psychology course and were awarded extra course credit for taking part in the study.

The experimental materials were administered in large group sessions. Participants received two KMAs, composed of (1) vocabulary words adapted from a prior Gates-MacGinitie Vocabulary test (Gates and MacGinitie, 1989), previously used in Study V, and (2) 21 math problems previously used in Study VII. Each of the items was presented singly using a computer projection device. Students estimated whether they knew the words or could solve the math problems and then received paper-and-pencil five-choice math and vocabulary tests containing the same items.

9 This description is based on a study by Tobias, Everson, and Laitusis (1999) found in the references.

Results and Discussion
The correlation between the mathematics and vocabulary KMAs was .49 (p < .001). The generally significant correlations (see Table 4) between the KMA scores and GPAs replicated prior results (Everson and Tobias, 1998; Tobias and Everson, 2000) and indicated that the nonsignificant relationships found in the preceding study were attributable largely to some singular aspects of that sample.

Surprisingly, the correlations of both the mathematics and vocabulary KMAs with GPAs in different areas were remarkably similar (see Table 4). With the exception of grades in English, where the two correlations were virtually identical, the vocabulary KMA had somewhat higher correlations with the GPAs than the KMA based on math, though the differences were not significant. The GPAs in mathematics courses were somewhat less reliable because many participants took few or no mathematics classes, reducing the number of cases on which those relationships were based. The results of this study indicate somewhat stronger relationships between knowledge monitoring accuracy across different domains than were found in preceding research. Furthermore, the similarity in the magnitude of the correlations of both KMAs with the different GPAs also indicates somewhat greater domain generality than do prior studies.

TABLE 4
Correlations of Math and Vocabulary KMA Scores and GPA

GPA              Math (Hamann)   Vocabulary (Hamann)
English          .24*            .23*
Humanities       .26             .38**
Math             .24             .22
Science          .13             .28*
Social Science   .33**           .39**

* p < .05  ** p < .01
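The KMA accuracy scores in Table 4, and throughout these studies, are Hamann coefficients. With a denoting items estimated as known and passed, d items estimated as unknown and failed, and b and c the two kinds of mismatches, the coefficient is ((a + d) - (b + c)) / (a + b + c + d), ranging from -1 to 1. The sketch below is our own minimal illustration of that computation; the function name and example data are hypothetical, not taken from the studies.

    def hamann(estimates, outcomes):
        """Hamann coefficient for scoring KMA accuracy.

        estimates: booleans, True = "I know / can solve this item"
        outcomes:  booleans, True = item answered correctly on the test
        Returns a value in [-1, 1]: 1 = perfectly accurate monitoring,
        0 = chance-level agreement, -1 = systematic misjudgment.
        """
        matches = sum(e == o for e, o in zip(estimates, outcomes))
        mismatches = len(estimates) - matches
        return (matches - mismatches) / len(estimates)

    # Six of eight estimates agree with test performance: (6 - 2) / 8 = .50
    print(hamann([True, True, True, False, False, True, False, True],
                 [True, True, False, False, False, True, True, True]))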

Study IX: Knowledge Monitoring, Reading Ability, and Prior Knowledge¹⁰

Prior knowledge of the content is one of the most powerful predictors of future learning (Dochy and Alexander, 1995). None of our earlier studies investigated the impact of prior learning on knowledge monitoring. We expected that students with substantial prior knowledge would be more accurate in their metacognitive knowledge monitoring in that domain than in others. One purpose of Study IX was to examine that expectation. In addition, the effect of reading ability was re-examined in the context of prior knowledge.

Participants and Procedures
The participants (n=37) were college students, mostly young women, recruited from a large state university. Students were predominately upper juniors and seniors, and participation in the study was a part of their course requirements. Two KMAs were administered by computer: (1) a vocabulary KMA consisting of 39 words previously used in Studies V, VII, and VIII, and (2) a KMA composed of 40 vocabulary words used in texts on learning theory (e.g., schema, encode, extinction). Participants were initially shown a series of individual vocabulary items for eight seconds each, and asked to estimate whether they knew or did not know each word. A computer program controlled the presentation of the KMA items, and all responses were recorded by computer. After completing their estimates, participants received the same words in a multiple-choice vocabulary test without a time limit. Participants were then asked to complete a brief 11-item pretest based on a text selection dealing with learning theory, which they read as a course requirement. The instructional materials appeared in a hypertext format, and participants clicked on various words that opened links for further elaboration. After the participants finished reading the hypertext passage, they took a posttest which included the 11 items on the pretest plus some others, in order to assess their understanding of the text passage. Students' performance on the test contributed to their grade in the course. The Nelson-Denny reading comprehension test was also administered.

Results and Discussion
Two KMA scores were computed for each participant: one estimating knowledge monitoring accuracy for the learning theory words, and a second for the general vocabulary items. Multiple regression analysis was used to regress the learning theory posttest scores on the two KMA scores and the reading comprehension measure. Only the KMA score for the general vocabulary words contributed to the prediction of the posttest score (r² = .52, p < .001). Table 5 shows the means and standard deviations for all the variables, the correlations of each with posttest score, and their regression coefficients.

TABLE 5
Descriptive Statistics, Correlations, and Beta Weights with Posttest Score

                                   Mean    SD     r with Posttest   Beta
General KMA                        .69     .27    .71**             .54*
Learning KMA                       .17     .18    .34*              .04
Pretest score                      4.81    2.23   .34*              .08
Reading comprehension test score   30.84   5.42   .63**             .16
Posttest score                     32.38   6.43

Pre- and posttest r = .34*
* p < .05  ** p < .01
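As a hedged illustration of the regression reported above (not the study's code or data): if all variables are standardized before an ordinary least-squares fit, the resulting coefficients are beta weights comparable to those in Table 5. The arrays below are simulated placeholders.

    import numpy as np

    rng = np.random.default_rng(0)           # simulated data, n = 37 as in Study IX
    general_kma = rng.uniform(0, 1, 37)      # Hamann-style accuracy scores
    learning_kma = rng.uniform(0, 1, 37)
    reading = rng.normal(30, 5, 37)          # reading comprehension scores
    posttest = 20 + 15 * general_kma + rng.normal(0, 3, 37)

    def standardized_betas(y, *predictors):
        """OLS on z-scored variables; the coefficients are beta weights."""
        z = lambda v: (v - v.mean()) / v.std(ddof=1)
        X = np.column_stack([z(p) for p in predictors])
        betas, *_ = np.linalg.lstsq(X, z(y), rcond=None)
        return betas

    print(standardized_betas(posttest, general_kma, learning_kma, reading))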



10 The description of this study is based on a report prepared by Tobias, Everson, Laitusis, and Fields (1999) found in the references.


It appears that the learning theory words were not as closely related to either the pretest (r = .28, ns) or the posttest (r = .34, p < .05) as were the general vocabulary words, which were significantly related to both the pretest (r = .39, p < .02) and the posttest (r = .71, p < .001). Keep in mind that both pre- and posttests were referenced directly to the instructional text. On the other hand, care had been taken not to duplicate the learning theory words with terms that might appear in the instructional text, in order to avoid contamination with the pre- and posttest measures. That absence of duplication may have led to the general KMA being a more accurate predictor of learning from the instructional passage than the more domain-specific one.

This finding is similar to what was reported in Study V, in which a KMA using technical vocabulary that was similar, but not identical, to the instructional domain differentiated between high- and low-achieving trainees after they had the opportunity to familiarize themselves with the vocabulary used in the KMA. Apparently, merely having a logical or conceptual connection to the subject is not sufficient to increase KMA relationships with learning in a particular domain, unless the KMA actually samples the content of the instructed domain. The results are also interesting with respect to the relative importance of prior knowledge and monitoring accuracy. Since only the general KMA score contributed significantly to the prediction of the posttest, monitoring accuracy may be more important for the prediction of learning than even prior knowledge of the content to be learned. The generality of this finding should be qualified somewhat, since the pretest score had a low relationship with the posttest (r = .34, p < .05), suggesting it was a somewhat poorer predictor of outcomes than prior knowledge usually is (Dochy and Alexander, 1995). Therefore, conclusions about the relative importance of prior knowledge and monitoring accuracy will have to await research in which the pretests specifically sample students' prior knowledge of the instructional domain, rather than general knowledge of that domain, or in which instructional efficacy can be manipulated experimentally.

Finally, since the reading comprehension test score did not contribute significantly to the prediction of posttest scores while general knowledge monitoring did, the results also suggest that students' reading comprehension was not as important in predicting learning as was their general monitoring accuracy. Again, some caution in generalizing these results is in order, for several reasons. The reading comprehension test score and the general KMA were highly correlated (r = .81), suggesting collinearity between the two measures. This is not too surprising, because the words for the general monitoring KMA were selected from prior versions of a reading achievement test, and one would expect such materials to be highly correlated. The correlation of the reading comprehension score with the learning theory KMA, while significant (r = .42, p = .01), was much lower. Finally, the distribution of reading comprehension scores was negatively skewed, with most of the scores falling at the upper end of the distribution. Thus, further research with reading materials and vocabulary words that are not as closely related is needed to examine the relative contribution of knowledge monitoring to learning.

Other Relevant Studies
Several studies reported earlier dealt with the issue of the domain generality–specificity of metacognitive knowledge monitoring. Study V, for example, examined the issue by exploring the relationship of vocabulary knowledge monitoring assessments to different types of instruction in a specific domain, sonic transmission in underwater environments. The KMAs used common words of varying difficulty, and less common, technical words dealing with oceanography. The oceanography words were similar to the instructional domain without duplicating it. The domain generality question was investigated in two ways: (1) by comparing the correlations between knowledge monitoring accuracy in the two content domains, and (2) by examining the relationship of each word set with success in a training course. The correlation between the two KMAs on the first administration was .44 (p < .001). That relationship increased to .59 (p < .001) for the second administration of the oceanography words, after the trainees had read the instructional text. These data suggest a moderate relationship between monitoring accuracy in the different domains.

The two KMAs were also correlated with trainees' course learning. These correlations are shown in Table 6. When more common, general vocabulary was used, the KMA was related to learning outcomes only for the easy and difficult words. Upon initial testing, the KMA was correlated with the more common oceanography words but not with the more technical words. After the instructional text was read, the KMA correlations increased for both the common and technical oceanographic terms.

TABLE 6
Correlations Between KMA and Learning in Training Course

KMA Score by Word Type                      r
Easy (13 words)                             .24*
Medium difficulty (14 words)                -.01
Difficult (13 words)                        .36**
Common oceanography (15 words)              .24*
Technical oceanography (25 words)           -.01
Common oceanography–Administration 2        .39**
Technical oceanography–Administration 2     .33**
40 popular words                            .31**
40 oceanography words                       .16
40 oceanography words–Administration 2      .45**

* p < .05  ** p < .01

Students may have been able to recall or infer the meanings of many technical words from the text, and the higher correlations on the second administration of the KMA are a further indication that knowledge monitoring accuracy is important for learning. In sum, the correlations of the different KMA word categories with learning outcomes indicate that knowledge monitoring accuracy was related to learning in a technical domain, again suggesting moderate generality for knowledge monitoring accuracy. In this study, as in those reported by Schraw et al. (1995), the evidence regarding the domain generality of metacognition is mixed.

Returning to Study VI, two different KMA scores, verbal and math, were reported. The correlation between the two KMA scores was .33 (p < .001), indicating a low but significant relationship between knowledge monitoring in those two domains. The math KMA score correlated .52 (p < .001) with the SAT I mathematical scores, but was not related to the SAT I verbal scores. The verbal KMA had correlations of .27 (p < .001) with the SAT I–V, and .16 (p < .001) with scores on the SAT I–M, apparently reflecting some of the verbal content of the math word problems. Some students (n=93) had taken the SAT I a second time. The math KMA–SAT I–M correlation for this subgroup was .56 (p < .001), and the correlations of the verbal KMA were .44 (p < .001) with the SAT I–V and .28 (p < .01) with the SAT I–M. As in the Schraw et al. (1995) study and in the research described earlier, these results suggest both domain-specific and domain-general components to knowledge monitoring. The higher relationships with SAT scores in the same content domain point to domain specificity, while the significant correlations between the verbal and math KMAs, as well as the significant, though low, correlations of the verbal analogies KMA with the SAT I–M, hint at domain generality.

The data dealing with the generality–specificity issue indicate that there are both domain-general and domain-specific components to knowledge monitoring. While the results vary among studies, correlations between KMAs from different domains, and between KMAs and learning in different subjects, are generally in the low to moderate range and never approach levels suggesting that knowledge monitoring is either predominantly domain general or predominantly domain specific. Much of the content taught in schools is similar across the curriculum. This complicates attempts to study the generality–specificity of knowledge monitoring, since correlations between knowledge monitoring in different fields reflect not only relationships between knowledge monitoring accuracy in those fields, but also some of the relationships between the content domains themselves. A definitive study of this issue may require the use of a domain that is entirely unfamiliar to the participants, the development of a KMA in that domain, and then a comparison of relationships between the novel and more familiar domains. Pending such research, results are likely to indicate that metacognitive knowledge monitoring has both domain-general and domain-specific components.

VI. Self-Reports of Metacognition and Objective Measures of Knowledge Monitoring

In addition to the work on metacognition, there has been substantial interest in what the learner is thinking about prior to, during, and after learning activities, and in the role that these cognitive processes play in facilitating learning in complex environments (i.e., the strategic aspects of learning). Weinstein refers to these as learning and study strategies, which include "a variety of cognitive processes and behavioral skills designed to enhance learning effectiveness and efficiency" (Weinstein and Mayer, 1986, p. 4). As mentioned above, a number of such self-report measures of metacognition and self-regulated learning are available (Jacobs and Paris, 1987; Pintrich, Smith, Garcia, and McKeachie, 1991; Schraw and Dennison, 1994; Tobias, Hartman, Everson, and Gourgey, 1991), and the relationships between these and the KMA were investigated in two studies reported below.

Self-reports of metacognitive processes are subject to a number of problems which have been described elsewhere (Tobias and Everson, 2000) and will be summarized here only briefly. Since metacognition involves monitoring, evaluating, and coordinating cognitive processes, many have wondered whether students are aware of the processes used during learning, and whether they can describe them by merely selecting from alternatives on a multiple-choice scale. There is also a question about whether students report honestly. While the truthfulness of self-report responses is always an issue, it is especially troublesome for metacognitive assessments, because students may be reluctant to admit exerting little effort on schoolwork, especially when those reports may be available to their instructors.

The KMA also uses self-reports, the estimates of knowledge or of the ability to solve problems, in addition to students' test performance. However, unlike questionnaires about students' metacognition, the KMA does not ask participants to report on the cognitive processes used while performing a task. Estimates on the KMA are more accessible than reports of cognitive processes and, therefore, the KMA is less likely to be affected by the difficulties of abstract recall involved in responding to metacognitive questionnaires. This was confirmed by research (Gerrity and Tobias, 1996) suggesting that the KMA, compared to self-report scales of test anxiety, was less susceptible to students' tendency to present themselves in a favorable light. Despite these differences, a modest relationship between metacognitive self-reports and the KMA was expected. These issues were addressed in Studies VI and VIII and will be reviewed next.

In addition to investigating the KMA's relationship with ability in Study VI, we examined the correlations between the KMA and three other assessments of students' metacognition—two self-report questionnaires and teachers' observational reports. In view of their obvious content and methodological differences, moderate relationships between the KMA and these other assessments were expected. In addition to the verbal and math KMAs, participants completed two self-report scales: (1) the Learning and Study Strategies Inventory (LASSI) (Weinstein et al., 1987), and (2) the Metacognitive Awareness Inventory (MAI) (Schraw and Dennison, 1994), a 51-item self-report scale measuring different aspects of students' metacognitive processes. Moreover, teachers assessed students' metacognitive abilities by completing a seven-item, Likert-type scale.

Students' LASSI scores were factor analyzed, and two factors accounting for 68 percent of the variance were extracted. The first factor described students' reports of relatively automatic study strategies; a representative item was "I am distracted from my studies very easily." Items loading on the second factor reflected the use of strategies requiring deliberate effort, for example, "I use the chapter headings as a guide to find important ideas in my reading." The LASSI factor scores were correlated with the math and verbal KMA scores, with correlations of r = .21 between LASSI Factor 1 and the math KMA and r = .19 with the verbal KMA. The correlations with Factor 2 were r = -.11 and -.20, respectively. While these correlations were, with one exception, statistically significant because of the large number of cases, the absolute relationships were small and somewhat lower than expected. One reason for the low relationships may be that the LASSI does not have a scale designed to measure metacognition in general, or knowledge monitoring in particular.

The MAI scores were also factor analyzed and yielded a single factor accounting for 69 percent of the variance. This factor was correlated only with the math KMA (r = .12, p < .05). The seven-item teacher rating scale was also factor analyzed, yielding two factors with eigenvalues above unity and accounting for 71 percent of the variance. The varimax rotation results were then submitted to a second-order factor analysis, yielding one monitoring factor which correlated significantly only with the math KMA (r = .143, p < .05).
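A minimal sketch of this style of analysis, assuming hypothetical data (the shapes, names, and simulated scores below are ours, not the study's): scikit-learn's FactorAnalysis supports varimax rotation directly.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    lassi_subscales = rng.normal(size=(462, 10))   # students x subscale scores

    # Extract two rotated factors, mirroring the two-factor LASSI solution.
    fa = FactorAnalysis(n_components=2, rotation="varimax")
    factor_scores = fa.fit_transform(lassi_subscales)  # per-student scores
    loadings = fa.components_                          # factors x subscales

    # Factor scores can then be correlated with a KMA score.
    math_kma = rng.uniform(0, 1, 462)
    print(np.corrcoef(factor_scores[:, 0], math_kma)[0, 1])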

Not surprisingly, the most striking aspect of these results was the relatively high correlations among the self-report measures, as indicated in Table 7, when contrasted with their correlations with the KMAs. We suspect that the self-report measures share common method variance.

TABLE 7
Correlations of Verbal and Math KMA Scores, Metacognitive Self-Report Scales, Teacher Ratings, and SAT I–V and SAT I–M Scores

Variables               LASSI-I   LASSI-II   MAI      Teacher Rating   SAT I–M   SAT I–V
Verbal KMA              .19**     -.20***    .01      .06              -.01      .29***
Math KMA                .21***    -.11*      .12*     .16*             .50***    .16*
LASSI Factor I          —         —          .25***   .18              .06       .08
LASSI Factor II         —         —          .56***   .06*             -.10*     -.10*
MAI Factor              .25***    .56***     —        .11              .01       -.11*
Teacher Rating Factor   .19*      .06        .11      —                -.08      -.00

* p < .05  ** p < .01  *** p < .001

The correlations among the KMAs and both the LASSI and MAI are sometimes significant, but generally quite low, suggesting relatively small amounts of shared construct variance. Moreover, the higher KMA correlations with the SAT, compared with the SAT I correlations with the LASSI or MAI, may be attributable to the fact that the content of the KMAs was adapted from the SAT I and SAT II. Also, in prior research (Everson, Tobias, and Laitusis, 1997), the KMA was often related more strongly to GPA than was the LASSI. Thus, a strong relationship between the KMA and scholastic aptitude was not unexpected.

In Table 7 the teacher rating scale had low relationships with all of the other variables. Those results may be attributed to the fact that many of the teachers refused to fill out the rating form, indicating that such an assessment was foreign to the philosophy of the educational program in which they were engaged. Furthermore, we noted that the ratings had limited variability—the teachers were reluctant, apparently, to rate these gifted and talented students low on any academically related measure.

In addition to the lack of shared method variance, it is plausible that the low correlations between the KMA and the LASSI factors in Study VI were due to the absence of a LASSI scale designed specifically to assess metacognition. In an effort to examine this possibility, another self-report instrument with a metacognition subscale, the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, Smith, Garcia, and McKeachie, 1991), was used in Study VIII. Unlike the LASSI, which assesses the use of study strategies in general, the MSLQ asks students to respond in terms of the strategies used in particular courses. Therefore, in Study VIII we examined the relationships between the KMA and the MSLQ, the relationships between these assessments and the LASSI, and their relationships with students' GPAs, which were discussed earlier.

Recall that in Study VIII participants (n=120) were given math and verbal KMAs, the LASSI, and the MSLQ. Participants' GPAs were retrieved from the college's records. The MSLQ was administered using standard instructions requiring students to respond in terms of a particular course. Half the sample was asked to respond in terms of an English course they were currently taking, while the other half responded in terms of a mathematics course. If students were not registered for courses in either content area, they were asked to respond in terms of the last course of that type they had taken.

The 10 LASSI subscales were factor analyzed, and two interpretable factors, similar to those found earlier, emerged. One was related to automatic study strategies and the other to strategies under volitional control. Since the MSLQ was administered to half the participants in terms of an English class and to the other half in terms of a mathematics course, the number of students completing either version of the MSLQ was only about half the total sample size, and too small for factor analysis. The correlations between the MSLQ scales, including the one assessing metacognition, and the math or verbal KMAs were quite low. However, in order to determine whether all the MSLQ scales together assessed a construct similar to the KMA, multiple correlations were computed for students' scores on all 15 MSLQ subscales and their math and verbal KMA scores. Separate correlations were computed for students completing the scales in terms of English or mathematics courses. These data are shown in Table 8.

TABLE 8
Correlations of MSLQ, LASSI, KMA Scores, and GPA

Variables             Math KMA   Verbal KMA   LASSI Factor I   LASSI Factor II   Total GPA
MSLQ-E                .68        .61          .66              .90**             .45
MSLQ-M                .55        .40          .77**            .82**             .54
LASSI-I               -.03       .11          —                —                 .32**
LASSI-II              -.20*      -.17         —                —                 .14
Math and Verbal KMA   —          —            .15              .22               .36**

* p < .05  ** p < .01
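A multiple correlation of this kind is simply the correlation between a criterion and its best least-squares prediction from all of the predictors. The sketch below is a hypothetical illustration (simulated data and invented names, not the study's); note that with 15 predictors and roughly 60 cases per half-sample, R tends to be inflated, which is the caution raised below.

    import numpy as np

    def multiple_r(y, X):
        """Correlation between y and its least-squares prediction from the
        columns of X (an intercept is added automatically)."""
        X1 = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return np.corrcoef(y, X1 @ coef)[0, 1]

    rng = np.random.default_rng(2)
    mslq = rng.normal(size=(60, 15))                   # 15 subscale scores
    math_kma = 0.3 * mslq[:, 0] + rng.normal(size=60)  # simulated criterion
    print(multiple_r(math_kma, mslq))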

The multiple correlations between the MSLQ and the KMAs were substantially higher than the zero-order correlations, though they often failed to reach statistical significance. Clearly, further research with larger samples is needed to determine the MSLQ's relationship with the KMA. The correlations between the math and verbal KMAs and the LASSI factors were similar to those reported in Study VI, though they were not statistically significant because of the small number of subjects. Surprisingly, when the ten LASSI subscales were regressed onto the math KMA scores the multiple correlation was r = .46, and the multiple correlation with the verbal KMA score was r = .35, essentially doubling the multiple correlations obtained with the LASSI factor scores. This discrepancy in results is attributable, we believe, to the fact that multiple correlations tend to maximize relationships among predictors and criteria, while the factors tend to optimize an interpretable factor structure. Thus, these somewhat higher multiple correlations may be misleading.

All these assessments were related to students' academic achievement as measured by undergraduate GPA. These data, also displayed in Table 8, indicate that the MSLQ had moderate relationships with achievement, though these again failed to reach statistical significance due to the small number of cases and the multiple predictors. The correlation between the verbal and math KMAs was r = .49 (p < .001).

The results suggest a convergence between the overall MSLQ and knowledge monitoring, though replication is needed because the correlations, while of reasonable magnitude, failed to reach statistical significance because of the small sample and the large number of predictor variables. Correlations among the KMA scores and the LASSI factors were also low, though the multiple correlations improved that picture somewhat. It seems reasonable to conclude that the MSLQ and the LASSI, in total, measure constructs similar to the KMA, though none of the individual self-report scales is similar in content to the KMA. The correlations in Table 8 also indicate that the LASSI and MSLQ were markedly similar (multiple r's ranging from .7 to .9), which is not surprising since many of the items in the two scales are quite similar and both assessments share method variance.

Summary: Relationships of KMA Scores and Self-Report Measures of Metacognition
As expected, low-to-moderate relationships were found between the different self-report instruments used in the various studies and the KMA. Several findings were consistent over the different studies. The self-report questionnaires tended to correlate highly with one another, indicating that they measured similar characteristics using quite similar methods. The uniformly lower correlations with the KMAs suggest that though the KMAs evaluate some of the same characteristics as the self-report measures, the differences in method may diminish their shared variance.

In most of the studies, the correlations among the KMAs and measures of scholastic aptitude and learning tended to be higher than those for the self-report instruments. This suggests that the variance the KMA does not share with the self-report instruments is more highly associated with school learning than are the questionnaire measures. A recent study conducted in Belgium (Van Damme and Claesen, personal communication) found low, and often nonsignificant, correlations between a vocabulary-based KMA and several self-report measures. Especially interesting were findings that the KMA scores were strongly related to teachers' judgments of metacognition, while the self-report scales were not. This research confirms our earlier findings that, in contrast to self-report measures, the KMA assesses components of metacognition that are more highly related to classroom learning.

Study X: An Investigation of the Impact of Anxiety on Knowledge Monitoring¹¹

The task of knowledge monitoring is complex. Skilled metacognitive knowledge monitoring requires that students retrieve prior declarative or procedural knowledge from long-term memory, compare it to the material with which they are presented, and make a judgment as to whether they know the material or are able to use it to solve problems. The complexity of these tasks suggests that knowledge monitoring should be negatively affected by anxiety. Anxiety is an unpleasant affective state, similar to fear, that has been shown to debilitate performance on complex cognitive tasks (Hembree, 1988, 1990). It has been suggested (Tobias, 1992) that anxiety impairs performance because the central representation of anxiety absorbs cognitive resources, leaving reduced capacity for students to deal with demanding tasks.

Prior research (Tobias and Everson, 2000) found negative relationships between anxiety and knowledge monitoring. Those studies assessed students' anxiety by self-report inventories. Such inventories pose a variety of problems, as indicated in the discussion of self-report measures above. Thus, one purpose of this study was to manipulate anxiety experimentally in order to support more causal inferences about the impact of anxiety on knowledge monitoring. A further purpose was to examine the impact of anxiety at different points during the KMA procedure, i.e., during estimation, testing, or help seeking, in order to determine when anxiety induces its most debilitating effects. It was expected that clarification of that question would provide a better understanding of how anxiety exerts its effects.

11 The description of this study is based on a dissertation completed by Julie Nathan at Fordham University in May 1999, and on a report of that study by Nathan and Tobias (2000) presented at a meeting. Both sources may be found in the references.

Participants and Procedures
Participants (n=97) were drawn from a small parochial high school in a rural community. Students completed a number of anxiety measures and were assigned randomly to one of four experimental groups. The Mathematics Anxiety Scale (MAS), a subscale of the Fennema-Sherman Mathematics Attitude Scales (1976), was used to assess math anxiety. Students reported their level of state worry, i.e., the degree of anxiety experienced at that time, on an abbreviated version of the Worry-Emotionality Questionnaire (WEQ) (Liebert and Morris, 1967; Morris, Davis, and Hutchings, 1981) consisting of only five worry items. Participants were directed to report their feelings and attitudes in reaction to the tasks presented to them.

The KMA consisted of 26 math problems adapted from prior versions of the SAT. In a second session, one week later, all groups performed the same four tasks. The experimental manipulation consisted of varying the point at which evaluative stress was induced. Stress was induced by telling students the KMA was a measure of ability requiring much effort. The four groups were: (1) an estimation group, in which students estimated whether they could or could not solve each of the 26 mathematics problems and received the stress-inducing information at that point; (2) a test group, which took a multiple-choice test containing the same KMA problems, with the stress instructions given at that point; (3) a help group, which was permitted to select any test items on which they wished to receive additional help (answers to the math problems and solution strategies), with the stress instructions again administered at that point; and (4) a control group, which received no stress-inducing instructions. Immediately after the experimental sessions, participants completed another brief worry scale to determine the level of test anxiety experienced at that point (see Nathan, 1999, for additional details).

Results and Discussion
There were significant differences in students' reported worry, indicating that the stress instructions succeeded in arousing varying levels of anxiety across the four experimental groups. It was expected that anxiety would peak when evaluative stress was introduced for each group, but contrary to expectations the results showed that anxiety increased similarly for all groups over the different tasks (F[2, 192] = 29.20, p < .01), from a low of 9.59 at estimation to 12.8 at help seeking. Apparently, once stress was introduced it continued at an elevated level throughout the experiment; this was true even for the control group, which received no stress-inducing instructions. Presumably, the anxiety usually connected with solving math problems and the test-like context were sufficient to arouse and maintain test anxiety throughout the experiment. Anecdotal evidence indicated that students knew quite a bit about the task even before the instructions were given, suggesting that prior participants told succeeding ones about the tasks and that this feedback mitigated the influence of the anxiety-inducing instructions.

Consistent with prior findings (Tobias and Everson, 2000), those high in math anxiety performed more poorly on the math test (r = -.36) and were less accurate knowledge monitors (r = -.21) than those lower in anxiety. They also estimated being able to solve fewer problems than their less anxious peers (r = -.49). However, contrary to expectations, there were no differences among the groups on any of these dependent variables, presumably because of the feedback among groups. One important suggestion for future research is that participants assigned to different groups should either be examined in different places, making such feedback unlikely, or, if they are recruited from the same location, be examined at the same time. Such precautions are important in situations where differential instructions are given.

Study XI: A Cross-Cultural Perspective on Knowledge Monitoring¹²

All prior KMA studies were conducted in the United States with American students. In Study XI we examined knowledge monitoring in other cultures by comparing Nigerian and American students. Nigerian students were of special interest since there is some evidence suggesting that instruction in that country relies heavily on rote memorization (Kozulin, 1998). It was, therefore, interesting to determine whether such instructional practices led to less accurate metacognitive knowledge monitoring and less strategic help seeking among Nigerian students.

12 The data for this study were collected by Hyacinth Njoku, Graduate School of Education, Fordham University.

Participants and Procedures
The participants were tenth- and eleventh-grade high school students from the United States and Nigeria. The Nigerian sample (n=77) was composed of male students from an elite Catholic high school. The American sample consisted of two groups from two different parochial high schools. One group (70 males and 39 females) came from a Catholic coeducational high school in an urban middle-class neighborhood in a small city in the northeastern United States. The second group (n=57) was composed of all male, predominantly minority students from lower socioeconomic backgrounds attending a Catholic high school in a poor urban neighborhood in the Bronx, New York.

All students received a KMA composed of 24 math problems selected by experts as being equally appropriate for Nigerian and American students. Students first estimated their ability to solve each of the problems, then took a multiple-choice test on the problems. Finally, they were given the option of receiving additional help on any of the problems. The help consisted of the correct answer to the problem and the steps to reach the answer.

Results and Discussion
Math KMAs were computed for all students, and the data indicated substantial differences between the two groups of American students. Therefore, they were analyzed as separate groups. A one-way ANOVA among the four groups (Nigerian males, and the three American groups of females, middle-class males, and males from poorer backgrounds) found substantial differences in knowledge monitoring accuracy (F[3, 200] = 4.42, p < .005). Post hoc contrasts revealed that the middle-class participants were more accurate monitors than the minority group (p < .004), and that the majority American males were marginally more accurate than the Nigerian students (p < .056). There were no differences in monitoring accuracy between Nigerian students and minority American males or females.
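A one-way ANOVA of this kind can be reproduced with standard tools. The sketch below simulates four groups with the sample sizes named in the text (the means and spreads are invented for illustration only):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    nigerian  = rng.normal(0.30, 0.20, 77)   # simulated Hamann-style scores
    us_middle = rng.normal(0.42, 0.20, 70)
    us_female = rng.normal(0.35, 0.20, 39)
    us_lowses = rng.normal(0.28, 0.20, 57)

    # Omnibus test across the four groups, as in the reported F test.
    f_stat, p_value = stats.f_oneway(nigerian, us_middle, us_female, us_lowses)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")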

A one-way ANOVA of the math achievement raw scores revealed a significant main effect on math performance (F[3, 200] = 12.361, p < .001). Post hoc contrasts indicated that middle-class American males had higher scores than Nigerians and poorer American males. Math achievement scores of poorer American males were lower than those of the Nigerian or American female students. Thus, while the Nigerian males differed only marginally in knowledge monitoring accuracy from their American counterparts, the differences in math achievement between these groups were more substantial. These data suggest that poor performance in mathematics among Nigerian students is not related to poor metacognitive knowledge monitoring. Clearly, cross-cultural differences in metacognitive knowledge monitoring need further study with larger and more comparable groups of students.

VII. Metacognitive Knowledge Monitoring and Strategic Studying Among Secondary Students

There is a compelling rationale for the relationship between knowledge monitoring and strategic studying, especially at secondary and postsecondary levels, where a great deal of instructional material has to be mastered in a short time. Students who accurately differentiate between what they know and do not know are likely to study more strategically. Furthermore, as suggested by the results of Studies II and III, it was also expected that students with accurate metacognitive knowledge monitoring skills are apt to seek help on tasks they estimated as knowing yet failed on a subsequent test. These expectations were examined in Studies X and XI.

As described earlier, in Study X a 26-item math KMA was administered along with a number of anxiety measures designed to assess both math anxiety and the degree to which anxiety was experienced while working on the experimental tasks. Students were assigned randomly to one of four groups that varied as to when they received stress-inducing instructions. Strategic review was studied in two ways. First, responses were scored by assigning one point for each item that students failed on the test and on which they subsequently sought help, and one point for each item that students passed on the test and on which help was not requested later. The results indicated that more anxious students sought help on a significantly greater percentage of items passed on the test, a nonstrategic and inefficient endeavor. The group receiving stress instructions during the test reviewed significantly more items than the other three groups, perhaps because the experimental situation for them was most similar to stressful school examinations.
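A minimal sketch of that scoring rule (our own illustration; the function name and data are hypothetical): one point for a failed item reviewed, one point for a passed item left alone.

    def strategic_review_score(passed, sought_help):
        """Study X scoring rule: +1 for each failed item on which help was
        later sought, +1 for each passed item on which help was not requested.

        passed, sought_help: parallel booleans, one entry per item.
        """
        return sum((not p and h) or (p and not h)
                   for p, h in zip(passed, sought_help))

    # Three of these four review decisions are strategic -> score 3.
    print(strategic_review_score([True, False, False, True],
                                 [False, True, True, True]))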

Similarly, we expected accurate knowledge monitors would review more strategically (i.e., would omit reviewing material they had mastered already) than less accurate students. Results from the control group, who received no stress instructions, would most directly address this question. Knowledge monitoring accuracy and strategic review were positively correlated in the control group, but this relationship did not reach statistical significance. Monitoring accuracy and strategic review were expected to correlate negatively in the three groups receiving stress instructions, because the evaluative stress instructions were expected to disrupt the otherwise positive relationship between these variables. Contrary to expectations, however, only the data from the test group showed a negative relationship between these variables. This finding suggests that accurate knowledge monitors who feel most stressed just before taking an examination were least strategic in deciding on which items to ask for additional assistance after the exam.

In the second analysis, a repeated-measures ANOVA was computed on the types of items on which participants sought help. Consistent with Studies II and III, the data were analyzed in terms of the percentage of items on which help was sought in each of four categories: items students estimated knowing that were later (1) passed (+ +) or (2) failed (+ -) on the test, and items students estimated not knowing that were later (3) passed (- +) or (4) failed (- -) on the test. These data are shown in Figure 5. The differences were statistically significant, indicating that the more anxious students reviewed a greater percentage of items passed on the test, clearly a nonstrategic and inefficient endeavor.

Figure 5. Relationship between test anxiety and help seeking.
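The percentage-by-category analysis can be sketched as follows (an illustration under our own naming, not the study's code): each item is binned by its estimate/outcome pair, and the share of reviewed items is computed per bin.

    from collections import Counter

    def help_by_category(estimated_know, passed, sought_help):
        """Percentage of items on which help was sought in each category:
        (+ +) known/passed, (+ -) known/failed,
        (- +) unknown/passed, (- -) unknown/failed."""
        totals, helped = Counter(), Counter()
        for e, p, h in zip(estimated_know, passed, sought_help):
            cat = ("+" if e else "-") + ("+" if p else "-")
            totals[cat] += 1
            helped[cat] += int(h)
        return {cat: 100.0 * helped[cat] / totals[cat] for cat in totals}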

The repeated-measures ANOVA also revealed that the test group's item review pattern was least efficient. Although they reviewed a greater proportion of items which they failed on the test, they also reviewed many items they got right on the test. These review patterns suggest that feeling stressed before the test interferes with the ability to review efficiently and strategically. An alternative explanation may be that the test group reviewed more items than the other three groups, perhaps because that condition was most similar to stressful school exams.

In Study XI we also examined strategic help seeking. Again there were four groups of tenth- and eleventh-grade Nigerian and American high school students: (1) 77 Nigerian male students from an elite Catholic high school; (2) 70 male and (3) 39 female students from an American Catholic coeducational high school in a middle-class neighborhood; and (4) 57 minority students from low socioeconomic backgrounds attending a Catholic high school in a poor urban community. The KMA had 24 math problems. After estimating their ability to solve each problem, students took a multiple-choice test composed of these same problems. Finally, they were given the option of receiving additional help on any of the problems. The help consisted of seeing the correct answer to the problem and the steps required to solve it.

The data were analyzed using a 2 (high/low KMA) × 4 (+ +, + -, - +, - -) ANOVA, with the four types of items treated as a repeated measure. An overall multivariate difference among the items was found (Wilks' lambda = .875, F[3, 195] = 9.25, p < .001), but there was no difference between low and high monitoring groups, nor was there an interaction between monitoring groups and the type of items reviewed. A similar analysis found differences among the cultural groups (F[3, 197] = 2.60, p < .054). Post hoc contrasts revealed that the differences were for those items students estimated not knowing and then failed (- -), with American middle-class students seeking more help on these items than Nigerian students. These data indicate somewhat more strategic performance by the middle-class American males, though a subsequent comparison revealed differences (p < .027) in items estimated to be unknown and passed on the test (- +), with Nigerian males reviewing fewer of these items than the middle-class sample of American males. The means for the different items by group are displayed in Figure 6.

Figure 6. Math KMA means by type of problem reviewed for Nigerian and American male students.

The mixed findings regarding help seeking, and the small sample sizes, indicate that the Nigerian–American cultural differences in metacognitive behaviors require re-examination with larger samples.

Moreover, there were contradictory findings regarding strategic help seeking between Studies II and III, on the one hand, and Studies X and XI, on the other. The first two studies confirmed that those who accurately differentiated between what they know and do not know sought help more strategically, by omitting known material and seeking help for unfamiliar material. In contrast, less accurate monitors generally allocated their attention less effectively, studying what was already known at the expense of unfamiliar material. In Studies X and XI, however, accurate knowledge monitoring had little impact on strategic help seeking. It is plausible that these contradictory findings are due to sampling and methodological differences in the research designs. The first two studies used elementary school students, while secondary school students participated in the later studies.

Secondary school students, and those at postsecondary levels, have to master a great deal of new material. In that setting, the time needed to seek help has important implications, since students usually do not have enough time to get help on all the items on which they might want assistance. Therefore, help should be sought only on the items where it is most needed. In the conditions implemented in the later studies, however, students could receive help on as many items as they wished, hence they had no need to act strategically. An important suggestion for further research is to permit help on only a small number of items, so that assistance on some items is obtained at the expense of others. Ideally, students should seek help only on those items they estimated knowing or being able to solve and subsequently failed on the test, as well as on items estimated as unknown or unable to solve and failed on the test. Reviews of the results of several studies using vocabulary, verbal analogies, and math KMAs indicate that approximately 36 percent of students' responses fall into those categories. Therefore, one suggestion might be to permit students to obtain help only on a portion of the presented items. Then the help condition may more nearly reflect the learning conditions students encounter in schools.

There are two other suggestions for further research. First, all our studies used relatively small samples; thus, replications and extensions with larger numbers of participants are clearly needed. Second, it would be useful to obtain "think aloud" protocols as students engage in help seeking or review. Such protocols may clarify students' thinking about their motivations for seeking help on different types of items.

VIII. General Discussion
The 11 studies presented here, along with our earlier research, lend additional support to the validity of the KMA. In sum, they demonstrate the importance of assessing knowledge monitoring ability as a way to enlarge our understanding of human learning. For the most part, the findings bolster the importance of accurate knowledge monitoring for school learning, for reading at elementary school levels, for strategic learning at least among elementary school students, and for the development of academic aptitude. The studies extended our earlier work on the KMA in two major ways: (1) the content of the KMA was extended to mathematics at secondary and postsecondary school levels, as well as to verbal analogies and science; and (2) new populations were studied, including the gifted and talented, adults in military training programs, and foreign-born students.

The research dealing with general ability provided evidence that knowledge monitoring is positively related to general ability. Prior findings relating the KMA to academic achievement implied that knowledge monitoring was related to how much students knew. Since achievement and aptitude are highly related, positive relationships with ability were expected, and these studies provided empirical evidence of that relationship. Results varied with respect to the size of the ability–monitoring relationship, and further research is needed to examine the common variance between the two constructs. Nonetheless, we expect a positive relationship between these cognitive components. At the same time, we expect that knowledge monitoring will have some unique variance, since the ability to estimate one's knowledge should be related to how much is known, but not be synonymous with it.

This body of research suggests that the ability to differentiate between what is known (learned) and unknown (unlearned) is an important ingredient for success in all academic settings. It should be noted, however, that accurate monitoring of one's prior knowledge is only one aspect of metacognition that may be important for success in school-like situations. Good students also have to monitor their comprehension of reading assignments and their understanding of lectures and other knowledge acquisition efforts. In addition, good students typically consult other resources (students, teachers, or reference works) in order to enlarge their understanding, answer questions, and integrate material learned in different courses. They also need to check whether they have completed assignments in different courses, whether they have met the various requirements needed to progress in their academic careers, and so on. These are only a few of the different types of monitoring behaviors students need to perform in order to succeed in school. These various forms of metacognitive monitoring have been studied as part of the research on academic self-regulation (Winne, 1996; Zimmerman and Risemberg, 1997).

Although we are aware of no research specifically relating the monitoring of prior knowledge to academic self-regulation, there is a strong basis for expecting such a relationship. The ability to differentiate what one knows from what is still to be mastered is likely to be a fundamental component of other forms of self-monitoring and self-regulation. It is hard to imagine successful monitoring of reading comprehension, or of the understanding of lectures, if students cannot accurately differentiate between what they know and do not know. It may therefore be hypothesized that accurate monitoring of prior knowledge is a prerequisite for the effective self-regulation of learning, as we assume in our model (see Figure 1). Clearly, research examining these relationships is urgently needed.

Self-regulation is a compound construct made up of cognitive, emotional, motivational, and conative (students' willingness to expend effort to accomplish their goals) components; these are sometimes referred to as "skill and will" components (Weinstein and Mayer, 1986; Zimmerman and Risemberg, 1997). Thus, students may have the cognitive abilities needed for effective self-regulation but lack the motivation, desire, or will to regulate their efforts. Therefore, students who are accurate knowledge monitors may not be effective at regulating their academic behavior unless they also have the motivation, energy, and will to do so.

IX. Suggestions for Future Research

In most of the KMA studies, the majority of students' estimates (47 to 69 percent in a sample of studies) are that they know or can solve a particular item. Thus, even though the Hamann coefficient is a measure of the accuracy of students' estimates, actual knowledge and accuracy are slightly confounded. It would be useful to have a greater number of estimates in the "do not know/cannot solve" category. One way of achieving this is to add items consisting of nonsense words or unsolvable problems. When confronted with such items or tasks, students, especially those who are quite knowledgeable, will have to estimate not knowing the item, thereby helping to disentangle the possible confounding between actual knowledge and monitoring accuracy.
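For readers unfamiliar with it, the Hamann coefficient is simply the number of agreements between estimates and outcomes minus the number of disagreements, divided by the total number of items, so it ranges from -1 to +1. The sketch below is a minimal illustration of our own, not the scoring program used in these studies; the example protocol is invented to mirror the skewed estimate distributions just described.

    def hamann(estimates, outcomes):
        # estimates: True = "know/can solve" estimate for the item
        # outcomes:  True = item subsequently answered correctly
        # Returns (agreements - disagreements) / total, in [-1, +1].
        assert len(estimates) == len(outcomes)
        agree = sum(e == o for e, o in zip(estimates, outcomes))
        return (2 * agree - len(estimates)) / len(estimates)

    # Hypothetical 20-item protocol with 70 percent "know" estimates.
    est = [True] * 14 + [False] * 6
    out = [True] * 12 + [False] * 2 + [False] * 4 + [True] * 2
    print(hamann(est, out))  # (16 agreements - 4 disagreements) / 20 = 0.6

Adding unsolvable or nonsense items would raise the proportion of "do not know" estimates, filling the sparse cells of the 2 x 2 table and making the coefficient less dependent on how much a student happens to know.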

Further research on the relationships among knowledge monitoring, motivation, and conation is clearly needed to determine their impact on students' ability to be strategic learners. It would be useful to identify students who are accurate knowledge monitors but not highly motivated to succeed academically. We would expect that, despite their knowledge monitoring skills, such students will not effectively control their studying behaviors and, therefore, will not do well academically. Conversely, highly motivated students who are willing to invest considerable effort to attain academic goals are unlikely to be effective unless they also possess accurate knowledge monitoring skills. Knowledge monitoring in general, and the KMA in particular, is less likely to be affected by motivational and volitional variables than are more complex self-regulatory behaviors, since making knowledge or problem-solving estimates is relatively automatic and takes little effort. Therefore, motivation and conation should have less effect on such judgments. On the other hand, obtaining additional help and/or reviewing prior learning requires more effort and reflection than making estimates and may, therefore, be more affected by motivational and volitional tendencies.

It may also be helpful to obtain "think aloud" protocols during the estimation phase of the KMA procedure. Such protocols can clarify students' thinking as they estimate their knowledge of items. It would be especially interesting to determine whether students have any doubts while estimating that they know items they subsequently fail. It may be expected that accurate monitors are more likely to have some doubts about these items than their less accurate peers.

Logical analysis has led us to assume a hierarchical organization of metacognitive activities. That is, we presumed that such more advanced metacognitive activities as evaluating learning, selecting appropriate learning strategies, or planning future learning activities could not occur without accurate knowledge monitoring. While the logic seems compelling, it should nevertheless be subjected to empirical test. It would be interesting to observe students in classes while they conduct self-evaluations of their work, select strategies, or plan for future learning, and to determine whether accurate monitors do in fact conduct these activities more efficiently.

Finally, it has been assumed that the KMA is more resistant than self-report instruments in general to problems such as students' tendency to make socially desirable responses. That assumption should be studied experimentally by instructing some students to respond to the KMA in ways that make them seem to be very diligent students, while others are instructed to appear casual or lackadaisical; a control group, of course, should receive neutral instructions. In such studies, participants should also be asked to respond to self-report measures, in order to study the relative resistance of both types of assessment to student "faking."
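One way to quantify that relative resistance, sketched below purely as an illustration with invented variable names, is to compare the standardized shift each instruction condition produces on the two instruments; a markedly smaller shift on the KMA than on the self-report measure would support the assumption.

    import numpy as np

    def cohens_d(group, control):
        # Standardized mean difference using a pooled standard deviation.
        n1, n2 = len(group), len(control)
        pooled_var = ((n1 - 1) * group.var(ddof=1) +
                      (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
        return (group.mean() - control.mean()) / np.sqrt(pooled_var)

    # kma_diligent, kma_control, report_diligent, and report_control would
    # hold scores from the "appear diligent" and neutral-instruction groups.
    # d_kma = cohens_d(kma_diligent, kma_control)
    # d_report = cohens_d(report_diligent, report_control)
    # Resistance to faking implies |d_kma| well below |d_report|.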

References

Borkowski, J.G. (2000, April). The assessment of executive functioning. Paper delivered at the annual convention of the American Educational Research Association, New Orleans.

Brown, A.L. (1980). Metacognitive development and reading. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 453–481). Hillsdale, NJ: Lawrence Erlbaum Associates.

Butler, D.L. & Winne, P.H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281.

Carlson, J.S. & Wiedl, K.H. (1992). Principles of dynamic assessment: The application of a specific model. Learning and Individual Differences, 4, 153–166.

Dochy, F.J.R.C. & Alexander, P.A. (1995). Mapping prior knowledge: A framework for discussion among researchers. European Journal of Psychology of Education, 10, 225–242.

Everson, H.T., Smodlaka, I., & Tobias, S. (1994). Exploring the relationship of test anxiety and metacognition on reading test performance: A cognitive analysis. Anxiety, Stress, and Coping, 7, 85–96.

Everson, H.T. & Tobias, S. (1998). The ability to estimate knowledge and performance in college: A metacognitive analysis. Instructional Science, 26, 65–79.

Everson, H.T., Tobias, S., & Laitusis, V. (1997, March). Do metacognitive skills and learning strategies transfer across domains? Paper presented at the annual meeting of the American Educational Research Association, Chicago.

Fajar, L., Santos, K., & Tobias, S. (1996, October). Knowledge monitoring among bilingual students. Paper presented at the annual meeting of the Northeastern Educational Research Association, Ellenville, NY.

Fennema, E. & Sherman, J.A. (1976). Fennema–Sherman mathematics attitudes scales: Instruments designed to measure attitudes towards the learning of mathematics by females and males. Catalog of Selected Documents in Psychology, 6, 3, 1–32.

Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Gerrity, H. & Tobias, S. (1996, October). Test anxiety and metacognitive knowledge monitoring among high school dropouts. Paper presented at the annual convention of the Northeastern Educational Research Association, Ellenville, NY.

Glenberg, A.M., Sanocki, T., Epstein, W., & Morris, C. (1987). Enhancing calibration of comprehension. Journal of Experimental Psychology: General, 116, 119–136.

Goodman, L.A. & Kruskal, W.H. (1954). Measures of association for cross classifications. Journal of the American Statistical Association, 49, 732–764.

Green, D.M. & Swets, J.A. (1966). Signal detection theory and psychophysics. New York: Wiley.

Guthke, J. (1992). Learning tests: The concept, main research findings, problems, and trends. Learning and Individual Differences, 4, 137–151.

Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58, 47–78.

Hembree, R. (1990). The nature, effects, and relief of mathematics anxiety. Journal for Research in Mathematics Education, 21, 33–46.

Jacobs, J.E. & Paris, S.G. (1987). Children's metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22, 255–278.

Jimenez, R.T., Garcia, G.E., & Pearson, P.D. (1995). Three children, two languages, and strategic reading: Case studies in bilingual/monolingual reading. American Educational Research Journal, 32, 67–97.

Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100, 609–639.

Kozulin, A. (1998). Psychological tools: A sociocultural approach to education. Cambridge, MA: Harvard University Press.

Liebert, R.M. & Morris, L.W. (1967). Cognitive and emotional components of test anxiety: A distinction and some initial data. Psychological Reports, 20, 975–978.

Lidz, C.S. (1992). The extent of incorporation of dynamic assessment into cognitive assessment. Journal of Special Education, 26, 325–331.

MacGinitie, W.H. & MacGinitie, R.K. (1989). Gates–MacGinitie Reading Tests (3rd ed.). Chicago: The Riverside Publishing Co.

Morris, L.W., Davis, M.A., & Hutchings, C.H. (1981). Cognitive and emotional components of anxiety: Literature review and a revised worry–emotionality scale. Journal of Educational Psychology, 73, 541–555.

Nathan, J. (1999). The impact of test anxiety on metacognitive knowledge monitoring. Unpublished doctoral dissertation, Fordham University, New York.

Nathan, J. & Tobias, S. (2000, August). Metacognitive knowledge monitoring: Impact of anxiety. Paper delivered at the annual meeting of the American Psychological Association, Washington, DC.

Nelson, T.O. (1984). A comparison of current measures of the accuracy of feeling-of-knowing predictions. Psychological Bulletin, 95, 109–133.

Nelson, T.O. & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G.H. Bower (Ed.), The psychology of learning and motivation (pp. 125–173). New York: Academic Press.

Pintrich, P.R., Smith, D.A., Garcia, T., & McKeachie, W.J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: National Center for Research to Improve Postsecondary Teaching and Learning.

Pintrich, P.R., Wolters, C.A., & Baxter, G.P. (2000). Assessing metacognition and self-regulated learning. In G. Schraw (Ed.), Issues in the measurement of metacognition. Lincoln, NE: Buros Institute of Mental Measurements and Erlbaum Associates.

Romero, R. & Tobias, S. (1996, October). Knowledge monitoring and strategic study. Paper presented at a symposium on "Metacognitive Knowledge Monitoring" at the annual convention of the Northeastern Educational Research Association, Ellenville, NY.

Romesburg, H.C. (1984). Cluster analysis for researchers. London: Wadsworth.

Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures for assessing cognitive skills. Review of Educational Research, 63, 201–243.

Schommer, M. & Walker, K. (1995). Are epistemological beliefs similar across domains? Journal of Educational Psychology, 87, 424–432.

Schraw, G. (1995). Measures of feeling-of-knowing accuracy: A new look at an old problem. Applied Cognitive Psychology, 9, 321–332.

Schraw, G. (Ed.) (2000). Issues in the measurement of metacognition. Lincoln, NE: Buros Institute of Mental Measurements and Erlbaum Associates.

Schraw, G. & Dennison, R.S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460–475.

Schraw, G., Dunkle, M.E., Bendixen, L.D., & Roedel, T.D. (1995). Does a general monitoring skill exist? Journal of Educational Psychology, 87, 433–444.

Sternberg, R.J. (1988). The triarchic mind: A new theory of human intelligence. New York: Viking Press.

Sternberg, R.J. (1995). In search of the human mind. Orlando: Harcourt Brace College Publishers.

Tobias, S. (1992). The impact of test anxiety on cognition in school learning. In K. Hagtvet (Ed.), Advances in test anxiety research (Vol. 7, pp. 18–31). Lisse, Netherlands: Swets & Zeitlinger.

Tobias, S. (1994, April). Interest and metacognition in word knowledge and mathematics. Paper presented at the annual convention of the American Educational Research Association, New Orleans.

Tobias, S. (1995). Interest and metacognitive word knowledge. Journal of Educational Psychology, 87, 399–405.

Tobias, S. & Everson, H.T. (1996). Assessing metacognitive knowledge monitoring. College Board Report No. 96-01. New York: College Board.

Tobias, S. & Everson, H.T. (1996, April). Development and validation of an objectively scored measure of metacognition appropriate for group administration. Paper presented at the annual convention of the American Educational Research Association, New York.

Tobias, S. & Everson, H.T. (1998, April). Research on the assessment of metacognitive knowledge monitoring. Paper presented at the annual convention of the American Educational Research Association, San Diego.

Tobias, S. & Everson, H.T. (2000). Assessing metacognitive knowledge monitoring. In G. Schraw (Ed.), Issues in the measurement of metacognition (pp. 147–222). Lincoln, NE: Buros Institute of Mental Measurements and Erlbaum Associates.

Tobias, S., Everson, H.T., & Laitusis, V. (1999, April). Towards a performance-based measure of metacognitive knowledge monitoring: Relationships with self-reports and behavior ratings. Paper presented at the annual meeting of the American Educational Research Association, Montreal.

Tobias, S., Everson, H.T., Laitusis, V., & Fields, M. (1999, April). Metacognitive knowledge monitoring: Domain specific or general? Paper presented at the annual meeting of the Society for the Scientific Study of Reading, Montreal.

Tobias, S., Everson, H.T., & Tobias, L. (1997, March). Assessing monitoring via the discrepancy between estimated and demonstrable knowledge. Paper presented at a symposium on Assessing Metacognitive Knowledge Monitoring at the annual convention of the American Educational Research Association, Chicago.

Tobias, S. & Fletcher, J.D. (Eds.) (2000). Training and retraining: A handbook for business, industry, government, and the military. New York: Macmillan Gale Group.

Tobias, S., Hartman, H., Everson, H., & Gourgey, A. (1991, August). The development of a group-administered, objectively scored metacognitive evaluation procedure. Paper presented at the annual convention of the American Psychological Association, San Francisco.

Touchstone Applied Science Associates (1991). Degrees of Reading Power. New York: Touchstone.

Weinstein, C.E. & Mayer, R.E. (1986). The teaching of learning strategies. In M.C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 315–327). New York: Macmillan.

Weinstein, C.E., Zimmerman, S.A., & Palmer, D.R. (1988). Assessing learning strategies: The design and development of the LASSI. In C.E. Weinstein, E.T. Goetz, & P.A. Alexander (Eds.), Learning and study strategies: Issues in assessment, instruction, and evaluation (pp. 25–39). New York: Academic Press.

Winne, P.H. (1996). A metacognitive view of individual differences in self-regulated learning. Learning and Individual Differences, 8, 327–353.

Wright, D.B. (1996). Measuring feeling of knowing. Applied Cognitive Psychology, 10, 261–268.

Zimmerman, B.J. & Risemberg, R. (1997). Self-regulatory dimensions of academic learning and motivation. In G.D. Phye (Ed.), Handbook of academic learning: Construction of knowledge (pp. 105–125). San Diego: Academic Press.
