
EDITORS
Sonthida Keyuravong, King Mongkut’s University of Technology Thonburi

Thanis Bunsom, King Mongkut’s University of Technology Thonburi
Woravut Jaroongkongdach, King Mongkut’s University of Technology Thonburi

EDITORIAL PANEL
Alan Waters, Lancaster University, UK

Hayo Reinders, University of Groningen, the Netherlands
Kim McDonough, Concordia University, Canada

Vijay Bhatia, City University of Hong Kong
Ian McGrath, National Institute of Education, Singapore
Christine Goh, National Institute of Education, Singapore

Suvichit Chaidaroon, Nanyang Technological University, Singapore
Aek Phakiti, University of Sydney, Australia

Phil Chappell, Macquarie University, Australia
Sumalee Chinokul, Chulalongkorn University, Thailand
Passapong Sripicharn, Thammasat University, Thailand

Saneh Thongrin, Thammasat University, Thailand
Apisak Phupipat, Thammasat University, Thailand

Chantarat Hongboontri, Mahidol University, Thailand
Singhanat Nomnian, Mahidol University, Thailand

Anchalee Chanyanuwat, Walailuck University, Thailand
Sripen Setsathien, King Mongkut’s University of Technology North Bangkok, Thailand

Jirapa Wittayapirak, King Mongkut’s Institute of Technology Ladkrabang, Thailand
Adisa Tiao, Prince of Songkla University, Thailand

Jirada Wudthayagorn, Chulalongkorn University, Chiang Mai, Thailand
Anchalee Wannarat, Suranaree University, Nakorn Ratchasima, Thailand

Sirinthip Boonmee, Ubonratchathani University, Thailand
Issariya Thaveesilpa, Kasetsart University, Thailand

Pornapit Darasawang, King Mongkut’s University of Technology Thonburi, Thailand
Richard Watson Todd, King Mongkut’s University of Technology Thonburi, Thailand
Saowalak Tepsuriwong, King Mongkut’s University of Technology Thonburi, Thailand

Wareesiri Singhasiri, King Mongkut’s University of Technology Thonburi, Thailand
_____________________________________________________________________________

rEFLections is a refereed print-based English language journal published by the Department of Language Studies, School of Liberal Arts, King Mongkut’s University of Technology Thonburi, Thailand. It is intended for teachers, students, researchers and anyone who is interested in research in applied linguistics and/or involved in English language teaching, especially in the EFL context.


GUIDELINES FOR SUBMISSION

rEFLections, a refereed print-based English language journal, welcomes all research articles that deal with English language teaching, especially in the EFL context, and applied linguistics in areas such as second language acquisition, teaching methodology, materials development, course design and evaluation, self-access learning, learner strategies, CALL and discourse analysis.

rEFLections consists of three main columns:

1. Research Articles (approx. 3,000-6,000 words excluding bibliography and appendices) focus on theory, research, and pedagogy related to English language teaching and learning.

2. The Research Methods Column provides guidance and discussion of issues involved in conducting applied linguistics research.

3. Book Reviews (approx. 500-1,000 words) includes only two categories: reference books and textbooks.

All articles submitted should be double-spaced, in 12-point Times New Roman font with 1-inch margins on all sides. Manuscripts must be accompanied by a 50-word biographical statement, a 200-word abstract and a cover page in separate files. The cover page should include the author’s name, affiliation, address, telephone numbers, and e-mail addresses. Authors should also follow the APA style (6th edition) for bibliographic references. All articles will be double-blind refereed, so all references to the author in the manuscript should be deleted.

Manuscripts should be submitted electronically by e-mail as an attachment in Word format. Manuscripts including other attachments must be e-mailed to [email protected]. In the e-mail subject line, please type ‘rEFLections (Author’s name).’

All manuscripts submitted must be original work and must not be under consideration or published elsewhere. Authors must inform the Editors if the article is based on a paper presented at a conference.

Compensation for articles published in the rEFLections Journal will be two copies of the issue in which the article appears.

DEADLINE FOR SUBMISSION:
For July issue: 28 February
For February issue: 31 October

CONTACT:
Assoc. Prof. Sonthida Keyuravong
Department of Language Studies
School of Liberal Arts, King Mongkut’s University of Technology Thonburi
126 Pracha-Uthit Road, Thungkru
Bangkok 10140, Thailand
E-mail: [email protected]


rEFLections July 2013 Volume 16

Articles
Creating a Comprehensible Item Pool for a Student Questionnaire: Challenges and Suggestions
Junko Noguchi, Keiko Takahashi and Katherine Thornton

Relationship between Students’ Responses and Teacher’s Feedback Strategies
Thanissorn Pochanukul and Saowaluck Tepsuriwong

A Study of Reading Strategies and Effects of Reading Ability Levels and Text Types on Rational Deletion Cloze Test Performance of EFL University Students
Nantawan Senchantichai and Suphat Sukamolson

Research Methods
Why do Articles Get Rejected by International Journals?
Richard Watson Todd

Book Review
Literature in Language Education
Thanis Bunsom and Wareesiri Singhasiri


From the editors

As usual, rEFLections volume 16 is packed with insightful research articles on English language teaching and applied linguistics. This issue covers three research articles, one research methods article and one book review. What makes this issue particularly thought-provoking is the variety of ELT-related research conducted both domestically and internationally.

The first article, by Junko Noguchi, Keiko Takahashi and Katherine Thornton, Kanda University of International Studies, Japan, offers useful ways of creating a valid student questionnaire as part of a needs analysis survey for curriculum development. While several problems and challenges arose during the process, the three writers managed to overcome them and provide us with practical suggestions. In the second article, Thanissorn Pochanukul and Saowaluck Tepsuriwong, King Mongkut’s University of Technology Thonburi, Thailand, examine teachers’ strategies in handling their students’ responses. They deploy Richards and Lockhart’s (1994) feedback strategies framework in the analysis and interestingly reveal a complicated relationship between the responses and the feedback. The last research article, by Nantawan Senchantichai and Suphat Sukamolson, Chulalongkorn University, Thailand, is related to ELT assessment. The researchers investigate the effects of reading ability levels and two text types, narrative and expository, on rational deletion cloze test performance. They also study the EFL university students’ use of reading strategies while taking the cloze test.

In research methods, Richard Watson Todd, King Mongkut’s University of Technology Thonburi, Thailand, exposes us to the unfortunate reality that our articles may be rejected for publication for any number of reasons. He studies several reviewer reports in applied linguistics, categorises the comments and criteria, and gives us valuable guidelines for writing quality research articles. Last, Thanis Bunsom and Wareesiri Singhasiri, King Mongkut’s University of Technology Thonburi, Thailand, review Hall’s Literature in Language Education (2005). Their review gives an overall picture of how teachers of language and literature can make use of the book in their pedagogical and research practice.

We are certain that our readers will enjoy and benefit from all the articles as much as we have.

Sonthida Keyuravong and Thanis Bunsom
Editors


Creating a Comprehensible Item Pool for a Student Questionnaire: Challenges and Suggestions

Junko Noguchi, Kanda University of International Studies
Keiko Takahashi, Kyoto University of Foreign Studies
Katherine Thornton, Otemon Gakuin University

Abstract
A survey is one of the most common tools used to elicit students’ voices in the process of curriculum development. Even though it is a common tool, it can also be challenging to design question items that elicit the information that researchers are interested in. This challenge is heightened if the topic of the survey is something very abstract such as self-directed learning (SDL). In this paper, we will share how we overcame this challenge during the generation of an item pool for a needs analysis survey about SDL, which was conducted as part of a systematic evaluation of the current curriculum. The importance of administering multiple pilots and taking creative approaches to data elicitation when designing item pools will be discussed.

1. Introduction
Since its inception in 2001, the Self Access Learning Centre (SALC) at Kanda University of International Studies (KUIS) in Chiba, Japan, has promoted SDL through several optional learner training and autonomous learning modules (Noguchi and McCarthy, 2010; Yamaguchi et al., 2012). Learning advisors at KUIS are currently involved in a systematic evaluation of the current curriculum, using an adapted version of Nation and Macalister’s (2010) curriculum development framework to guide the process (cf. Thornton, 2012, 2013). One of the major stages of this design process was a needs analysis, which elicited views from learning advisors, teachers, senior management, and students. For an overview of the whole needs analysis project, please see Takahashi et al. (2013).

The focus of this paper is on the research method used to design an item pool for the needs analysis survey employed to gain a better understanding of students’ SDL needs and wants. While many examples of methods for collecting information on learners’ linguistic and even strategic needs can be found in the literature (Graves, 2000; Long, 2005; Munby, 1978; Richards, 2001), very few institutions have reported on how they have gained an understanding of learners’ SDL needs. By this we mean the skills and knowledge, including both cognitive and metacognitive skills, but also affective strategies, required for learners to be successful autonomous learners (Holec, 1981; Little, 1991; Wenden, 1998). In this paper, we would like to share the process used to generate an item pool for the survey, detailing the challenges we faced and the solutions we developed. Due to space restrictions, we will not discuss the wider needs analysis project in this paper. We hope our description of the survey design process can inform others who may want to develop a similar instrument.

2. Methodology

2.1 Context
KUIS is a small university, with around 3,000 students all majoring in foreign languages, and with a particular emphasis on fostering learner autonomy. The courses, which are the subject of a long-term curriculum development project, are voluntary courses designed to promote autonomy, offered through the SALC to all freshman students.

This survey formed part of the curriculum project, guided by the following research questions:

1. What are the SDL needs of KUIS freshmen?
2. How can the SALC best address them?

2.2 Participants
Although we were interested in the needs of freshmen, we decided to use second-year students as our target group, as they had already experienced the freshman year and might be able to reflect on what they had required to complete it successfully. In order to get voices from different populations of students, the survey was conducted across campus with students from different departments, and with differing levels of familiarity with the self-access centre.

2.3 The challenge: creating a comprehensible item pool
Although we felt it important that we consult students directly as part of the needs analysis, through our experience of working with them we were aware that they had very little awareness of the nature of SDL. How could we administer an instrument designed to discover their needs and wants about SDL if they did not clearly understand what this process entails?

One accepted method in survey design is to base the items on interviews or other data elicited from the target group (Dörnyei, 2003). This is the approach we decided to take in this study, using an open-ended written prompt combined with a follow-up interview, in the learners’ native language, Japanese, to generate the initial data. This data was then analysed and reworked into a list of statements which comprised the final item pool. By generating data directly from students, and using their own voices in these statements, we hoped to make each item comprehensible to the target group.

The most challenging part of the process was to develop a good preliminary prompt to elicit data for generating the item pool for the survey. This required a lot of brainstorming and piloting of the prompt with students. In developing the prompt, we followed a three-step process:

1. brainstorm and decide a question;
2. pilot the question with several students; and
3. analyse the results and feedback from the pilot to tweak the question.

All the prompts and follow-up interviews, and the final survey itself, were conducted in Japanese, the first language of the students, in order to maximise the respondents’ ability to express themselves. For the purposes of this paper, all of this is translated into English. It took three attempts to develop a suitable prompt that would give us the kind of information that we needed. The following section explains this process in more detail.

2.4 Designing a suitable prompt
In the first attempt to collect information about their SDL, we asked students: 1) what was difficult about SDL, and 2) what kind of support they felt they needed for their SDL.

The first version of the prompt was as follows (translated here from the original Japanese):

“Think back on your freshman year at KUIS. What are the things you felt were difficult in your self-directed study? What kind of support would have been helpful?”

We realized from the results of this first prompt that students had little time for SDL activities, since they have many assignments and other commitments. As for support, they gave us technical suggestions relating to the administration of the SALC, such as longer opening hours, which could not help us in developing a curriculum. As this question about their actual experience of SDL had not been effective, we decided to ask a more general question about their knowledge of SDL.

Our second pilot specifically asked students about the skills and knowledge that helped them to do SDL effectively.

The actual prompt was as follows (translated here from the original Japanese):

“Think back on your freshman year at KUIS. What kind of skills/knowledge/know-how would help you to do your self-study (study that you will voluntarily do with or without other students; homework is not included) effectively? Write down as many ideas as you can think of.”

As students are naturally often more focused on the linguistic aspects of language learning, this prompt, with its use of the words skills and knowledge, elicited a variety of cognitive strategies students used to learn languages in their spare time, reassuring us that they did have some understanding of the nature of SDL. However, these items, such as reading graded readers and watching movies with English subtitles, were too specific for our purposes. If we were to create an item pool from this data, we would have needed an exhaustive list of cognitive strategies, such as that in Oxford’s SILL (1990), which would have resulted in a lengthy survey. Their responses also did not feature the metacognitive aspects of learning, such as planning, which we knew to be important in SDL.

It was proving difficult to come up with the appropriate wording which would resonate with students in order for them to be able to understand what they were being asked to answer. In the third attempt we tried a different tactic. Rather than asking students about their own experience of SDL, which in many cases seemed minimal, we decided to be less personal, and have students describe others whom they see as effective self-directed learners.

The prompt we used was (translated here from the original Japanese):

“Think of students who do their self-directed studies (studying that you voluntarily do; homework is not included) effectively (regardless of their language ability). What kinds of characteristics do they have in common? Write down as many as you can think of. Among the characteristics you listed above, which characteristics do you think you need? Write down the characteristics with the reasons.”

This final technique proved very effective. The students’ answers included the kinds of skills that are commonly defined as SDL in the literature, such as setting goals and using time effectively (see below), and the number of answers increased. This approach seemed the most appropriate way for us to see into the students’ world.

In analysing the wording of the different prompts, we believe there were several reasons why this less personal but broader technique was effective. By asking students simply to describe the characteristics of other people, this broad prompt allowed them to stimulate their imagination and write down whatever occurred to them, without the restrictions which may have been caused by using words such as support, skills or knowledge. Additionally, by thinking of others, they did not need to be confined to their own experience but could reach out to all the possible options, including their ideals. Finally, as they were asked to describe actual people they knew, the respondents also had a concrete image upon which to base their answers. It is often easier to see the characteristics and actions of others than of oneself. The lesson we learned from the third trial is the importance of creativity.

Using this prompt, administered in written form then accompanied by a follow-up interview in which each student was asked to expand on or clarify their answers, we collected data until the answers reached the point of saturation (Corbin and Strauss, 2008) and no new answers were emerging. This happened after 11 students had been interviewed. We now had the concepts on which to base our item pool, expressed in the students’ own words. The next step was to organise them into individual statements, each expressing a single idea.
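For readers who want to operationalise this stopping rule, the following is a minimal sketch in Python of the saturation check described above. It is our illustration, not part of the original study: the coding here was done by hand, and the paper's own criterion was simply that no new answers were emerging. The codes_from function and the patience threshold are hypothetical names we introduce for the sketch.

from typing import Callable, Iterable

def collect_until_saturated(interviews: Iterable, codes_from: Callable,
                            patience: int = 2):
    # `codes_from` stands in for the manual coding of one student's
    # written prompt plus follow-up interview; `patience` (our assumption)
    # is how many consecutive interviews yielding no new codes we treat
    # as saturation (Corbin and Strauss, 2008).
    seen = set()   # all codes observed so far
    quiet = 0      # consecutive interviews adding nothing new
    for count, interview in enumerate(interviews, start=1):
        new_codes = set(codes_from(interview)) - seen
        seen |= new_codes
        quiet = 0 if new_codes else quiet + 1
        if quiet >= patience:
            return count, seen
    return count, seen

# Toy usage: saturation is declared once two students in a row
# mention only codes that have already been recorded.
data = [["goal setting"], ["time management", "goal setting"],
        ["goal setting"], ["time management"], ["goal setting"]]
n, codes = collect_until_saturated(data, lambda d: d)
print(n, sorted(codes))   # -> 4 ['goal setting', 'time management']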

3. Coding the data
The second step involved the researchers coding and categorizing the data gathered from the prompt, to make a list of items which could be included in the final survey to find out what aspects of SDL students felt they needed to learn more about.

In the process of analyzing data, the researchers agreed to eliminate the following groups of items because those items were not relevant to our research questions:

1. items which addressed issues which the SALC curriculum has no control over or remit for, such as those relating to linguistic proficiency (e.g. effective learners have good basic grammar knowledge);

2. items detailing specific cognitive strategies and activities (e.g. effective learners speak to themselves in English, read graded readers); as mentioned above, to include an exhaustive list of these would have produced an impractically long survey; and

3. items that are not teachable, such as personality traits (e.g. effective learners are active, motivated and keen to study).

After excluding the irrelevant items, the three researchers then examined the remaining items individually and developed rough codes for the data using a grounded research approach (Corbin and Strauss, 2008). The codes were refined and agreed upon through discussion, resulting in six overall categories, each with several subcategories, with a total of 23 items in all. Each item represented one single idea. Although we had to change the wording at times, as much as possible we preserved the students’ own expressions.

The six categories, which emerged from the coding process, resulted in the following statements (here translated into English):

Goal Setting
How to set clear learning goals
How to design a plan to achieve the goals I’ve set
Imagining my future
How to carry out the plan that I’ve made

Time Management
How to balance learning and relaxation
How to reduce the amount of wasted time
How to use my free time/commuting time for studying
How to understand deadlines and set priorities for things I need to do
How to get in the habit of studying at a regular time

Affective Factors
To have the confidence to not be scared of speaking English
How to actively tackle difficult things
How to be able to continue learning English

Resources
Asking teachers about ways to learn
How to find opportunities to talk with exchange students
Communicating with ELI teachers actively
How to have conversations with teachers on my own initiative
How to use the SALC in a way which suits my goals
Getting advice from teachers at the Writing Centre

Learning Activities
How to learn English in lots of different ways (not just sitting at a desk)
How to connect my interests with learning English
How to incorporate learning into my everyday life (by listening to music or watching movies etc.)
Finding the ways to learn that suit me

Learning Environment
Finding an environment where I can concentrate on studying alone


This item pool was then used to design the final survey, which was administered to just under 240 second-year students across the university departments. While that administration and its findings are not the subject of this paper, a copy of the survey can be found in the appendix. The use of four distinct categories rather than a Likert scale proved to be a limitation in the analysis stage of the project, and for this reason we would advise against replication studies using the exact same survey design.

4. Conclusion: Implications for survey design
Creating a survey is a notoriously complicated process, and although we were prepared for it to take some time, we were surprised at quite how time-consuming it was to produce a pool of items which we were confident would represent students’ current knowledge of SDL skills, expressed in language that they could understand. As we have explained, the fact that the topic was one of which students themselves do not necessarily have a clear understanding made it considerably more challenging. What was effective in our case was not only changing the wording (from “skills” to “characteristics”), but also changing the focus of the question from asking the participants to think about their own experiences of SDL to asking for their perceptions of those who are good at SDL. This shift from the first person to the third person seemed to free up students to respond more broadly, with a concrete image, that of other students, to describe.

The time spent developing an item pool from students’ own voices, based on a prompt that was itself the result of several pilots with follow-up interviews, meant that the final survey was comprehensible to students and elicited useful data for our needs analysis. In particular, we would recommend that anyone faced with a similar situation pay sufficient attention to the construction and piloting phases, which proved invaluable in our study. We hope our description of the survey design process can inform others who may want to develop a similar instrument eliciting information from learners on concepts of which they may not have a detailed conscious understanding or the metalanguage required to express that understanding.

References
Corbin, J. & Strauss, A. (2008). Basics of qualitative research: Grounded theory procedures and techniques (3rd ed.). Newbury Park, CA: Sage Publications.
Dörnyei, Z. (2003). Surveys in second language research: Construction, administration, and processing. Mahwah, NJ: Lawrence Erlbaum.
Graves, K. (2000). Designing language courses: A guide for teachers. Boston, MA: Heinle & Heinle.
Holec, H. (1981). Autonomy and foreign language learning. Oxford, England: Pergamon.
Little, D. (1991). Learner autonomy 1: Definitions, issues and problems. Dublin, Ireland: Authentik.
Long, M. H. (2005). Methodological issues in learner needs analysis. In M. H. Long (Ed.), Second language needs analysis (pp. 19-76). Cambridge, England: Cambridge University Press.
Munby, J. (1978). Communicative syllabus design. Cambridge, England: Cambridge University Press.
Nation, I. S. P., & Macalister, J. (2010). Language curriculum design. London, England: Routledge.
Noguchi, J., & McCarthy, T. (Eds.). (2010). Proceedings from JALT 2009: Reflective self-study: Fostering learner autonomy. Tokyo, Japan: JALT.
Oxford, R. L. (1990). Language learning strategies: What every teacher should know. Boston, MA: Heinle & Heinle.
Richards, J. (2001). Curriculum development in language teaching. Cambridge, England: Cambridge University Press.
Takahashi, K., Mynard, J., Noguchi, J., Sakai, A., Thornton, K., & Yamaguchi, A. (2013). Needs analysis: Investigating students’ self-directed learning needs using multiple data sources. Studies in Self-Access Learning Journal, 4(3), 208-218.
Thornton, K. (2012). Evaluating a curriculum for self-directed learning: A systematic approach. Independence, 55, 8-11.
Thornton, K. (2013). A framework for curriculum reform: Re-designing a curriculum for self-directed language learning. Studies in Self-Access Learning Journal, 4(2), 142-153.
Wenden, A. (1998). Learner strategies for learner autonomy. London, England: Prentice Hall.
Yamaguchi, A., Hasegawa, Y., Kato, S., Lammons, E., McCarthy, T., Morrison, B. R., Mynard, J., Navarro, D., Takahashi, K. & Thornton, K. (2012). Creative tools that facilitate the advising process. In C. Ludwig & J. Mynard (Eds.), Autonomy in language learning: Advising in action (pp. 137-153). Canterbury, England: IATEFL.

Authors:
Junko Noguchi taught at a public high school in Chiba, Japan after getting her M.A. in TESOL from Soka University of America, and is currently working as a learning advisor at Kanda University of International Studies. She is also a Ph.D. candidate at Temple University, Japan. Her research interests include articulatory phonology, self-directed learning and advising.
[email protected]

Katherine Thornton is a learning advisor and the Program Director of E-CO (English Café at Otemon), a self-access centre at Otemon Gakuin University, Osaka, Japan. After gaining an MA in TESOL from the University of Leeds, UK, she has worked as a learning advisor for six years, previously at Kanda University of International Studies. She is also the current president of the Japan Association of Self-Access Learning (JASAL). Her research interests include self-directed learning, curriculum development and learner autonomy.
[email protected]

Keiko Takahashi holds an MA in TESOL from the Monterey Institute of International Studies, California, USA. She is currently working as a learning advisor at Kyoto University of Foreign Studies. Her research interests are individualized learning, learner development, and language learning advising and coaching skills.
[email protected]


Appendix: Finalized questionnaire

Think back on your freshman year at KUIS. For the 22 items listed below, think about whether you would have liked the opportunity to learn about the following things in your first year, and choose the most suitable response from the four options given. (If you don’t have experience of a freshman year at KUIS (for example, if you transferred from another university), please answer by thinking about your experience this semester at KUIS.)

Answer Options
1. Yes, I couldn’t do this, so I would have liked the opportunity to learn about it.
2. Yes, I was able to do this to a certain extent, but I would have liked the opportunity to learn more about it.
3. No, I was able to do this to a certain extent, so I don’t think it’s necessary to learn about it.
4. No, I could already do this, so I don’t think it’s necessary to learn about it.

Getting advice from teachers at the Writing Centre
How to connect my interests with learning English
Imagining my future
Communicating with ELI teachers actively
How to use the SALC in a way which suits my goals
How to actively tackle difficult things
How to find opportunities to talk with exchange students
Asking teachers about ways to learn
How to have conversations with teachers on my own initiative
How to get in the habit of studying at a regular time
To have the confidence to not be scared of speaking English
How to be able to continue learning English


Finding the ways to learn that suit me
How to set clear learning goals
How to design a plan to achieve the goals I’ve set
How to carry out the plan that I’ve made
How to learn English in lots of different ways (not just sitting at a desk)
How to use my free time/commuting time for studying
How to balance learning and relaxation
How to understand deadlines and set priorities for things I need to do
How to incorporate learning into my everyday life (by listening to music or watching movies etc.)
Finding an environment where I can concentrate on studying alone


Relationship between Students’ Responses and Teacher’s Feedback Strategies

Thanissorn Pochanukul & Saowaluck Tepsuriwong
King Mongkut’s University of Technology Thonburi

Abstract
This paper investigates how teachers use feedback strategies to respond to different kinds of students’ responses in the pre-reading stage of a lesson. The data was taken from three micro-teaching sessions taught by novice teachers. The feedback strategies framework suggested by Richards and Lockhart (1994) was used to analyze the teachers’ feedback. The findings suggest a complicated relationship between students’ responses and teachers’ feedback. The teachers used one to four feedback strategies to deal with the students’ responses. A combination of feedback strategies used, however, seemed to lead to imbalanced turns in the IRF patterns of classroom interaction. Discussion and implications on the nature of teachers’ feedback and students’ responses are highlighted.

1. Introduction
In a typical language classroom, the picture of a teacher standing in front of the class, interacting and giving comments on students’ utterances, is very common. As part of the interaction, the teacher often simply says “Right. Well done!” or “Are you sure?” or “Umm… Maybe!” or “That’s not quite right. Try again.” This ordinary part of classroom interaction, however, is not as trivial as it appears, as these statements provide ‘feedback’: useful information fed back to learners to make them aware of the quality of their performance, which is the basic step required for learning improvement. A teacher’s feedback is, therefore, a vital part of cultivating students’ learning.

Feedback motivates students to make progress in their learning. It allows students to reflect upon their performance and see the effectiveness of their language use because, as Arends (1989: 380) points out, “Without knowledge of results, practice is of little value to students.” Knowing the results of their practice keeps students well-informed about the level of their success, and this encourages them to advance and/or find ways to work on their weaknesses; teachers’ feedback thus enables students to monitor their learning.

Feedback is considered a significant component of classroom interaction (Chaudron, 1988). This part of classroom discourse usually occurs when the teacher asks a question, a student gives an answer, and then the teacher provides a follow-up statement to the answer. This three-move exchange is commonly known as an IRF pattern which consists of Initiation, Response, and Follow-up, or feedback moves (Ur, 1991; Sinclair and Brazil, 1982; Sinclair and Coulthard, 1975).
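To make the unit of analysis concrete, one three-move exchange can be represented as a single record. The sketch below is our illustration, not part of the original studies; the class name and field names are our own, and the sample data is the exchange reported later in this paper as Extract 3.

from dataclasses import dataclass

@dataclass
class IRFExchange:
    initiation: str   # I: the teacher's question
    response: str     # R: the student's answer
    follow_up: str    # F: the teacher's feedback move

# The exchange from Extract 3 (Subject 3), segmented into the three moves.
exchange = IRFExchange(
    initiation="When we talk about chocolate, what do we think of?",
    response="Sweet.",
    follow_up=("Sweet, yes. Sweet is the taste of chocolate, "
               "but sometimes you may feel like it's a little bit bitter."),
)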



As the IRF pattern constitutes a vital part of classroom learning, these immediate constituents invite research to unveil the causal relationship between the moves. Pantakod (2010), for example, revealed a strong relationship between initiation and response moves, explaining that they were interrelated. Although students’ responses were clearly dependent upon teachers’ initiation, teachers’ questions were, in fact, determined by ‘expected responses’. Teachers tended to use questions that helped elicit the responses they expected to get from learners. Pantakod’s study, however, focused only on the first two moves in the exchange. He suggested further investigation into the relationship between feedback and responses, an area which has not been thoroughly explored.

Kaoropthai (2005) explored the use of feedback strategies in the classroom, focusing on the relationship between teachers’ beliefs and their actual practice in giving feedback. The findings revealed that only some of the feedback strategies used by the teachers matched their beliefs. In practice, obstacles arose and unexpected situations occurred so teachers could not apply the strategies they intended to use. Apart from their beliefs, teachers’ feedback might be influenced by many factors. One main factor seemed to be the students’ responses, as the two moves in the exchange are immediate constituents. Therefore, it is interesting to further investigate these moves focusing on the relationship between teacher’s feedback and students’ responses.

This study aims to cast more light on how teachers give feedback to different types of students’ responses. It is expected to clarify the relationship between these immediate constituents in order to answer the research question, “How do students’ responses lead to teachers’ feedback?” The types of teachers’ feedback strategies were matched with the different responses from students to identify common patterns that tended to co-occur. The findings are expected to be useful for teacher trainees, raising their awareness of how to make decisions about giving feedback and how to deal with students’ responses.

2. Literature review

2.1 Students’ responses
A response generally appears as the second move of the three-move exchange of classroom interaction (Sinclair and Coulthard, 1975; Cullen, 2002). It happens after the teacher asks a question. A student’s response is usually a reaction to the teacher’s question (Sinclair and Brazil, 1982). Responses can take various forms, non-verbal or verbal. Students may respond to teachers’ questions by being silent, nodding, raising hands or using other gestures. These non-verbal behaviours provide useful feedback to teachers about their teaching. Most gestures are very clear and explicit, while silence is likely to be ambiguous. It may imply either clear understanding or total confusion. In spite of its implicit nature, silence accounts for a large portion of students’ responses in class. Tsui (1995) found that about 40 percent of secondary ESL teachers’ questions received no response from students. Similarly, Thongmark’s (2002) and Pantakod’s (2010) studies in Thai university contexts pointed towards the occurrence of a number of silent responses in students’ reactions to teachers’ initiations. Thongmark (2002) further explained that the absence of students’ responses seemed to be influenced by factors ranging from not understanding the teachers’ questions to the pace of the teachers’ questions, limited world knowledge and language knowledge, as well as unfavorable attitudes towards English.

Verbal responses also provide a lot of meaningful information to teachers. Verbal responses may range from a single word to long sentences. McMurrey (2010 cited in Pantakod, 2010) classified them into lexical (word level), phrasal (phrase level), and sentence (sentence or clause level) groups. These different levels of responses require different levels of cognition and linguistic competence from students. The longer the responses, the higher the competence demanded. The number and quality of these responses thus reflect students’ knowledge, language ability and/or problems they may encounter.

Verbal responses can also be classified according to contents into answers and diversions (Sinclair and Brazil, 1982). Answers can be further grouped into ‘correct answers’, ‘tentative answers’, and ‘incorrect answers’. Diversions are verbal responses which show that students pay attention to the teacher’s questions but they do not catch the questions, do not understand or do not know the answer, so they ask for clarification or reveal their opposition. For example:

T: What is your least favorite subject?
S: What do you mean by “least favorite subject”?

All kinds of answers and diversions help teachers know more about students’ levels of language proficiency, what they know and do not know, as well as whether they understand or do not understand the lessons (Watson Todd, 1997). At a macro level, students’ responses provide useful information for teachers to make an important decision in their teaching on whether they should move on to the next lesson, provide a remedial action or repeat the whole lesson. Students’ responses, therefore, help teachers to improve their teaching in areas where students have difficulties and adjust the lesson to effectively achieve the learning goals. At a micro level, students’ responses provide useful information for teachers to immediately and appropriately react to the students or to provide instant feedback to students.

2.2 Teachers’ feedback
Feedback is defined as information fed back to individuals about the appropriateness of their actions or responses (Watson Todd, 1997). In class, teachers can provide this useful information in any form of communication to inform students about the quantity or quality of their performance in a learning situation (Cole and Chan, 1987). A teacher’s feedback generally functions as a follow-up or an evaluation move in the IRF pattern of classroom discourse. It assists students in evaluating their performance, provides corrections for imperfect utterances, asks for clarification, praises perfect renditions, or simply acknowledges the answers by giving backchannel cues such as ‘Mmm’ (Ellis, 1985).


Feedback is an important component of the learning process. “The learner needs to be told or shown how he is learning, to receive a judgment from a teacher on his performance” (Sinclair and Brazil, 1982: 44). Students need to check the adequacy of their performance and monitor their learning progress, as these are decisive steps for effective learning (Cole and Chan, 1987; Watson Todd, 1997). Teachers’ feedback carries information for learners to advance their learning through the process of reflecting upon how well they have performed, analyzing their weaknesses, and improving their learning based on their own reflection or on suggestions obtained from the feedback. The quality of teachers’ feedback, therefore, affects students’ learning. It guides students’ thinking and can lead to autonomy (Lewis, 2002) if the cycle of reflecting, analyzing and improving learning is highlighted so as to fortify the process of learning.

Teachers’ feedback also provides intrinsic motivation to students’ learning (Lewis, 2002). It drives students to improve themselves, as success usually brings further success. When students get positive and constructive feedback from teachers, their self-efficacy can be reinforced, and this encourages further improvement (Williams and Burden, 1997). Moreover, knowing how well they perform can be more motivating than marks or grades as it is more informative. Teachers’ feedback and appropriate suggestions can encourage students to use language to the best of their ability.

Teachers can use different kinds of feedback to deal with students’ responses. They can select from a range of, for example, positive or negative feedback, reinforcement or punishment, intended or non-intended feedback, evaluative or non-evaluative feedback, verbal or non-verbal feedback, intrinsic or extrinsic feedback, corrective or indicative feedback, and immediate or delayed feedback (Cole and Chan, 1987; Richards and Lockhart, 1994). These different forms of feedback contribute to learning in different ways, as they are suitable for different purposes and situations. For instance, even though immediate feedback is considered more effective, it may interrupt class activities and affect students’ motivation (Watson Todd, 1997), so the teacher may consider delayed feedback instead. Corrective feedback can help students get the right answers immediately; however, it may limit students’ opportunities for self-correction.

Therefore, in order to provide effective feedback to students, teachers need to make critical decisions concerning the choice of appropriate feedback. The literature on pedagogy suggests general guidelines for teachers, for example, to provide specific feedback to learners, to focus on critical points, and to give feedback that is appropriate to students’ level and needs and not loaded with metalinguistic terms (Watson Todd, 1997).

Moreover, teachers need to know strategies for applying feedback to students’ responses. The following strategies are suggested by Richards and Lockhart (1994).

• Acknowledging a correct answer. (Responding by saying, for example, “Yes” or “Right” to show that the teacher hears the answer and it is correct.)

• Indicating an incorrect answer. (Signaling to students that the answer is incorrect by saying, for example, “No, that’s not quite right.”)
• Repeating. (Echoing the student’s answer.)


• Expanding or modifying an answer. (Responding to an answer by providing more information.)

• Asking follow-up questions. (Asking students to clarify the answer.)
• Summarizing. (Paraphrasing or concluding students’ responses.)
• Praising. (Complimenting students, for example, by saying “Good” or “Excellent.”)
• Criticizing. (Commenting on the responses.)

Other feedback strategies include motivating and encouraging students to take part in the interactions, using gestures and other non-verbal communication to indicate errors, and transferring responsibility for feedback to peers (Arends, 1989).

These guidelines on feedback strategies are comprehensive and useful for teachers. They clearly suggest how teachers could react to different kinds of students’ responses. However, in a real classroom situation, teachers may face situations where they are required to make an abrupt, on-the-spot decision. Moreover, the relationship between teachers’ feedback and students’ responses might not be simple, and it requires critical decisions from teachers, as discussed earlier. This might be the reason why ‘zero feedback’ is also noticed in a classroom (Kaoropthai, 2005). Zero feedback is a situation when a teacher intentionally or unintentionally decides not to give any feedback to students’ responses. Zero feedback (either intentional or non-intentional) does not inform students of the correctness or incorrectness of their answers and may lead to students’ confusion or uncertainty about their performance; this, in turn, may limit learning (Cole and Chan, 1987). Issues on how teachers actually give feedback to students’ responses are, therefore, worth investigating and would contribute to better insights into the relationship between these two immediate constituents of classroom discourse.

3. Research procedures
The study focuses on the feedback strategies that teachers used to respond to different types of students’ responses. The data was collected from three micro-teaching classes taught by three teacher trainees. The classes aimed at the teaching of reading. Only the pre-teaching stages of the lessons were selected for analysis, as this part of the lesson aims at contextualizing and activating students’ background knowledge relevant to the text. It thus generally invited the teachers to use different kinds of questions as initiations to elicit responses from students, so richness of interaction was expected.

A video recording was used to record the subjects’ classroom interaction. The three subjects were asked for permission to record their lessons in a natural teaching setting. The interactions that occurred during the pre-teaching stages of each subject, which lasted from 9 to 15 minutes, were then transcribed verbatim. The students’ responses were classified into answers (correct, tentative, and incorrect answers) and diversions based on Sinclair and Brazil’s (1982) framework, as this framework is comprehensive and could be used to analyze both the content and the language of the students’ responses. Non-verbal responses (such as nodding and head shaking) as well as silence or zero responses were also taken into consideration and grouped under ‘others’. However, because of the unclear nature of non-verbal behaviours, only silence or zero responses were presented in the study. The teachers’ feedback strategies were classified based on Richards and Lockhart’s (1994) categories into: acknowledging a correct answer, indicating an incorrect answer, repeating, expanding or modifying a student’s answer, asking follow-up questions, praising, summarizing, and criticizing. This framework was selected for the analysis due to its practical nature and its coverage of common feedback strategies for classroom teaching. However, the framework focused primarily on verbal feedback. The researcher thus decided to include non-verbal strategies, and strategies that might be identified but did not match the main categories, in ‘others’. The frequencies of the responses and the feedback strategies were calculated. Data about students’ responses and teachers’ feedback strategies was also compared to see the relationship between these constituents in the interaction.
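As a rough illustration of the frequency and co-occurrence counts behind Tables 1-3 below (the coding in this study was done manually), a coded transcript can be reduced to pairs of a response type and the feedback strategies that followed it, then tallied. This is our sketch, not the authors' instrument; the function and label names are our own illustrative choices.

from collections import Counter

def tabulate(coded_turns):
    """coded_turns: (response_type, [feedback_strategy, ...]) pairs,
    one per student response in the transcript."""
    responses = Counter(resp for resp, _ in coded_turns)                 # cf. Table 1
    feedback = Counter(f for _, fs in coded_turns for f in fs)           # cf. Table 2
    pairs = Counter((resp, f) for resp, fs in coded_turns for f in fs)   # cf. Table 3
    total = sum(responses.values())
    percent = {r: round(100 * n / total, 2) for r, n in responses.items()}
    return responses, feedback, pairs, percent

# Toy usage: one correct answer met with two strategies, one silence
# met with zero feedback.
demo = [("correct", ["repeating", "acknowledging"]),
        ("silence", ["zero feedback"])]
print(tabulate(demo)[3])   # -> {'correct': 50.0, 'silence': 50.0}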

4. Findings

4.1 Numbers and types of students’ responses
Within the 9-15 minutes of the pre-teaching stage, the students from the three classes provided a total of 67 responses, as illustrated in Table 1. The total numbers of responses from the three classes were not much different: 27, 23 and 17 responses from Subjects 1, 2 and 3, respectively.

Table 1: Students’ responses during the pre-teaching stage

Students' responses                   Subject 1  Subject 2  Subject 3  Total  Percentage
Answers: correct                          12         14         12      38      56.72
Answers: tentative                         8          5          0      13      19.40
Answers: incorrect                         4          2          1       7      10.45
Diversions                                 1          0          0       1       1.49
Others (zero responses or silence)         2          2          4       8      11.94
Total                                     27         23         17      67     100

Most of the responses were ‘answers’ (86.57%), of which the majority were ‘correct’ (56.72%). Tentative answers accounted for 19.40%, while incorrect answers made up 10.45%. Noticeably, most of these answers were short and limited to the word level rather than phrases or sentences. In Extract 1, for example, the teacher used a picture to elicit students’ responses, and the answer obtained was simply a single-word response, ‘hacking’. The nature of this type of student response seemed to affect the teachers’ use of feedback strategies, and this will be discussed in more detail later.


T: What is he doing? (The teacher shows a picture of a hacker.)
S: Hacking.

(Extract 1: Subject 1)

In this study, diversions occurred only once in one class, when a student asked the teacher to repeat the question, as he could not clearly understand it, as shown in Extract 2.

T: What kind of data is there in your computer that you love to keep and watch alone?
S: Again, please.
T: What kind of data do you have in your computer?

(Extract 2: Subject 1)

Silence, which was grouped under the ‘others’ category, was noticed eight times (11.94%), suggesting that some questions from the teachers did not receive any response from the students. This lack of responses will be further investigated alongside the feedback that the teachers provided in reaction to such instances.

4.2 Numbers and types of teachers’ feedback strategies
Table 2 shows that the total number of feedback moves from the three classes was 123. During this short period of the pre-teaching stage, the amount of feedback each subject gave varied slightly: 35, 44 and 51 instances. The subjects employed four of the feedback strategies (acknowledging a correct answer, repeating, expanding or modifying an answer, and praising) listed in the analysis framework and three other strategies, namely giving zero feedback (13 times), providing a correct answer (4 times), and repeating the question (1 time).

Table 2: Teachers’ feedback strategies

Teachers' feedback strategies                      S1   S2   S3   Total  Percentage
1. Acknowledging a correct answer                   9   15    5     29     23.58
2. Indicating an incorrect answer                   0    0    0      0      0
3. Repeating students' responses                   12   14    9     35     28.46
4. Expanding / modifying an answer                 12   11    9     32     26.02
5. Asking follow-up questions                       0    0    0      0      0
6. Praising                                         3    4    2      9      7.32
7. Summarizing                                      0    0    0      0      0
8. Criticizing                                      0    0    0      0      0
9. Others (i.e. zero feedback, providing
   answers, repeating questions)                    8    4    6     18     14.63
Total                                              44   51   35    123    100


Repeating students’ responses, expanding or modifying an answer, and acknowledging a correct answer were the top three strategies that the subjects used, and they were used at similar frequencies (35, 32 and 29 times, respectively). Noticeably, the subjects did not make any use of ‘indicating an incorrect answer’, ‘asking follow-up questions’, ‘summarizing’, or ‘criticizing’. ‘Praising’, however, was noticed at a low frequency (9 times).

Comparing the number of feedback moves with the number of students’ responses, it can be clearly seen that the teachers’ feedback outnumbered the students’ responses. This implies that the subjects often used more than one feedback strategy with a single response. The relationship between these two immediate constituents is illustrated in the following section.

4.3 Relationship between students’ responses and teachers’ feedback
Since the majority of students’ responses were answers, which were classified into correct, tentative and incorrect answers, the teachers’ feedback used with these three kinds of answers is summarized in Table 3 below.

Table 3: Feedback strategies used with answers

Feedback strategies               Correct  Tentative  Incorrect
Repeating students' responses        30        4          1
Acknowledging a correct answer       24        5          0
Expanding or modifying answers       17       14          1
Praising                              4        4          0
Giving zero feedback                  3        0          3
Providing a correct answer            0        0          3

1) Correct answers
It seems that the subjects dealt with correct answers most frequently by repeating and acknowledging the answers (30 and 24 times, respectively), and many of the correct answers (17) were modified. Moreover, a combination of feedback strategies was noticed with a single response. These feedback strategies might have been influenced by the nature of the students’ answers, which were quite short and seemed to be limited to the word level.

T: When we talk about chocolate, what do we think of?
S: Sweet.
T: Sweet, yes. Sweet is the taste of chocolate, but sometimes you may feel like it’s a little bit bitter.

(Extract 3: Subject 3)


Extract 3 shows that Subject 3 used three feedback strategies with one response: namely, repeating (“Sweet”), acknowledging (“yes”), and expanding or modifying a student’s answer (“Sweet is the taste of chocolate, but sometimes you may feel like it’s a little bit bitter.”). The subject modified the answer to add more information in order to cover what she expected to elicit from the students. Pantakod (2010) noticed similar instances in the teachers’ talk, concluding that teachers’ expectations determined the discourse.

Moreover, it was observed that, within a single response-feedback pair, the subjects would employ up to four feedback strategies. Extract 4 provides an example of the use of four feedback strategies with one correct answer.

T: Anything else do you think of?
S: Ocean, underwater.
T: Ocean, underwater. Yes. Very good. Other locations can be in the forest, in town, etc.

(Extract 4: Subject 2)

In Extract 4, the teacher repeated the student's answer ("Ocean, underwater"), acknowledged the answer ("yes"), praised the students ("very good"), and expanded the answer by mentioning other possible locations. Similar to Subject 3, Subject 2 elaborated on the students' answer to cover the information he had expected to elicit in his initial initiation.

Noticeably, repeating was very often used as the first feedback strategy for dealing with correct answers. This might be because echoing students’ answers seems to be a simple and natural reaction for teachers. It also helps teachers to buy more thinking time before moving on with the next move. After that, the teacher may acknowledge that the answer was correct, or praise the students for their good performance, and/or modify the answer.

2) Tentative answers
As for tentative answers, the subjects used the strategy of expanding or modifying the answers the most (14 occurrences). Acknowledging, repeating, and praising were also used (5, 4, and 4 times, respectively). Moreover, combinations of these feedback strategies were noticed, as seen in Extracts 5 and 6.

T: Sleeping… what? (The teacher attempted to elicit the words 'sleeping beauty'.)
S: Sleeping princess.
T: It is close. Good try. Actually the story is "Sleeping Beauty".

(Extract 5: Subject 2)

T: What will happen if a virus gets into your computer?
S: Com hang.
T: Hang! Computer hang! You mean your files will be damaged. The virus will damage or destroy your files.

(Extract 6: Subject 1)


In Extract 5, Subject 2 intended to elicit the answer 'sleeping beauty' from the students. However, the student did not supply the exact answer, so he acknowledged that the answer was close to the correct one, praised the student for the attempt, and modified the answer. Extract 6 reveals a similar instance, where the teacher repeated a student's tentative answer before modifying it. Noticeably, the feedback strategies used with correct and tentative answers were similar.

3) Incorrect answers
Although incorrect answers occurred only a few times in the study, the subjects, interestingly, tended to ignore them or to provide zero feedback, as can be seen in Extract 7 below.

T: How often did you use it (a computer)?
S: Through night long.
T: Ø (The teacher did not say anything and moved on to other questions.)

(Extract 7: Subject 2)

Another feedback strategy that the subjects used with incorrect responses was providing a correct answer to the students, and it was usually found as a single strategy, not accompanied by other feedback strategies. Extract 8 below illustrates the use of this feedback strategy with an incorrect answer.

T: What does it (clue) mean?
Ss: คำตอบ (Answers)
T: 'Clue' here means something that helps you find out the answer.

(Extract 8: Subject 3)

4) Diversion
In this study, a diversion response occurred only once. It occurred when a student did not understand the teacher's question and asked the teacher to repeat it. The teacher dealt with the response by paraphrasing the question.

T: What kind of data in your computer do you love to keep and watch alone?
S: Again please.
T: What kind of data do you have in your computer?

(Extract 9: Subject 1)

5) Zero responses
Noticeably, zero responses, or silence, occurred 8 times, and five of these silent responses received no reaction, or zero feedback, from the teachers: the teachers abandoned the question and moved on to the next one. For the other three silent responses, the subject himself provided the answer (1 time), repeated the question (1 time), or provided an incomplete sentence as a hint (1 time).


5. Discussion and implications
The findings indicate that the subjects used a limited range of feedback strategies to react to students' responses in the pre-teaching stage. Three main feedback strategies, namely repeating students' responses, acknowledging correct answers, and expanding or modifying students' answers, were employed most frequently, while indicating an incorrect answer, asking follow-up questions, summarizing, and criticizing were not observed. A number of instances of zero feedback were also noticed. Moreover, combinations of feedback strategies were often used with a single response. These main findings lead to the discussion of four main points below.

5.1 Most common feedback strategies
Unsurprisingly, the most common feedback strategies were repeating, acknowledging, and expanding or modifying students' answers. Repeating (or echoing) and acknowledging answers perform an important function in classroom discourse (Richards and Lockhart, 1994): both accept the answer and inform the student that the teacher has heard and is attending to the response produced. Repeating and acknowledging students' answers are considered neutral or non-evaluative (Cole and Chan, 1987). They are, thus, unthreatening, and this might explain why they seemed to be the teachers' most common feedback strategies.

Expanding or modifying students' responses was also used quite often in the study. This might be because the function of this feedback strategy matches the teacher's role as a resource: teachers usually play an important part in providing explanations that clarify students' understanding and in giving information that contributes to students' learning.

However, in spite of its advantages in providing useful input and language exposure for learners, extending students' responses could have negative effects on language learning. Since teachers usually occupy the more powerful position in classroom discourse and commonly hold more information, they produce longer turns in the interaction, while students' responses are comparatively short. In this study, for example, most of the responses were at the word level, and the teachers therefore attempted to extend the answers. This leads to an imbalance in the power relationship and makes classroom discourse distinctive: full of IRF patterns, with a high proportion of teacher talk time (Sinclair and Coulthard, 1975). Striking a good balance between teachers' and students' communication time through the appropriate use of feedback strategies is, therefore, complicated and demands careful attention from teachers.

Interestingly, all of the feedback noticed in the study focused on content rather than on the form of the language. This finding might be influenced by the nature of the pre-reading stage of the lesson, where the aim of teaching is to contextualize and to activate students' background knowledge before reading. Moreover, it could be noticed that when mistakes occurred, the subjects corrected them without explicitly pointing out the language problems to students, as the focus was on content (e.g. S: "Com hang." T: "You mean the files will be damaged."). The subjects seemed to be primarily focused on the purposes of the pre-reading stage.


5.2 Unpopular feedback strategies
Surprisingly, indicating an incorrect answer, asking follow-up questions, summarizing, and criticizing were not observed in the study. Students' incorrect answers tended to be ignored by the subjects, who were novice teachers. This might be because these answers did not match their expectations; novice teachers are likely to attend to the expected responses that help them proceed to the next stage. Such unintended feedback (ignoring incorrect or unexpected answers), if wrongly interpreted by learners, may lead to negative effects such as de-motivation or feelings of discouragement (Williams and Burden, 1997). Moreover, non-learning and false learning may occur, as students do not get adequate information to guide their further development.

As for asking follow-up questions, teachers need to be trained and encouraged to use this technique more, as it is an important communication skill which helps make classroom discourse more natural, meaningful, genuine, and communicative (Cullen, 1998). Asking follow-up questions shows that the teacher is genuinely interested in the content of the interaction (not simply focused on the lesson plan) and helps elicit more responses from students. This, in turn, helps balance the teacher-student power relationship, as the teacher's power is shared with students while ideas are communicated. Moreover, when a genuine conversation occurs, students' talk time increases, and a more student-centred lesson and a favorable class atmosphere can be achieved as well.

5.3 Zero feedback and zero responses
A small number of instances of zero feedback and zero responses was also noticed in this study. As discussed earlier, zero feedback carries no information fed back to learners, so they do not receive adequate information to reflect upon their performance. This lack of feedback limits learning. Moreover, when zero feedback is matched with zero responses, a period of silence occurs. Silence, however, is not simply a vacuum. It carries meaningful information to teachers and can be interpreted in many different ways, ranging from confusion to negotiation of meaning to thinking time. Because of its unclear nature, silence can be a frustrating period, especially for novice teachers, so they should be well trained in how to interpret students' zero responses or silence, how to cope with it, and how to use wait time or elicitation techniques (Richards & Lockhart, 1994). Silent responses, therefore, need careful attention from teachers.

5.4 Nature of teachers' feedback and students' responses
In the study, an imbalance between the students' responses and the teachers' feedback was noticed. Most students' responses were at the word level; only some were at the phrase or sentence level. Pantakod (2010) and Dalton-Puffer (2006) also noted that students usually produce a large number of lexical responses which are very short and typically consist of one word, such as yes/no, a noun, or a verb. This might result from their low level of language proficiency and/or the nature of the teachers' questions or elicitation techniques. In this study, for example, the subjects sometimes used an incomplete sentence as a cue for students to fill in the missing word, a technique which limits students' answers to a phrase or a word. Longer responses from students could be promoted or encouraged by the teacher's use of appropriate questioning (Dalton-Puffer, 2006). More use of divergent and referential questions to stimulate thinking, for example, should be taken into consideration (Richards and Lockhart, 1994).

In addition, the combinations of feedback strategies used by the teachers clearly contributed to the length of the teachers' turns. Up to four feedback strategies were used with a single response: the teachers acknowledged and repeated the response, praised the students, and also expanded or modified the answer. The length of the teachers' feedback intensified the imbalance between teacher and student contributions in the IRF pattern of classroom discourse. The I move typically took the form of a question or statement by a teacher, while the R was a word-level response from students. The F, on the other hand, appeared as a combination of feedback strategies building upon the R. The feedback move, thus, seems to be the longest and most complicated of the three constituents.

This relationship between teachers' feedback and students' responses can be viewed both positively and negatively. On the bright side, the combination of feedback strategies adds learning value to the discourse, providing information with which learners can evaluate their responses and expanding the information available, which directly contributes to effective learning. The IRF pattern is, thus, also known as the IRE, or Initiation-Response-Evaluation, pattern of interaction (Hall and Walsh, 2002).

In contrast, the lengthened F move observed in this study reflects the fact that classroom discourse differs from genuine communicative discourse, where IRF patterns rarely occur (Long and Sato, 1983). The length of the F move signals the need for measures to promote students' talk time and to introduce more genuine communication into classroom discourse. A teacher should sometimes move out of an uninterrupted IRF pattern and establish a more communicative interaction with learners (Waring, 2009). Teachers' feedback should be more open and varied, including communicative functions that allow students to initiate, negotiate and co-participate, rather than only repeating or acknowledging responses, adding information, and/or providing correct answers. Spoken discourse outside the classroom is complicated and rich, with a variety of communicative functions (Hoey, 1992).

Teachers' feedback strategies for incorrect answers should also be carefully considered. In this study, there were certain cases where the teachers tended to ignore incorrect answers, and sometimes they supplied the correct answers too soon, without attempting other strategies such as paraphrasing or repeating the question. Negotiation of meaning, through requesting further clarification from students, and opportunities for students to take a greater part in the interaction should thus be encouraged.

6. Conclusion
Since teachers' feedback functions not simply to inform students of correct answers but also to clarify and consolidate learning, it should contain useful information and be precise, concise and thought-provoking for learners (Cole and Chan, 1987; Cullen, 1998). More emphasis, therefore, should be placed on how effectively teachers' feedback can contribute to learning and promote meaningful communicative interaction in a language class. Teachers, especially novices, should be trained in how to react to students' errors, correct and incorrect answers, and other kinds of responses. These issues should be highlighted in teacher education.

References
Arends, R. I. (1989). Learning to teach. Singapore: McGraw-Hill.
Chaudron, C. (1988). Second language classrooms: Research on teaching and learning. New York, NY: Cambridge University Press.
Cole, P. & Chan, L. (1987). Teaching principles and practice. Sydney, Australia: Prentice Hall.
Cullen, R. (1998). Teacher talk and the classroom context. ELT Journal, 52(3), 179-187.
Cullen, R. (2002). Supportive teacher talk: The importance of the F-move. ELT Journal, 56(2), 117-127.
Dalton-Puffer, C. (2006). Questions as strategies to encourage speaking in content-and-language-integrated classrooms. In Uso-Juan, E. & Martinez-Flor, A. (Eds.), Current trends in the development and teaching of the four language skills (pp. 187-214). Berlin, Germany: Mouton de Gruyter.
Ellis, R. (1985). Understanding second language acquisition. Oxford, England: Oxford University Press.
Hall, J. K. & Walsh, M. (2002). Teacher-student interaction and learning. Annual Review of Applied Linguistics, 22, 186-203.
Hoey, M. (1992). Some properties of spoken discourse. In Bowers, R. & Brumfit, C. (Eds.), Applied Linguistics and English Language Teaching: Review of ELT 2/1 (Vol. 2, pp. 65-84). London, England: Macmillan.
Kaoropthai, C. (2005). Teachers' beliefs and practice concerning feedback strategies (Unpublished master's thesis). King Mongkut's University of Technology Thonburi, Bangkok, Thailand.
Lewis, M. (2002). Giving feedback in language classes. Singapore: RELC.
Long, M. & Sato, C. (1983). Classroom foreigner talk discourse: Forms and functions of teachers' questions. In Seliger, H. & Long, M. (Eds.), Classroom-oriented research in second language acquisition (pp. 268-285). Rowley, MA: Newbury House.
McMurrey, D. (2010). Words, phrases, clauses. Retrieved August 2, 2010, from http://www.io.com/~hcexres/style/word_phrase_clause.html
Pantakod, P. (2010). A study of teachers' initiations and students' responses (Unpublished master's thesis). King Mongkut's University of Technology Thonburi, Bangkok, Thailand.
Richards, J. C. & Lockhart, C. (1994). Reflective teaching in the second language classroom. Cambridge, England: Cambridge University Press.
Sinclair, J. M. & Brazil, D. (1982). Teacher talk. Oxford, England: Oxford University Press.
Sinclair, J. M. & Coulthard, R. R. (1975). Towards an analysis of discourse. Oxford, England: Oxford University Press.
Thongmark, R. (2002). Teachers' questions and students' responses in foundation English courses at Prince of Songkla University, Hat Yai campus (Unpublished master's thesis). Prince of Songkla University, Songkhla, Thailand.
Tsui, A. B. M. (1995). Introducing classroom interaction. London, England: Penguin.
Ur, P. (1991). A course in language teaching. Cambridge, England: Cambridge University Press.
Waring, H. Z. (2009). Moving out of IRF (Initiation-Response-Feedback): A single case analysis. Language Learning, 59(4), 796-824.
Watson Todd, R. (1997). Classroom teaching strategies. Hertfordshire, England: Prentice Hall.
Williams, M. & Burden, R. L. (1997). Psychology for language teachers: A social constructivist approach. Cambridge, England: Cambridge University Press.

Authors:
Thanissorn Pochanukul was an MA participant in the Applied Linguistics (ELT) programme at the School of Liberal Arts, KMUTT. She worked under Saowaluck Tepsuriwong's supervision on the special study about students' responses and teachers' feedback.
[email protected]


Study of Reading Strategies and Effects of Reading Ability Levels and Text Types on Rational Deletion Cloze Test Performance of EFL University Students

Nantawan Senchantichai and Suphat Sukamolsan
Chulalongkorn University

Abstract
This study aims to investigate the effects of reading ability levels and of two text types, narrative and expository, on rational deletion cloze test performance, and to study the students' use of reading strategies while taking the cloze test. One hundred seventy-four first-year university students participated in this study. They were assigned to three groups of high, average and low reading ability. The instruments included a rational deletion cloze test and a reading strategies questionnaire. The interaction effect of reading ability levels and text types on cloze test performance was found to be nonsignificant. However, each of these two variables had a significant effect on cloze test performance, with large effect sizes. The use of reading strategies by the three reading ability groups while working on the cloze test was also found to differ significantly.

Key words: cloze test, rational deletion cloze test, reading ability, reading strategies

1. Introduction
Cloze tests have been extensively used, for more than 30 years, as completion measures aimed at tapping reading skills. The cloze procedure is regarded as an integrative method of assessment, since the completion of cloze items requires simultaneous processing of several linguistic components (Madsen, 1983, cited in Keshavarz and Salimi, 2007). Many studies on the concurrent validity of cloze procedures (e.g. Oller, 1972; Irvine et al., 1974; Stubbs and Tucker, 1974; Alderson, 1979; Brown, 1980; Hinofotis, 1980) show high correlations between cloze tests and standardized tests, and with their sub-tests. This has led to the assumption that the cloze test can be used as a measure of overall proficiency in English as a second language (Saito, 2003) as well as a measure of reading comprehension (Alavi, 2005).

However, the precise language abilities required by a given cloze test and the effects of cloze methods — fixed-ratio, rational, and multiple-choice — have been controversial issues. Among these three methods, research on fixed-ratio cloze procedure, with every nth word deletion, has been the focus. The advocates of cloze procedure claim that cloze tasks involving the discourse processing ability can measure reading comprehension at the macro level (Oller, 1979 cited in Chapelle and Abraham, 1990; Chavez-Oller et al., 1985; Jonz, 1990; McKenna and Layton, 1990; Fotos, 1991). However, researchers like Alderson (1980, 1983, and 2000) and Cohen (1998) regard cloze tests as only measures of local-level reading ability.
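To make the fixed-ratio procedure concrete, the sketch below builds a simple every-nth-word cloze from a short passage. This is an illustrative reconstruction of the general technique only, not the procedure of any study cited here; the function name, passage, and deletion ratio are all invented for the example.

```python
import re

def fixed_ratio_cloze(text, n=7, blank="_____"):
    """Delete every nth word to create a fixed-ratio cloze passage,
    returning the gapped text and the answer key."""
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        # Store the deleted word (punctuation stripped) under its item number.
        answers[len(answers) + 1] = re.sub(r"\W", "", words[i])
        words[i] = f"({len(answers)}) {blank}"
    return " ".join(words), answers

passage = ("The ant worked hard through the summer, storing food for the "
           "winter, while the grasshopper sang and played every day.")
gapped, key = fixed_ratio_cloze(passage, n=7)
print(gapped)
print(key)
```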



To construct cloze tests that evaluate reading comprehension, Alderson (1979) suggested that the deletion criteria should be derived from aspects of the reading process, so as to ensure that test takers relate different pieces of information beyond the clause boundaries of the deleted word in order to restore the gap; the tests could thereby measure 'higher order processing abilities.' Bachman (1985) suggested the development of a rational deletion cloze test, "a cloze test of specific abilities through the use of a rational deletion procedure." He proposed the hierarchical structure of written discourse as the principal basis for classifying and selecting words to be deleted, since "not all deletions in a given cloze passage measure exactly the same abilities" (Bachman, 1985: 535). These criteria were derived from discourse processing theory, whose fundamental principle asserts that learners proceed through a text by using both micro-level and macro-level text processing strategies (Read, 2000: 107). Although frequently recommended in cloze testing, Bachman's category of cloze items has been employed in few studies. Only Sasaki (2000) and Yamashita (2003) have used this category, in their coding schemes for analyzing subjects' self-reports on the cloze test-taking process, but not as the classification for cloze deletion that Bachman suggested.

In conducting language testing research with a focus on test construction, researchers need to take into account factors that can affect performance on language tests (Bachman, 1995: 155). In the field of cloze testing, there is a vast amount of research on different variables, such as deletion ratio, scoring systems, passage difficulty, and method of student response. Text type, however, is a variable that has not received much attention in cloze test research. This may stem from the standard practice of cloze testing, which employs only one passage of a certain length. Nevertheless, this practice has often been criticized, since a single text cannot be a representative sample of the language (Klein-Braley, 1997: 59).

Research evidence suggests that text type is related to reading comprehension (Alderson, 2000). Narrative and expository are the two main text types (Koda, 2005), and they have different effects upon language learners; narrative text appears to be easier to understand and monitor than expository text (Alderson, 2000; Koda, 2005). While research on reading assessment has studied the relationships of text types to reading comprehension (Brantmeier, 2005), only a handful of studies to date have concentrated on the effects of text types on a cloze task. Among those few is Wu's (1994) work, which found narrative texts more suitable than expository texts for measuring students' reading comprehension.

Another perspective that only a small number of research studies on cloze tests seem to pay attention to is that of the test takers themselves. In a normal reading situation, a reader concentrates only on reading strategies that enable him or her to interpret the text, whereas in the testing situation a test taker not only has to be concerned with interpreting the text but also "needs to develop different strategies to interpret the test as well as to complete the task….[t]he strategies applied in the testing situation vary with test tasks" (Francis, 1999: 46). There are few studies investigating reading strategies in cloze testing, except for Kletzien's (1991) and Lu's (2006). They found that individual test takers used different reading strategies in restoring the cloze blanks, and both seemed to agree that research on reading strategies in cloze test performance would yield significant information for reading education.

Traditionally, cloze tests are regarded as measures of reading comprehension. Test takers are required to search for "a distribution of elements" in restoring cloze gaps (Weaver, 1965: 127, cited in Raymond, 1988: 91) and to fill the gaps using the surrounding words and context (Paris and Jacob, 1984: 2087). Because cloze tests correlate highly with other measures of language proficiency (e.g. Oller, 1972; Irvine et al., 1974; Stubbs and Tucker, 1974; Alderson, 1979; Brown, 1980; Hinofotis, 1980), high achievers on cloze tests tend to achieve highly on reading comprehension tests. However, there are only a few studies on rational cloze testing with second-language learners (e.g. Bensoussan and Ramraz, 1984; Hale et al., 1989; Jonz, 1990; Abraham and Chapelle, 1992; Sasaki, 2000; Yamashita, 2003). Findings from some of these studies suggest that rational deletion cloze tests perform well in differentiating good and poor readers (Yamashita, 2003).

Accordingly, it is interesting to explore whether the rational deletion cloze test can be used as a measure of English as a Foreign Language (EFL) reading comprehension that differentiates students of different reading ability. Since few studies have employed Bachman's (1985) classification of cloze items in cloze testing, it is also interesting to explore whether the rationale proposed by Bachman can help generate a cloze test consisting of items that elicit different types of information, ranging from clause-level to text-level information. And, since the traditional cloze test has been criticized for its use of a single text, it is worth exploring whether the test results confirm the differences between text types when both narrative and expository texts are included, as these two text types have been predicted to have different effects upon language learners. Finally, if the rational deletion cloze test can indeed serve as a measure of reading comprehension, it is also interesting to explore whether students use different reading strategies in their cloze test performance.

This paper is part of a larger study which included data from retrospective interviews to gain in-depth information about the use of reading strategies in filling cloze gaps and to validate the test construct. The focus of this paper is restricted to the following three research questions:

1. Do students’ different reading ability levels have a significant effect on their rational deletion cloze test performance?

2. Do different text types have different effects on the rational deletion cloze test performance of the students with different reading ability?

3. Are there differences in the use of reading strategies by students with different reading ability levels in their taking the rational deletion cloze test comprising two different text types?


2. Methods

2.1 Participants
One hundred seventy-four university students were randomly selected to participate in this study. They were then assigned to groups representing readers of high ability (n = 58), average ability (n = 58) and low ability (n = 58). The reading test used to place these participants consisted of 40 multiple-choice test items. It was developed by the researchers and had reliability values (Cronbach's alpha) of 0.80 (the 2009 academic year) and 0.84 (the 2010 academic year). Based on the scores of the test, students at or above the 70th percentile rank were identified as readers of high ability, students between the 69th and the 35th percentile ranks were identified as readers of average ability, and those below the 35th percentile rank were identified as readers of low ability. Table 1 shows the mean scores of the reading test and the differences among the three reading ability groups.

Table 1 Mean scores, standard deviations and standard errors of the reading test

Group                           N     Mean      Std. Deviation   Std. Error
Low reading ability group       58     9.4310       1.63420        .21458
Average reading ability group   58    16.1207       2.34766        .30826
High reading ability group      58    25.0345       4.28365        .56247
Total                          174    16.8621       7.05908        .53515
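For readers who wish to replicate this grouping procedure, a minimal sketch is given below. The scores and variable names are hypothetical, and the exact treatment of boundary percentiles is an assumption based on the cut-offs described above.

```python
import numpy as np

def assign_ability_groups(scores):
    """Assign each reading-test score to a low/average/high ability group,
    using the 35th and 70th percentile ranks as cut-offs (as in this study)."""
    scores = np.asarray(scores)
    p35, p70 = np.percentile(scores, [35, 70])
    return np.where(scores >= p70, "high",
           np.where(scores >= p35, "average", "low"))

# Hypothetical usage with invented reading-test scores (maximum = 40):
scores = np.random.default_rng(1).integers(5, 35, size=174)
groups = assign_ability_groups(scores)
print({g: int((groups == g).sum()) for g in ("low", "average", "high")})
```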

2.2 Instruments
In order to address the three research questions, two instruments, as listed below, were employed with the participants. A description of each instrument is provided in the sections that follow.

• the rational deletion cloze test
• the reading strategies questionnaire

a. The rational deletion cloze test
Two passages from pre-intermediate-level EFL textbooks, "Old Age in Present Society" (Day et al., 1999) and "The Ant and the Grasshopper" (Dos Santos, 2007), were adapted for use in this study. The passages were 254 and 257 words in length and were determined to be at the fifth-grade readability level using the Flesch-Kincaid Grade Level formula (Child, 2004). The pre-intermediate-level passages were used in order to facilitate the test-taking process, as none of the participants were familiar with the constructed-response cloze task. Another consideration in selecting the texts was strategy use. Researchers (e.g. Kletzien, 1986, and Bednar, 1987, cited in Kletzien, 1991; Paris et al., 1996, cited in Hudson, 2007) suggest that the use of reading strategies declines as the reading task becomes harder. Thus, it was hoped that the given cloze texts would stimulate the students in their cloze test performance to a certain degree.
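As a point of reference, the standard Flesch-Kincaid Grade Level formula is 0.39 (words/sentences) + 11.8 (syllables/words) - 15.59. A minimal sketch of how a passage's grade level could be estimated is shown below; the vowel-group syllable counter is a rough heuristic, not the exact procedure used by Child (2004).

```python
import re

def flesch_kincaid_grade(text):
    """Rough Flesch-Kincaid Grade Level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59. Syllables are approximated
    by counting vowel groups, a common heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade(
    "The ant worked hard all summer. The grasshopper sang and played."), 1))
```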


These two passages had undergone the selection and trial processes. “The Ant and the Grasshopper,” an Aesop’s fable, was used as a narrative cloze. This story was expected to activate the participants’ world knowledge due to text familiarity. “Old Age in Present Society” was used as an expository cloze. This passage was also expected to activate the participants’ world knowledge in terms of cultural familiarity. The rhetorical pattern of organization of the expository text is compare/contrast. This pattern of organization was used since previous studies have suggested that learners of different levels seem to be sensitive to compare/contrast text structure (Meyer and Freedle, 1984; Ghaith and Harkouss, 2003).

Each cloze text consisted of 20 items. Each blank, or item, required one word. The students responded to the cloze items by supplying their own words. The rationale for item deletion used in this study was adopted from Bachman’s (1985) classification of cloze item types. These types of items are as follows.

1. The “within clause” item type. This item type requires the information within the clause where the cloze blank appears as a source of information for gap filling.

2. The “across clause, within sentence” item type. This type of cloze item requires the information across clause, but within the boundary of the sentence where the cloze blank appears as a source of information for gap filling.

3. The “across sentence” item type. This third type of item requires the students to read beyond the sentence where the gap appears in order to find source of information to restore the gap.

4. The “extratextual” item type. This last type of item requires the information outside the text boundary. The students have to relate what they have read to their world knowledge in order to restore the gap.

From the analysis of the two selected texts, it turned out that these two simplified texts had similar ratios of different types of text information. Both consisted of approximately 45% of the clause text information, 45% of the intersentential text information, and 10% of extratextual information. These ratios were maintained in the two cloze texts, and thus resulted in the following numbers of cloze items for each level of text information: six “within clause” items, three “across clause, within sentence” items, nine “across sentence” items, and two “extratextual” items. The average deletion ratio for each cloze text was 1:9. (The average deletion rate for the narrative cloze text was every 9.65 words, and the average deletion rate for the expository cloze text was every 9.1 words.) The six “within clause” items require three content words (a noun, a pronoun, an adverb) and three function words (a preposition, a collocation and a conjunction under clause). The three “across clause, within sentence” items require two content words (a verb and an adjective), and one function word (a negation). The nine “across sentence” items require seven content words (four nouns, one verb, an adjective, and one adverb) and two function words (the conjunctions above clause). The two ‘extratextual” items require two content words (a noun and a verb).
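A quick check of the item counts against the target ratios stated above:

```latex
\text{clause-level items: } 6 + 3 = 9 \;\Rightarrow\; \tfrac{9}{20} = 45\%, \qquad
\text{across-sentence items: } \tfrac{9}{20} = 45\%, \qquad
\text{extratextual items: } \tfrac{2}{20} = 10\%.
```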


To validate the test, eight lecturers in English were asked to provide retrospective data upon their completion of the two cloze texts (narrative and expository). The decision not to include native speakers of English in this task was based on the researchers' assumption that the way Thai teachers form their ideas about the cloze tasks would be similar to the students', given their similar educational background. All eight lecturers used the same types of information as had been designed into the test to restore the cloze gaps in both the narrative and the expository texts. The agreement ratio was 100%.

An acceptable alternative scoring procedure was used in this study. The correct answers from the lecturers and the students during the trial of the test were used as alternatives in the scoring key for acceptable responses.

b. The reading strategies questionnaire
The reading strategies questionnaire was employed in this study to capture the participants' use of reading strategies while they were working on the cloze test, since the test was assumed to be a tool that could measure comprehension at both the local and global levels. It should be noted that this survey was aimed at getting information about the participants' perception of their strategy use during the cloze test-taking process; the types of strategies enabling test takers to correctly solve cloze problems were not its focus.

The questionnaire was in the form of a checklist. The categories of reading strategies serving as the basis for constructing the questionnaire were based on the studies of Kletzien (1991) and Lu (2006). The strategies proposed by Kletzien and Lu are congruent with the "while-reading strategies" posited by Paris et al. (1996, cited in Hudson, 2007: 107-108). The questionnaire underwent trial processes: Strategies 1-11, as shown in Table 7, were maintained throughout, and the strategy of "translating" was added at the participants' suggestion during the trials.

The split-half reliability estimation of the questionnaire, using the Spearman-Brown coefficient and the Guttman split-half coefficient, resulted in reliability values of .788 and .785 (first semester of the 2010 academic year). Thus, the checklist questionnaire had high internal consistency.
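For reference, the Spearman-Brown correction prophesies full-test reliability from the correlation r between the two test halves. The illustrative value of r below is back-computed for the example and is not a figure reported in the study:

```latex
\rho_{SB} \;=\; \frac{2r}{1 + r}\,; \qquad r = .65 \;\Rightarrow\; \rho_{SB} = \frac{1.30}{1.65} \approx .79
```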

3. Research procedures
All participants took the rational deletion cloze test in a single administration. The time allocation for each cloze text was 30 minutes. The two cloze texts were distributed to the participants in a counterbalanced manner to avoid practice effects. The checklist questionnaire was attached to each cloze text (narrative and expository) to elicit the participants' use of reading strategies while solving the cloze items.

Data Analysis
The responses to the cloze test were scored using the acceptable-answer scoring key. Alternatives that are semantically acceptable, with minor spelling mistakes, were given 2 points. Alternatives of the following kinds were given 1 point: those in which the word choice is not appropriate but reflects the test taker's understanding of the story; those that are not grammatically correct but maintain the meaning of the slot; those that are not syntactically acceptable but maintain the meaning of the slot; and those that violate the instructions by inserting more than one word but maintain the meaning of the slot.
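A minimal sketch of how this two-tier rubric could be encoded is given below, assuming the acceptable-answer key has already been compiled from the trials; the key contents, item number, and helper names are hypothetical.

```python
# Hypothetical acceptable-answer key for one cloze item: full-credit (2-point)
# alternatives and partial-credit (1-point) alternatives, per the rubric above.
KEY = {
    16: {"full": {"you"}, "partial": {"u", "you all"}},
}

def score_item(item_no, response, key=KEY):
    """Return 2, 1, or 0 points for a response:
    2 = semantically acceptable (spelling variants listed in the 'full' set);
    1 = meaning of the slot maintained despite word-choice, grammar, syntax,
        or one-word-limit violations (listed in the 'partial' set);
    0 = otherwise."""
    r = response.strip().lower()
    if r in key[item_no]["full"]:
        return 2
    if r in key[item_no]["partial"]:
        return 1
    return 0

print(score_item(16, "You"), score_item(16, "you all"), score_item(16, "we"))
# -> 2 1 0
```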

The present study employed Cronbach's alpha as the estimate of reliability. This is because there was more than one possible score for each cloze item, due to the use of the semantically acceptable scoring method. The reliability value (Cronbach's alpha) of the narrative cloze text was 0.84, and that of the expository cloze text was 0.78.
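Cronbach's alpha can be computed directly from a persons-by-items score matrix using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below applies it to a small hypothetical matrix of 0/1/2-point scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (persons x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1/2-point scores for 5 test takers on 4 cloze items:
demo = [[2, 1, 2, 2], [1, 1, 0, 1], [2, 2, 2, 1], [0, 1, 0, 0], [2, 2, 1, 2]]
print(round(cronbach_alpha(demo), 3))
```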

3.1 Effects of reading ability levels and text types on rational deletion cloze test performance
A two-way ANOVA with replication was carried out to address the first two research questions. A two-way ANOVA with replication is used when the same subjects appear in two or more conditions (Arther, 2009). In this study, text type was the within-subject variable: each subject was assessed in two conditions, the narrative cloze and the expository cloze. The two-way ANOVA with replication was done in Excel.
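Table 3's within-groups df of 342 indicates that the 348 scores (174 participants x 2 texts) were analyzed as a fully crossed two-factor design, as Excel's two-factor ANOVA with replication does. A minimal sketch reproducing that structure outside Excel, using statsmodels, follows; the data frame and all of its values are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per participant per cloze text.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ability": np.repeat(["low", "average", "high"], 58 * 2),
    "text": ["narrative", "expository"] * 174,
    "score": rng.normal(20, 7, 348),
})

# Two-way ANOVA with replication: main effects of ability and text type
# plus their interaction, mirroring the layout of Table 3.
model = ols("score ~ C(ability) * C(text)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```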

In analyzing the effect of the reading ability levels on students’ rational cloze test performance, a one-way independent ANOVA and the post hoc tests were used.

The effect sizes of the two ANOVA analyses, on narrative and expository cloze text performance, were calculated. Omega squared (ω2) was used to estimate the effect sizes for the one-way single-factor ANOVA analyses comparing the performance of the three reading ability groups on the two cloze texts, narrative and expository (Field, 2009: 390).
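For reference, omega squared for a one-way ANOVA is commonly computed as below (Field, 2009); plugging in the expository values later reported in Table 5 reproduces the corresponding effect size:

```latex
\omega^2 \;=\; \frac{SS_{\text{between}} - df_{\text{between}}\,MS_{\text{within}}}{SS_{\text{total}} + MS_{\text{within}}}
\;=\; \frac{2241.701 - 2(43.225)}{9633.201 + 43.225} \;\approx\; .22
```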

3.2 Effects of text types on rational deletion cloze test performance
For the effect of text types on rational cloze test performance, a dependent t-test was used to compare the differences between the scores gained from the narrative and the expository cloze texts.

The Pearson correlation coefficient effect size r was used to estimate the effect size for the dependent t-test analysis of the difference between the two text types, since this estimate is widely used with the t-statistic (Field, 2009: 332).
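A paired t-test and one common t-based effect size estimate, r = sqrt(t^2 / (t^2 + df)) (Field, 2009), could be obtained as sketched below. The score arrays here are invented, so the printed values will not match the study's reported statistics.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: each participant's narrative and expository totals.
rng = np.random.default_rng(2)
narrative = rng.normal(22.8, 8.5, 174)
expository = narrative - rng.normal(3.5, 2.0, 174)

t, p = stats.ttest_rel(narrative, expository)   # dependent (paired) t-test
df = len(narrative) - 1
r = np.sqrt(t**2 / (t**2 + df))                 # t-based effect size estimate
print(f"t({df}) = {t:.2f}, p = {p:.3f}, r = {r:.2f}")
```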

3.3 The use of reading strategies
Responses to the questionnaire on the use of each strategy were counted. Frequency counts of each strategy used by the different groups of readers were then calculated.

Kruskal-Wallis tests were used to find out whether there were differences in the use of reading strategies by students with different reading ability levels in doing the rational deletion cloze test comprising two different text types. Mann-Whitney tests, which are the post hoc procedures for the Kruskal-Wallis tests, were then used to test differences in the use of reading strategies in all different combinations of the reading ability groups.
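Both nonparametric tests are available in scipy; a minimal sketch with hypothetical per-participant strategy counts follows. The Bonferroni correction shown here anticipates the follow-up pairwise comparisons reported later in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of strategies reported by each participant, per group.
rng = np.random.default_rng(3)
high = rng.integers(6, 13, 58)
average = rng.integers(5, 11, 58)
low = rng.integers(4, 10, 58)

h, p = stats.kruskal(high, average, low)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Post hoc pairwise Mann-Whitney tests with a Bonferroni correction.
pairs = {"high/average": (high, average),
         "high/low": (high, low),
         "average/low": (average, low)}
for name, (a, b) in pairs.items():
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U = {u:.1f}, corrected p = {min(1.0, p * len(pairs)):.4f}")
```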

The effect sizes for the differences among all pairs compared were calculated using the Pearson correlation coefficient effect size r (Field, 2009: 570).

4. Results
All the statistical analyses were performed using SPSS version 13.0. The results are presented in accordance with the research questions.

Descriptive statistics
The descriptive statistics of the rational cloze test are shown in Table 2 below. The mean scores of the two cloze texts indicate that the participants tended to perform better on the narrative cloze than on the expository cloze. The reliability of each cloze text was regarded as high.

Table 2 Means, standard deviations, and reliability values of the two cloze texts

                    Mean     Variance   Std. D   Cronbach's Alpha   No. of items
Narrative cloze     22.776   71.817     8.474    .841               20
Expository cloze    19.419   54.106     7.356    .778               20

4.1 Effects of reading ability levels and text types on rational deletion cloze test performance
A two-way ANOVA, as shown in Table 3, revealed that both reading ability level and text type had significant effects on rational cloze test performance.

Table 3 The main effects and the interaction effect of reading ability level and text type

Source of Variation     SS          df    MS          F           p-value   F crit
Reading ability level    5371.195     2   2685.598    55.51134*   0.001     3.022127
Text type                1051.796     1   1051.796    21.74064*   0.001     3.868792
Interaction                72.16092    2     36.08046   0.745784  0.48      3.022127
Within                  16545.71     342     48.37926
Total                   23040.86     347

* p< .05


Since the performances of the three reading ability groups on the rational cloze test were significantly different, a single-factor ANOVA was then run for each cloze text to find out which ability levels differed. It was found that there were significant differences among the three reading ability groups in their performance on each cloze text.

The effect sizes of the effect of reading ability levels on rational deletion cloze test performance were calculated using the omega squared (ω2) equation. The effects of reading ability levels on the narrative and the expository cloze were large, ω2 = .26 and .22 respectively, which represents a substantial finding.

Table 4 The main effect of reading ability levels on narrative cloze performance

                  Sum of Squares    df    Mean Square   F         Sig.
Between Groups      3201.655          2   1600.828      29.903*   .001
Within Groups       9154.207        171     53.533
Total              12355.862        173

* p< .05

Table 5 The main effect of reading ability levels on expository cloze performance

                  Sum of Squares    df    Mean Square   F         Sig.
Between Groups      2241.701          2   1120.851      25.931*   .000
Within Groups       7391.500        171     43.225
Total               9633.201        173

* p< .05

Paired comparisons were then conducted. It was found that the high reading ability group scored significantly higher than the average and the low reading ability groups (p< .05) on both cloze texts.

4.2 Effects of text types on rational deletion cloze test performance
As shown earlier, the initial results from the two-way ANOVA with replication indicated that the two text types, narrative and expository, had significant effects on the cloze test performance of the three reading ability groups. At this stage, the dependent t-test, or paired samples t-test, was used to compare the differences between the scores gained from the narrative and the expository cloze texts. Table 6 shows the details of the standard deviations and the standard error means of the two cloze texts. On average, the participants had higher scores on the narrative cloze text (M = 22.76, SE = .64) than on the expository cloze text (M = 19.28, SE = .57). From the table, it is evident that the average scores on the two text types were significantly different, t = 23.46, p< .05. The effect size of the effect of text types on cloze test performance was calculated using the Pearson correlation coefficient (r) estimate. This resulted in a large effect size, r = 0.46, which suggests a substantial finding.

Table 6 Comparison of narrative and expository cloze test performance

Pair 1: Narrative cloze – Expository cloze
  Mean difference: 3.477   Std. Deviation: 1.955   Std. Error Mean: .1482
  95% Confidence Interval of the Difference: 3.184 (Lower) to 3.769 (Upper)
  t = 23.46*, df = 173, Sig. (2-tailed) = .000

* p< .05

4.3 The use of reading strategies
The results from the Kruskal-Wallis H tests, as shown in Table 7, revealed that the use of most of the reading strategies differed significantly across the three reading ability groups. All participants reported the use of "reading the whole cloze passage before working on the blanks" and "using context to restore the blanks." The strategies of "making inferences" and "using main idea" were reported the least, and none of the participants reported the use of the "using main idea" strategy in their narrative cloze performance.

Table 7 The percentages and the H values of reading strategies used on the narrative and expository cloze (in each cell, the data of the narrative cloze is presented on the first line, and that of the expository cloze on the second line)

Strategies                                   High ability      Average ability   Low ability       X2
1. reading the whole cloze passage           58 (100%)         58 (100%)         58 (100%)         .000
   before working on the blanks              58 (100%)         58 (100%)         58 (100%)         .000
2. skipping unknown words while              30 (51.7%)        50 (86.2%)        52 (89.7%)        131.00*
   reading the cloze passage                 41 (70.6%)        50 (86.2%)        52 (89.7%)        142.00*
3. using sentence structures                 51 (87.9%)        32 (55.2%)        28 (48.3%)        110.00*
                                             51 (87.9%)        32 (55.2%)        28 (48.3%)        110.00*
4. using rhetorical patterns of              40 (69%)          32 (55.2%)        28 (48.3%)        99.00*
   organization                              51 (87.9%)        32 (55.2%)        28 (48.3%)        110.00*
5. focusing on vocabulary                    55 (94.8%)        33 (56.9%)        33 (56.9%)        120.00*
                                             55 (94.8%)        33 (56.9%)        33 (56.9%)        120.00*
6. using context to restore the              58 (100%)         58 (100%)         58 (100%)         .000
   cloze blanks                              58 (100%)         58 (100%)         58 (100%)         .000
7. looking for key words and phrases         58 (100%)         42 (72.4%)        32 (55.2%)        131.00*
                                             58 (100%)         42 (72.4%)        32 (55.2%)        131.00*
8. using punctuation                         58 (100%)         31 (51.7%)        28 (48.3%)        116.00*
                                             58 (100%)         31 (51.7%)        28 (48.3%)        116.00*
9. making inferences                         33 (56.9%)        31 (53.4%)        21 (36.2%)        84.00*
                                             33 (56.9%)        31 (53.4%)        21 (36.2%)        84.00*
10. using main idea                          -                 -                 -                 -
                                             38 (65.5%)        18 (31%)          -                 55.00*
11. using prior or world knowledge           54 (93.1%)        55 (94.8%)        46 (79.3%)        154.00*
                                             54 (93.1%)        55 (94.8%)        46 (79.3%)        154.00*
12. translating                              50 (86.2%)        58 (100%)         58 (100%)         165.00*
                                             50 (86.2%)        58 (100%)         58 (100%)         165.00*
Total                                        545 (78.30%)      480 (68.97%)      442 (63.51%)      1466.00*
                                             605 (86.93%)      498 (71.55%)      442 (63.51%)      1544.00*

* p< .05

Mann-Whitney tests employing a Bonferroni correction were used to follow up this finding. It appeared that the high reading ability group's use of reading strategies on both cloze texts was significantly different from that of the average and the low ability groups. Table 8 shows the results of the paired comparisons among all ability groups on the narrative and expository texts.

Table 8 Paired comparisons

Strategy use on narrative cloze:
                          High/Average    High/Low     Average/Low
Mann-Whitney U            106389          92486.5      98390
Wilcoxon W                221829          190389.5     196293
Z                         -5.339*         -6.515*      -1.960
Asymp. Sig. (2-tailed)    .000            .000         .050

Strategy use on expository cloze:
                          High/Average    High/Low     Average/Low
Mann-Whitney U            116187          104510.5     106346
Wilcoxon W                24043           202413.5     204249
Z                         -6.726*         -6.233*      -.918
Asymp. Sig. (2-tailed)    .000            .000         .358

* p< 0.01. Grouping variable: ability levels

The effect sizes of the differences in the paired comparisons were calculated using the Pearson correlation coefficient (r) estimate. Large effect sizes were obtained, suggesting that the high reading ability group used considerably more reading strategies than the other groups on the narrative cloze test, r = .5 and .6, and on the expository cloze test, r = .62 and .58.
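These values are consistent with the conversion r = Z/√N recommended by Field (2009) for the Mann-Whitney test, assuming N here is the combined size of the two groups compared (58 + 58 = 116). For example, for the narrative high/average and high/low comparisons:

```latex
r \;=\; \frac{|Z|}{\sqrt{N}}, \qquad
\frac{5.339}{\sqrt{116}} \approx .50, \qquad
\frac{6.515}{\sqrt{116}} \approx .60
```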

5. Discussion
In this section, each of the research questions is discussed in turn.

Q 1: Do students' different reading ability levels have a significant effect on their rational deletion cloze test performance?
Regarding the first research question, the finding supports McKamey (2006) in that reading ability contributes to performance on the rational deletion cloze test. The work of Yamashita (2003) is also confirmed, in that the rational deletion cloze test differentiates well between learners of different reading abilities.

The cloze item classification suggested by Bachman (1985) appears to provide appropriate criteria for item deletion, since different types of items require different types of information. The finding also concurs with Brown's (2002) point that each cloze item may function differently for different language groups, depending on their proficiency level. Moreover, this finding lends support to previous studies' assertions that rational deletion cloze tests can measure both sentential and text-level comprehension (Alderson, 1979, 2000; Levenston, Nir and Blum-Kulka, 1984, cited in Storey, 1997; Read, 2000; Yamashita, 2003).

Finally, successful cloze test performance requires a number of different language skills (e.g. grammatical knowledge, vocabulary knowledge, and reading comprehension): students are required to read across sentence boundaries and, in certain cases, to relate what they have read to their world knowledge. In this study, learners with high reading ability tended to possess more of these language skills than the average and low ability groups, which resulted in better cloze test performance.

Q 2: Do different text types have different effects on rational deletion cloze test performance?
The finding supports the work of Wu (1994) in that text types have an effect on students' cloze performance and that narrative texts are more sensitive to intersentential comprehension. The finding also confirms the view that narrative text appears to be easier to understand and monitor than expository text (Alderson, 2000: 64; Lipson and Wixson, 2003: 181; Koda, 2005: 155; Zabrucky and Ratner, 1992, cited in Carnine et al., 2004: 336).

The data seems to support Koda (2005: 155), who noted that “narrative discourse appeals to readers’ shared knowledge of the world.” Results from informal interviews showed that the story of “The Ant and the Grasshopper” (the narrative cloze text) was familiar to the participants. The participants agreed that the background knowledge of the story facilitated their text comprehension. They seemed to have little difficulty following the sequence of events narrated in the story.

The difficulty of the expository text used in this study appears to stem from the following factors. In normal reading, well-presented text enables readers to identify the relevant textual information, including main ideas and relationships between ideas, which is central to comprehension (Dickson, Simmons, and Kameenui, n.d.: 8); important components of well-presented text include the location of main idea sentences and signal words. In cloze testing, however, the text is presented differently, with certain words deleted, some of which may be the signal words and the cues to the main idea. Good readers are generally aware of the physical patterns of text organization even when the text is altered, as in a cloze text (Grabe, 2004: 52). In this study, the high reading ability group seemed to be able to follow the story related in the expository cloze text, which, as mentioned earlier, was presented in the compare/contrast text structure.

Another factor in the perceived difficulty of the expository cloze may be the perception of vocabulary difficulty. In the informal interviews, the participants, especially the average and the low ability students, observed that the vocabulary used in the expository cloze made the text more difficult than the narrative cloze. For them, the expository text was a formal report on the life of old people, and more difficult words were found in the report than in the fable. Kletzien (1991) posited that a subject's perception of text difficulty may affect his or her test performance; this may account for the low performance of the average and low reading ability groups. Moreover, research has shown that vocabulary knowledge plays an important role in L2 reading comprehension (Koda, 2005: 48; Zhang and Annual, 2008: 1), and that vocabulary knowledge correlates more highly with reading comprehension than other factors do (Koda, 2005: 49). Thus, inadequate vocabulary knowledge in the average and low ability groups further contributed to their poorer cloze test performance.

Q 3: Are there differences in the use of reading strategies by students with different reading ability levels in doing the rational deletion cloze test comprising two different text types?
Regarding the third research question, the overall employment of the 12 strategies by the three reading ability groups, as shown in Table 7, revealed no difference in strategy use between the narrative and the expository cloze. The finding shows that the high reading ability group used reading strategies significantly more frequently in their cloze test-taking processes than the average and low reading ability groups. This supports the work of Kletzien (1991) and Xiaoying and Xiangdong (2008). There seems to be a positive relationship between students' reading ability and the frequency of their use of reading strategies, as suggested by various researchers (e.g. Paris and Meyer, 1981; Block, 1986; Upton, 1997). This may be interpreted as Xiaoying and Xiangdong (2008) have put it: "Proficient readers are more active in their attempt to comprehend than less-proficient ones."

Since the frequency of reading strategy use reported by the students was almost the same for the narrative and the expository cloze texts (see Table 7), the discussion in this part focuses on the overall employment of reading strategies regardless of text type. The most frequently used strategies were “reading the whole cloze passage before working on the blanks” and “using the context to restore the blanks.” Regarding the former, the finding seems to contrast with previous studies (Emanuel, 1982, and Hashkes and Koffman, 1982, cited in Cohen, 1998: 104), which found that only a quarter of non-native respondents read the entire EFL cloze passage before responding. In this study, however, all reading ability groups reported reading the whole cloze passage before working on the blanks. One reason why all participants in the present study read the whole text before completing the cloze test may lie in their unfamiliarity with the constructed-response cloze task. Results from the informal interviews revealed that the participants needed to read the whole cloze texts in order to get an idea of what they were about and how to tackle the task. Another explanation for the contrast with Emanuel (1982) and Hashkes and Koffman (1982), as cited in Cohen (1998), may lie in differences between the samples. The participants in this study were EFL university students, while the subjects in those studies were schoolchildren; students at different levels of education may approach cloze tasks in different ways. A possible explanation for the use of “using context to restore the blanks” by all participants may lie in redundancy. As the participants went through the cloze texts, they may have noticed the language repetition in the texts (see the following example).


You didn’t work this summer. You sang and danced. (16) _____ didn’t follow my advice. You called me foolish.

According to Fotos (1991), redundancy serves as a guiding principle on which test takers can rely to restore gaps and make inferences about the ideas in a cloze passage.

It was not surprising that “translating” and “using prior world knowledge” were reported as the second and third most-used reading strategies. It is possible that whenever text comprehension was obstructed by language problems, the students, especially the average and low reading ability students, resorted to the “translating” strategy to resolve the problem, as suggested by Wirotanan (2002). Frequent use of the “translating” strategy, however, may often have led to unsuccessful restoration of the cloze gaps. Concerning the use of prior world knowledge, Grabe (2004: 50) notes that prior world knowledge facilitates readers’ text comprehension. In addition, background knowledge, according to McCormick (1992, cited in Urquhart and Weir, 1998: 84), is more important in understanding expository texts than narrative texts. However, the participants in this study seemed to use their prior world knowledge more effectively with the narrative text than with the expository text. Their familiarity with the fable presented in the narrative, despite the absence of several signaling cues, may have facilitated their construction of the text meaning. In the case of the expository text, which is about old people in present-day society, the participants were familiar with the situation of the elderly, but the text itself may not have been as engaging. With the absence of discourse markers and several content words, the high reading ability participants tended to recognize the compare/contrast text organization better than the other two ability groups (Grabe, 2004: 52).

The students in the average and low reading ability groups reported more frequent use of the strategy of “skipping unknown words while reading” than the students in the high reading ability group. This is in contrast to Hosenfeld (1977, cited in Hudson, 2007) and Carrell (1989), who claimed that low-proficiency readers view words as “equal in terms of their contribution to the phrase meaning,” while high-proficiency readers read in “broad phrases” and skip words viewed as unimportant to the overall phrase meaning. It is possible that the vocabulary knowledge of the average and low reading ability participants was inadequate: whatever words they did not understand, they simply skipped. That the high reading ability participants did not skip unknown words while reading may result from the test task itself, in which the deleted words had to be restored. The participants may have needed to read every word in order to construct the text meaning required to restore the blanks. This may be why all the high reading ability participants reported using the strategies of “focusing on vocabulary” and “looking for key words and phrases.”

Linguistic knowledge may have an effect on the use of the strategies of “using sentence structures,” “using rhetorical patterns of organization,” and “using punctuation,” since these strategies require linguistic knowledge on the part of the readers (Anderson, 1991). While the majority of the high reading ability group reported using these three reading strategies, only about 50 percent of the average group and less than 50 percent of the low group employed them. It is possible that the average and low reading ability groups were equipped with insufficient linguistic knowledge.

Inferencing skills have been suggested as one of the important factors in successful reading comprehension (Alderson, 2000: 164) and cloze test performance (Stansfield and Hansen, 1983, cited in Fotos, 1991: 319). In this study, however, only a few students reported using the strategy of “making inferences,” which was quite a surprising finding. In the informal interviews, participants of all ability groups reported inferring the missing words from surrounding words and context, and they seemed to make inferences about the characters in the narrative text. The reason that only a few of them reported using this strategy may lie in the Thai wording of the questionnaire, which may have led them to think of making inferences as something more complex than inferring missing words and small incidents throughout the story. This point is worth further investigation.

None of the students reported using the strategy of “using main idea” in their work on the narrative cloze, and none of the low reading ability group reported using this strategy in their work on the expository cloze text. It may be, as Bauman (1986, cited in Hudson, 2007: 109) has put it, that identifying the main idea of a text depends on text factors and on students’ reading ability. Bauman also suggests that when the main idea of a text is explicitly stated and presented early in the paragraph, it is more easily identifiable. That none of the participants reported use of the “using main idea” strategy while working on the narrative cloze text may reflect their comprehension of narrative text structure, which, in general, does not require a main idea sentence. However, when reading the expository text, in which the main idea sentence appears early in the text, the low ability group apparently had problems identifying the main idea: it seems that none of them were aware of the main idea sentence in the first paragraph. Not recognizing the main idea led to confusion in interpreting the subsequent parts of the text. This may be another reason why the average scores of the low ability group on the expository cloze were lower than those earned on the narrative cloze.

As mentioned earlier, research on the use of reading strategies in cloze testing is rare; so far, only the works of Kletzien (1991) and Lu (2006) have been found by the researchers. While the subjects in Kletzien’s study were fifth- to seventh-grade native speakers of English, Lu’s subjects were graduate students of unknown language ability. Thus, the findings on the use of reading strategies in cloze testing situations in this study can be compared with those studies only to a limited degree. Nevertheless, these findings shed some light on the reading and cloze test-taking procedures of EFL university students with different reading ability levels, which may be useful for further studies.

5. Conclusion

The rational deletion cloze test in this study, making use of two different text types, was found to be an appropriate measure of reading comprehension. It was designed to have different types of cloze items that could measure different levels of comprehension, ranging from clause-level and text-level information to the incorporation of subjects’ world knowledge. The findings support Chapelle and Abraham’s (1990: 125) point that, despite the inconsistency of rational deletion cloze tests in terms of the characteristics of responses, this type of cloze “should have the advantage of allowing more consistent and controllable results to the extent that distinct item types can be understood and identified.” The rational deletion cloze test used in this study was also found to differentiate well among good, average, and low reading ability students.

The findings regarding the use of reading strategies in taking cloze tests are interesting in that they help to establish that the rational deletion cloze test used in this study measures reading comprehension, since students need to employ both local and global reading skills. However, it should be noted that reading strategies alone cannot make students successful in their cloze test performance. As Anderson (1991) has pointed out, successful reading is not directly a matter of the number of strategies used, but depends on how a strategy is used and how different strategies are combined in order to comprehend a given reading task. The findings of this study suggest that the high reading ability students knew better than the average and low ability groups how to employ and orchestrate reading strategies in order to make sense of the cloze texts.

Finally, it should be pointed out that the rational deletion cloze test used in this study elicited processes that are not directly relevant to reading comprehension, for it required production processes in which the students had to construct their own responses, activating several types of language knowledge, including knowledge of grammar and vocabulary as well as reading processes. However, even though the majority of the students found the constructed-response format unfamiliar, and it may not be directly relevant to measuring reading ability, the cloze test used in this study was perceived by the students to have had a positive impact on them. The students felt that if they were trained to do this type of cloze test, it would help them improve their English language competence.

Since the study was of restricted scope, the results should not be overgeneralized. There are several limitations that should be addressed in future research.

1. The present study included only two cloze texts, designed to represent the narrative and expository text types respectively. The narrative text was in fable form, and the expository text was presented in a compare/contrast rhetorical pattern of organization. Future studies should include a greater variety of each text type.

2. The cloze test performance of the participants on the two text types was assessed in one test administration, with each cloze text accompanied by a questionnaire surveying the subjects’ use of reading strategies in taking the test. This may have caused fatigue in the participants, which may in turn have led to similar responses on both questionnaires. It is recommended that the cloze test for each text type, together with the survey of subjects’ reading strategy use for that text type, be administered separately.

3. The texts used in the present study were designed to help the lower reading ability students answer correctly by drawing on their knowledge of the world, because the test format was expected to be unfamiliar. More difficult texts should be used in future studies with EFL university students, and the effect of world knowledge on the process of taking cloze tests should be further investigated.

4. The use of reading strategies in taking cloze tests merits further investigation. The findings of this study have shed some light on the reading and cloze test-taking procedures of EFL university students with different reading ability levels, but more studies are needed to explore exactly what strategies are used in taking cloze tests.

Acknowledgements

I would like to thank the anonymous reviewer of rEFLections for valuable comments. I am also indebted to Associate Professor Sonthida Keyuravong and Assistant Professor Wareesiri Singhasiri for their support, and to Tony Criswell for editing the paper. The positions taken and any errors that are found in this article are solely my responsibility.

References

Abraham, R. G. and Chapelle, C. A. (1992). The meaning of cloze test scores: An item difficulty perspective. The Modern Language Journal, 76(4), 468-479.

Alavi, S. M. (2005). On the adequacy of verbal protocols in examining an underlying construct of a test. Studies in Educational Evaluation, 31, 1-26.

Alderson, J. C. (1979). The cloze procedure and proficiency in English as a foreign language. TESOL Quarterly, 13(2), 219-227.

Alderson, J. C. (1980). Native and nonnative speaker performance on cloze tests. Language Learning, 30, 59-76.

Alderson, J. C. (1983). The cloze procedure and proficiency in English as a foreign language. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 205-217). Rowley, MA: Newbury House Publishers.

Alderson, J. C. (2000). Assessing reading. Cambridge, England: Cambridge University Press.

Anderson, N. J. (1991). Individual differences in strategy use in second language reading and testing. Modern Language Journal, 75, 460-472.

Bachman, L. F. (1985). Performance on cloze tests with fixed ratio and rational deletions. TESOL Quarterly, 19(3), 535-556.

Bachman, L. (1995). Fundamental considerations in language testing. Oxford, England: Oxford University Press.

Bensoussan, M. & Ramraz, R. (1984). Testing EFL reading comprehension using a multiple-choice rational cloze. The Modern Language Journal, 68(3), 230-239.

Block, E. (1986). The comprehension strategies of second language readers. TESOL Quarterly, 20(3), 463-494.

Brantmeier, C. (2005). Effects of reader’s knowledge, text type and test type on L1 and L2 reading comprehension in Spanish. The Modern Language Journal, 89(1), 37-53.

Brown, J. D. (1980). Relative merits of four methods for scoring cloze tests. The Modern Language Journal, 62(2), 311-317.

Brown, J. D. (2002). Do cloze tests work? Or, is it just an illusion? Second Language Studies, 21(1), 79-125.

Carnine, D. W., Silbert, J., Kame’enui, E. J. and Tarver, S. G. (2004). Direct instruction reading (4th ed.). Upper Saddle River, NJ: Pearson Education.

Carrell, P. L. (1989). Metacognitive awareness and second language reading. Modern Language Journal, 73, 121-134.

Chapelle, C. A. and Abraham, R. G. (1990). Cloze method: What difference does it make? Language Testing, 7(2), 121-146.

Chavez-Oller, M. A., Chihara, T., Weaver, K. A. & Oller, J. W., Jr. (1985). When are cloze items sensitive to constraints across sentences? Language Learning, 35(2), 181-206.

Child, D. (2004). Text readability scores. [Online]. Available from: http://www.addedbytes.com/lab/readability-score/. [2007, October 4].

Cohen, A. D. (1998). Strategies in learning and using a second language. London, England: Longman.

Day, R. R., Swan, J. & Masayo, Y. (1999). Journeys: Reading 3. Singapore: Prentice Hall ELT.

Dickson, S. V., Simmons, D. C. and Kameenui, E. J. (n.d.). Text organization and its relation to reading comprehension: A synthesis of the research. [Online]. Available from: http://idea.uoregon.edu/~ncite/documents/techrep/tech17.html. [2005, August 12].

Dos Santos, M. (2007). My world 5. Bangkok, Thailand: Thai Watana Panich/McGraw-Hill.

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Thousand Oaks, CA: Sage Publications.

Fotos, S. S. (1991). The cloze test as an integrative measure of EFL proficiency: A substitute for essays on college entrance examinations? Language Learning, 41(3), 313-336.

Francis, N. (1999). Applications of cloze procedure to reading assessment in special circumstances of literacy development. Reading Horizons, 40(1), 23-46. [Online]. Available from: http://202.28.92.194/hwwmds/detail.nsp. [2006, April 1].

Ghaith, G. M. & Harkouss, S. A. (2003). Role of text structure awareness in the recall of expository discourse. Foreign Language Annals, 36(1), 86-96.

Grabe, W. (2004). Research on teaching reading. Annual Review of Applied Linguistics, 24, 44-69.

Hale, G. A., Stansfield, C. W., Rock, D. A., Hicks, M. M., Butler, F. A. & Oller, J. W., Jr. (1989). The relation of multiple-choice cloze items to the Test of English as a Foreign Language. Language Testing, 6(1), 47-76.

Hinofotis, F. B. (1980). Cloze as an alternative method of ESL placement and proficiency testing. In J. W. Oller & K. Perkins (Eds.), Research in language testing (pp. 121-128). Rowley, MA: Newbury House Publishers.

Hudson, T. (2007). Teaching second language reading. Oxford, England: Oxford University Press.

Irvine, P., Atai, P. & Oller, J. W., Jr. (1974). Cloze, dictation, and the Test of English as a Foreign Language. Language Learning, 24(2), 245-252.

Jonz, J. (1990). Another turn in the conversation: What does cloze measure? TESOL Quarterly, 24(1), 61-83.

Keshavarz, M. H. & Salimi, H. (2007). Collocational competence and cloze test performance: A study of Iranian EFL learners. International Journal of Applied Linguistics, 17(1), 81-92.

Klein-Braley, C. (1997). C-Tests in the context of reduced redundancy testing: An appraisal. Language Testing, 14(1), 47-84.

Kletzien, S. B. (1991). Strategy use by good and poor comprehenders reading expository text of different levels. Reading Research Quarterly, 26(1), 67-86.

Koda, K. (2005). Insights into second language reading: A cross-linguistic approach. Cambridge, England: Cambridge University Press.

Lipson, M. Y. & Wixson, K. K. (2003). Assessment and instruction of reading and writing difficulty: An interactive approach (3rd ed.). Boston, MA: Pearson Education.

Lu, G. (2006). Cloze tests and reading strategies in English language teaching in China. Master’s thesis, University of the Western Cape.

McKamey, T. (2006). Getting closure on cloze: A validation study of the ‘rational deletion’ method. Second Language Studies, 24(2), 114-164.

McKenna, M. C. & Layton, K. (1990). Concurrent validity of cloze as a measure of intersentential comprehension. Journal of Educational Psychology, 82(2), 372-377.

Meyer, B. J. F. & Freedle, R. O. (1984). Effects of discourse type on recall. American Educational Research Journal, 21(1), 121-143.

Oller, J. W. (1972). Scoring methods and difficulty levels for cloze tests of proficiency in English as a second language. Modern Language Journal, 56, 151-157.

Paris, S. G. & Meyer, M. (1981). Comprehension monitoring, memory and study strategies of good and poor readers. Journal of Reading Behavior, 13, 5-22.

Paris, S. G. & Jacobs, J. E. (1984). The benefits of instruction for children’s reading awareness and comprehension skills. Child Development, 55(6), 2083-2093.

Raymond, P. (1988). Cloze procedure in the teaching of reading. TESL Canada Journal, 6(1), 91-100.

Read, J. (2000). Assessing vocabulary. Cambridge, England: Cambridge University Press.

Saito, Y. (2003). Investigating the construct validity of the cloze section in the Examination for the Certificate of Proficiency in English. Spaan Fellow Working Papers in Second or Foreign Language Assessment, 1(April). [Online]. Available from: https://legacyweb.lsa.umich.edu/UMICH/eli/HomeResearch/Spaan%20Fellowship/pdfs/spaan_working_papers_v1_FULL.pdf#page=43. [2007, September 17].

Sasaki, M. (2000). Effects of cultural schemata on students’ test-taking processes for cloze tests: A multiple data source approach. Language Testing, 17(1), 85-114.

Storey, P. (1997). Examining the test-taking process: A cognitive perspective on the discourse cloze test. Language Testing, 14(2), 214-231.

Stubbs, J. B. & Tucker, G. R. (1974). The cloze test as a measure of English proficiency. The Modern Language Journal, 58(5-6), 239-242.

Upton, T. A. (1997). First and second language use in reading comprehension strategies of Japanese ESL students. TESL-EJ, 3(1). [Online]. Available from: www.writing.berkeley.edu/TESL-EJ/ej09/a3.html. [2009, January 9].

Urquhart, A. and Weir, C. J. (1998). Reading in a second language: Process, product and practice. London, England: Longman.

Wirotanan, J. (2002). Reading strategies of university EFL Thai readers in reading Thai and English expository texts. Doctoral dissertation, University of Pittsburgh.

Wu, R. Jenn-Rong (1994). The sensitivity of the cloze procedure to discourse constraints in English as a foreign language. Doctoral dissertation, University of Kansas.

Xiaoying, Gao & Xiangdong, Gu (2008). An introspective study on test-taking process for banked cloze. CELEA Journal, 31(4). [Online]. Available from: http://www.celea.org.cn/teic/80/0810-3.pdf. [2010, January 6].

Yamashita, J. (2003). Processes of taking a gap-filling test: Comparison of skilled and less skilled EFL readers. Language Testing, 20(3), 267-293.

Zhang, L. J. and Annual, S. B. (2008). The role of vocabulary in reading comprehension: The case of secondary school students learning English in Singapore. RELC Journal, 39(1), 51-76. [Online]. Available from: http://rel.sagepub.com/cgi/content/abstract/39/1/51. [2010, March 8].

Authors:
Nantawan Senchantichai is a Ph.D. candidate in the English as an International Language Program, Chulalongkorn University. She has worked as an English teacher at Khon Kaen University.
[email protected]

Suphat Sukamolson, Ph.D., is an associate professor at Chulalongkorn University Language Institute. His research interests include language testing and evaluation.
[email protected]


Research Methodology

Why Do Articles Get Rejected by International Journals?

Richard Watson Todd
King Mongkut’s University of Technology Thonburi

Abstract

With increasing pressure for university staff and students to publish in international refereed journals, researchers will benefit from guidance on how decisions to accept or reject articles are made by these journals. This paper examines 28 reviewer reports in applied linguistics, and categorises the 115 individual criticisms made in two ways. First, the criticisms were categorised based on the article section that prompts the comment, and second, they were categorised based on the research quality criterion that the comment focuses on. There is no clear indication that any particular section or criterion is most likely to lead to an article being rejected. Rather, rejection typically appears to be based on an accumulation of comments of different types. Example comments linked to guidelines for writing articles are given to help novice researchers to produce articles less likely to be rejected.

1. Introduction

Universities are increasingly requiring their academic staff and postgraduate students to publish research in international refereed journals (Cheng, 2006; Huang, 2010). For many staff and students in applied linguistics, this is a major challenge which appears to have little chance of success, since many of the major journals have acceptance rates of 20% or lower (Egbert, 2007) and the staff and students are competing with experienced researchers for the few publication slots available. In this paper, I will examine some of the reasons why journals reject submissions in the hope that this will provide guidance for prospective authors.

Most of the existing advice for prospective authors of research articles is derived from genre analyses and thus focuses on the generic or linguistic features of articles (e.g. Swales, 2004; Swales and Feak, 2004). While such advice is undoubtedly useful for authors, articles are rarely rejected on generic or linguistic grounds (Jaroongkhongdach et al., 2012). Rather, most rejections are based on content or technical research issues (Gosden, 2003; Mungra and Webber, 2010).

To identify what sorts of content or research issues are most likely to lead to rejection, we need to examine reviewers’ comments. The typical process facing an article submitted to a reputable international journal starts with an in-house evaluation of the article to see if it is of sufficient quality to send to reviewers. While in-house rejection rates are reputed to be increasing (Zuengler and Carroll, 2010), this stage of the process is not open to examination. Those articles deemed to be of sufficient quality and suitability by the editors are sent to reviewers for comments and recommendations (typically, requiring minor revisions, requiring major revisions, or rejection). Those papers requiring revisions may go through this process two or three times as the drafts of the article change. Although still an “occluded genre” (Gosden, 2003, p. 87), reviewers’ comments are to a limited extent available for analysis. This paper examines 28 referee reports to attempt to identify points perceived as problematic in articles submitted for publication in international refereed journals.

2. Methodology

2.1 The data

The 28 reviewer reports examined in this paper are ones I have written for five different international journals over the last few years. With all the reports being from the same reviewer, there may be some limitations in the topics focused on, with some being under-represented and others over-represented, but the difficulties of gaining access to reviewer reports mean that obtaining a fully representative sample is impossible. Of the 28 reviews, three recommend minor revisions and thus the submissions are likely to be accepted, five recommend major revisions with the decision for acceptance dependent on the revised version, and 20 recommend rejection. These rates reflect the overall acceptance rates of the journals, suggesting that the reports are fairly typical. All of the reports are included in the analysis, since even the criticisms made on those articles likely to be accepted can shed light on the content and research issues underpinning journals’ decisions on articles. A typical report is 500-800 words long and consists of a general introductory paragraph which includes the recommendation, followed by 5-20 detailed criticisms identifying points needing revision. Some of these criticisms are minor (e.g. linguistic infelicities that suggest the need for better proofreading, such as “who did the dog barked at” and “to same students”) and would not be a cause for rejection, although they could stimulate an unconscious bias in the mind of the reviewer, who would then tend to view other aspects of the article more negatively. These minor criticisms are straightforward to correct and are not included in the analysis. In total, 115 points of criticism are analysed.

2.2 Data analysis

The 115 individual criticisms were first grouped to identify similar criticisms with different wordings. For instance, “The conclusion and discussion sections appear to simply repeat the findings without adding any insights or generalisations and thus still leave me wondering about the value of the research”, “A large part of the Discussion simply repeats the findings”, and “The discussion just presents a summary of the findings” were all summarised as Discussion is a repetition of the findings. This process resulted in 51 grouped criticisms.
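As an illustration only (the grouping in the study was done by hand, not by code), this step can be modelled as assigning each verbatim criticism a summary label and tallying identical labels. The sketch below uses the three criticisms quoted above; everything else about it is assumed.

```python
from collections import Counter

# Hypothetical sketch of the grouping step: each verbatim criticism is paired
# with a summary label, and identical labels form one grouped criticism.
labelled_criticisms = [
    ("The conclusion and discussion sections appear to simply repeat the findings...",
     "Discussion is a repetition of the findings"),
    ("A large part of the Discussion simply repeats the findings",
     "Discussion is a repetition of the findings"),
    ("The discussion just presents a summary of the findings",
     "Discussion is a repetition of the findings"),
]

# Tally how many individual criticisms fall under each grouped criticism.
group_sizes = Counter(label for _, label in labelled_criticisms)
print(group_sizes)
# Counter({'Discussion is a repetition of the findings': 3})
# Applied to all 115 criticisms, this kind of tally yields the 51 groups.
```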

These grouped criticisms were then categorised according to two sets of criteria:

1. Article section: introduction and literature review, purpose (e.g. research question), methodology, results, discussion, other (e.g. title, abstract).


These sections follow the standard format for research articles, and the categorisation involves identifying the section which is primary in prompting the criticism. All grouped criticisms were categorised into article section. Categorisation into article section may allow sections particularly likely to prompt criticisms, and thus those which are particularly problematic, to be identified. In addition to the six article sections, a further category of ‘overall suitability’ was created for those criticisms which concerned the whole article rather than being linked to a section.

2. Research quality criteria (based on Jaroongkhongdach et al., 2012): justification, clarity, coherence, appropriacy, awareness.

These criteria allow the criticisms to be categorised on the basis of their contents (rather than what prompts them, as in categorisation based on article section). Jaroongkhongdach et al. (2012, p. 197) define these five research quality criteria as follows:

• Justification: “reasoning provided for decisions made in research”
• Clarity: “the sufficiency of descriptions or explanations of a term/concept/procedure, and the style of writing that makes the term/concept/procedure easy for an intelligent general reader to understand”
• Coherence: “the logical relationships within a section or across sections in terms of contents or ideas”
• Appropriacy: “the match/compatibility between two or more potentially related components”
• Awareness: “the thoughtful concern of alternative views or of possible impacts of research decisions”

42 of the 51 grouped criticisms (accounting for 102 individual criticisms) could be categorised by research quality criteria. Although this categorisation does not provide full coverage of the criticisms, it may, like the categorisation into article sections, allow particularly problematic aspects of research to be identified. Criticisms not falling into these categories include Article is too long and Literature is dated.

The frequencies for each of these criteria were counted. Key grouped criticisms which appeared to be particularly important in decisions for rejecting articles (either because they appeared frequently across articles or because they were identified as the key issue leading to rejection) were identified. Details of the actual contents of the reviewer reports for these points are shown to provide a more in-depth perspective on reasons for rejecting articles.
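To make the two-way counting concrete, here is a hypothetical sketch (again, not the author’s actual procedure) that tallies grouped criticisms by article section and by research quality criterion and reports percentages of the kind shown in Tables 1 and 2. The records are illustrative, drawn from the categories named in the paper.

```python
from collections import Counter

# Hypothetical records: (grouped criticism, article section, quality criterion).
# A criterion of None marks criticisms not assignable to a quality criterion.
records = [
    ("Lack of details in methodology", "Methodology", "Clarity"),
    ("No link between literature and research",
     "Introduction and literature review", "Coherence"),
    ("Poor interpretations", "Results", "Appropriacy"),
    ("Article is too long", "Overall suitability", None),
]

by_section = Counter(section for _, section, _ in records)
by_criterion = Counter(crit for _, _, crit in records if crit is not None)

for section, n in by_section.items():
    print(f"{section}: {n} ({100 * n / len(records):.1f}%)")
for crit, n in by_criterion.items():
    print(f"{crit}: {n}")
```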

3. Results

3.1 Grouped criticisms by article section

Table 1 presents the frequencies of individual criticisms and grouped criticisms by article section to identify those sections which may be particularly likely to lead to a submission being rejected.


Those grouped criticisms consisting of three or more individual criticisms are shown to concretise the data (with number of individual criticisms in each group given in brackets).

Table 1 Grouped criticisms by article section

Article section | No. of grouped criticisms (N = 51) | No. of individual criticisms (N = 115) | Sample grouped criticisms
Introduction and literature review | 8 (15.7%) | 19 (16.5%) | No link between literature and research (7); List-like with no argumentation (3); Missing key references or aspects requiring discussion (3)
Purpose | 4 (7.8%) | 8 (7.0%) | No reason why research is useful (3); Lack of clarity in purpose (3)
Methodology | 15 (29.4%) | 31 (27.0%) | Lack of details (10); Claimed principles not followed (3); No reporting of rationales for decisions (3); Unclear foundation or sources of data in general (3)
Results | 9 (17.6%) | 23 (20.0%) | Poor interpretations (7); Inappropriate statistics (3); Very limited data presented (3)
Discussion | 8 (15.7%) | 22 (19.1%) | Dubious explanation of results or unsubstantiated claims (6); Discussion not based on findings or not related to purpose (4); Discussion is repetition of literature review (3); Discussion is repetition of findings (3)
Other (e.g. title) | 4 (7.8%) | 5 (4.3%) | —
Overall suitability | 3 (5.9%) | 7 (6.1%) | Inappropriate topic for journal (3); Article is too long (3)

From Table 1, we can see that the methodology section of articles is the one stimulating most criticism, with Lack of details concerning the methodology being the most frequent grouped criticism. However, the literature review, results and discussion sections also prompt criticism, suggesting that no single section of articles can be identified as the one most likely to lead to article rejection.


3.2 Grouped criticisms by research quality criteria

The five criteria of research quality suggested by Jaroongkhongdach et al. (2012) were also used to categorise the criticisms. Not all criticisms fell into these categories (those that did not are not considered in this section), and Table 2 presents the frequencies of criticisms by these criteria and the most frequent grouped criticisms.

Table 2 Grouped criticisms by research quality criteria

Research quality criterion | No. of grouped criticisms (N = 42) | No. of individual criticisms (N = 102) | Sample grouped criticisms
Justification | 7 (16.7%) | 13 (12.7%) | No reason why research is useful (3); Missing key references or aspects requiring discussion (3)
Clarity | 14 (33.3%) | 35 (34.3%) | Lack of details in methodology (10); Lack of clarity in purpose (3); Unclear foundation or sources of data in general (3); Very limited data presented (3)
Coherence | 9 (21.4%) | 19 (18.6%) | No link between literature and research (7); Claimed principles not followed (3)
Appropriacy | 7 (16.7%) | 22 (21.6%) | Poor interpretations (7); Discussion not based on findings or not related to purpose (4); Inappropriate statistics (3)
Awareness | 5 (11.9%) | 13 (12.7%) | Dubious explanation of results or unsubstantiated claims (6); No reporting of rationales for decisions (3)

From Table 2, clarity is the criterion generating the most criticisms, but all of the criteria occur frequently enough to suggest that no single criterion can be identified as the most likely cause for rejection.

3.3 Key criticisms

Tables 1 and 2 provide the frequencies of the various criticisms, but it is not clear whether these frequencies should be linked to causes for rejecting submissions. Indeed, the most frequent grouped criticism, namely Lack of details concerning the methodology, is relatively straightforward for authors to revise, since additional details can be added easily. This criticism appears to be linked to article rejection in only one case, where the lack of details was so pervasive that it was impossible to understand how the research had been conducted. Nevertheless, the criticism is still important, since a recommendation for rejecting an article may be due to the cumulative effects of numerous minor issues rather than a single main issue. Since missing methodological details are both frequently criticised and relatively straightforward to remedy, Lack of details concerning the methodology is a point prospective authors should attend to, reducing the probability of an article being rejected through an accumulation of minor criticisms. In contrast, some other criticisms are not frequent but are the main cause for rejection, such as Inappropriate topic for journal and Poor quality main instrument (neither of which is discussed further below, since they are self-evident). In this section, I will provide the verbatim comments (with some amendments to preserve anonymity where necessary) for those eight grouped criticisms that are either very frequent (leading to the possibility of a cumulative rejection) or very salient (being the main issue in rejecting an article). I hope that providing verbatim criticisms from reviewer reports can allow researchers new to international journal publication to become familiar with the kinds of comments typically given on articles. I also hope to induce some guidelines that may help researchers write articles that are less likely to be rejected.

1) No link between literature and research

Criticism 1.1:
“There is a general lack of foundation to the article. This may be related to how the previous literature in the area is used. For example, the author lists previous research into SMS without showing how the current study builds on these. The lack of a clear foundation means that the interesting findings of the study are not discussed in an insightful way (indeed, the discussion section of the article is not really related to the findings).”

Criticism 1.2:
“The rationale for conducting the research is unclear. The introduction presents the teaching context and then jumps to the purpose of the study without providing any links between the two. Similarly, while the literature review provides a useful overview of feedback on students’ writing, it does not lead to the purposes of conducting the research.”

Guideline: The term ‘literature review’ is perhaps a misnomer, as the purpose of this section is to give an argument providing the rationale for the research, rather than simply provide a review of the existing literature. Perhaps because of this misunderstanding, some novice researchers use the literature review to list (with some details) the previous key studies in the area of their research. Instead, although there may be paragraphs providing background knowledge on areas readers are unlikely to be familiar with, the majority of the literature needs to be presented in a way that builds an argument for why the research is being conducted.

2) No reason why research is useful

Criticism 2.1:
“Most seriously, I am not clear how the research adds to the existing literature on L1/L2 use. Generally, in a given field of research (such as language of classroom communication), there is a longitudinal move from initial descriptions towards explanations and evaluations. The current article is descriptive in nature, somewhat unexpected for a field of research with a history of over 30 years. In itself, this might not be a problem, but there is no clear indication of how the article adds to the field.”

Criticism 2.2:
“I am not really clear what the point of the article is. On a simplistic level, the article shows that students interact in different ways for different task types – a finding which is not very surprising. The literature review, while well-argued and covering a good range, appears to state that there has been a lot of previous research in the area of floor management, including the floor structure of different tasks. With such a wealth of existing literature, it is unclear how the current article adds anything.”

Criticism 2.3:
“The purpose of the research is not altogether clear. While the literature review covers a fair amount of ground, it does not lead into the present study. No research questions are given and, by the end of the article, I’m not really sure why I should be interested in the study.”

Guideline: The author needs to persuade the audience that the research is worthwhile since it serves a valid and useful purpose. The purpose can be either theoretical (e.g. challenging an existing theory) or practical (e.g. having implications that could change current practice). Often, the purpose is presented as filling a gap in the literature which is argued to exist in the literature review. However, not all gaps that exist are worth filling (e.g. I am not aware of any research into the effects of the teacher’s sock colour on learning), and many are of dubious value (e.g. implementing a standard questionnaire in a new context). The research therefore needs to show a worthwhile purpose argued for in the literature review, stated clearly (usually in research questions) and highlighted in the discussion.

3) Lack of details concerning the methodology

Criticism 3.1:
“A lack of rigour in the article is also a concern. This is perhaps best illustrated in the error analysis. How was this conducted? How were errors identified and categorised? There is no need for great detail concerning such points, but some statements of principles used would be useful. Similarly, a lack of details about the data analysis in the survey leaves me wondering why the percentages reported don’t add up to 100.”

Criticism 3.2:
“There are some problems with the methodology of the study. Given the centrality of the interview and questionnaire to the research, I would have expected to see what questions were asked. This is especially important for the questionnaire as it is impossible to evaluate the findings. Without knowing the questions and the meaning of the rating scales, I don’t know what, say, a mean of 3.66 for instrumental orientation means.”

Guideline: Enough details are needed to allow readers to understand the findings, make judgments about their validity and reliability, and make their own interpretations. Word count permitting, too many details are better than too few.

4) Claimed principles not followed

Criticism 4.1:
“More seriously, the research claims to ‘develop an insider’s view of the phenomenon’, but the approach of counting frequencies of reference and providing illustrative quotations means that this is not achieved.”

Criticism 4.2:
“In the methodology section, the author states that the data will be analysed using grounded theory analysis, but there is no evidence concerning how coding was conducted or how themes were identified. Indeed, from the findings, it appears that the identified themes follow the interview questions, and thus that no attempt to apply a grounded theory analysis was made.”

Criticism 4.3:
“On p. 7, the research design is termed “A mixed-methods research design”, but no qualitative data is used.”

Guideline: It may be tempting to claim that a certain methodological approach has been used in the research since this appears to provide a veneer of credibility. However, most methodological approaches have clear implications for, and even restrictions on, how the research needs to be conducted. If the approach is not implemented as intended, then claiming to follow it will lead to an adverse reaction in the reviewer that clearly outweighs any benefit from making the claim.

5) Poor interpretations

Criticism 5.1:
“Some of the interpretations of the findings are not very persuasive. For instance, on p. 23 the authors interpret the findings as suggesting that students prefer lectures; yet, on p. 24 the students appear to complain about lectures and the lack of opportunities to practise.”

Criticism 5.2:
“The findings are very weak. Quotations, where included, are generally of clause length and thus do not provide the necessary context for the reader to interpret the quote. More seriously, many points are made without any reference to the data (even in the form of very short quotes). For instance, in the final paragraph of 4.1 concerning school commitment, no evidence concerning the school under investigation is provided. The paragraph reads more like a paragraph from a literature review than part of the findings. Furthermore, where evidence is provided, often this does not match the arguments being made. In the first paragraph of 4.1, for example, it is argued that the CALL program encourages learning and boosts confidence, but the quotations concerning these points concern enjoyment.”

Guideline: Interpretations generally aim to highlight key aspects of the findings for the readers, often by focusing attention on a particularly salient finding or by summarising a pattern in the findings. Interpretations should concern those aspects that are truly worth highlighting (and not, for example, simply be a repetition of a table in prose) and should be valid and reasonable. To allow readers to judge the validity of the interpretations, findings should be given in enough depth.

6) Discussion not based on findings or not related to purpose

Criticism 6.1:
“Several contributions of the research are claimed in the discussion section, but some of these are already well accepted in the field, while others are not based on the data. For instance, there is discussion of the motivations of teachers (bottom p. 17) and reasons for variability between teachers (top p. 19), yet there is no data concerning motivation or reasons.”

Criticism 6.2:
“The implications are not derived from the findings. For instance, there is nothing in the findings asking that the contents of programs be regularly updated. The conclusion is also not linked to the findings (e.g. there is nothing in the findings to indicate a problem of access that could be solved by increasing the number of licences). Indeed, at one point, the conclusion conflicts with the findings: in point 2 of the conclusion it is stated that monitoring is limited to the teachers’ resourcefulness, whereas on p. 12 it is stated that the program includes a tool to monitor progress.”

Guideline: Although the researcher may be tempted to use the discussion section as a platform for arguing for their beliefs, the discussion must be (at least initially) based on the findings which take priority. Care should be taken that arguments which are not derivable from the findings are not presented.

7) Discussion is repetition of literature review

Criticism 7.1:
“Much of the discussion section appears to be a repetition of the literature review, rather than a true discussion of the findings. As an alternative the author could be suggesting things like certain plateaus of proficiency that need to be reached before improvements in the comprehension of different text types become apparent.”

Criticism 7.2:
“The conclusion shifts the focus away from instructional strategies and back to the general benefits of CMC. It thus appears to be more like a reiteration of the literature review than a conclusion to the article.”

Guideline: The discussion section is sometimes (with good reason) titled ‘Discussion of the findings’ and should use the key findings as the starting point for discussion. Where relevant, the discussion can refer back to the literature review, but it should go beyond what has already been stated in the literature review by, for example, focusing on salient findings which have implications contradicting or not previously considered in the literature.

8) Dubious explanation of results, or unsubstantiated claims

Criticism 8.1:
“The discussion and conclusion have the feel of shoehorning the data into a predetermined explanation. While norms are one way of explaining the data, there is no direct evidence for norms, and several other explanations are possible. Furthermore, although the addition of qualitative data from another paper does shed light on the findings, this additional data appears to be used where arguments for norms based solely on the quantitative data are not convincing. The overemphasis of interpreting data as being due to norms also appears earlier where it is claimed that any similarities between classes are due to institutional norms (again, several alternative explanations, such as the homogeneity of the students, are possible).”

Criticism 8.2:
“The discussion claims that students have a ‘confused mindset’ (a doubtful description in itself), but also that their opinions can be ‘explained in a rational manner’. Much of this explanation is of a very dubious post hoc nature. For example, the preference for NSTs for both speaking and reading is explained based on the high correlation between phonological awareness and reading ability. If this dubious claim is correct, I don’t see why it wouldn’t also explain a preference for NSTs for listening. Nearly any combination of results can be ‘explained’ on doubtful bases such as this, but explanations based on other literature from the same area (such as the differences between explicit and implicit beliefs in students’ preferences for NSTs or NNSTs) is more persuasive. It would also be useful if the results from this study were compared to results from similar studies.”

Guideline: Researchers have their own reasons and beliefs which motivate them to conduct the research. There is often a tendency for discussion to centre around such beliefs. However, a discussion which considers the findings from several perspectives or which presents several competing explanations for findings is more persuasive than one which presents a single viewpoint based on the researcher’s existing beliefs.

4. Discussion

The findings suggest that no single article section or research quality criterion is particularly likely to lead to an article being rejected, since the articles receive on average over four substantive comments each. Reading the introductory paragraphs to the reports, only three articles were clearly rejected on the basis of a single comment (two for Inappropriate topic for journal, and one for Poor quality main instrument). Rather, most articles were rejected based on an accumulation of criticisms. The following extract from an introductory paragraph is typical of the explanation of reasons for rejection:

“The article under review has some potential in this area, but its purely descriptive nature (rather than being explanatory) means that it is unclear how it contributes to the field. Together with several weaknesses in the methodology and the writing up (detailed below), this means that the article cannot be accepted in its present form.”

Some of these accumulating criticisms are relatively frequent but also fairly straightforward to revise, such as Lack of details concerning the methodology, the frequency of which led to the overall high frequencies for the methodology section and the criterion of clarity. By themselves, these criticisms have little effect on a decision to reject an article, but when combined with four or five or more other criticisms, they may lead the reviewer to pass a tipping point and decide to reject the article. Being aware of the types of criticisms which are frequently made by reviewers may enable researchers to reduce the number of accumulated minor criticisms and thus lessen the chance of their article being rejected.

Jaroongkhongdach et al. (2012) identified five criteria of research quality based on the literature analysing research articles and from an analysis of the articles themselves. These five criteria also seem applicable to reviewer comments, with over three-quarters of the individual criticisms assignable to a quality criterion. Both Jaroongkhongdach et al. and this paper examine research in applied linguistics, a soft-applied discipline where variables are varied, causal connections are tenuous, synthetic inquiry strategies are used, authors typically make personal stands in the text, and the readers’ need to be involved in the negotiation of knowledge-making is acknowledged (Hyland, 1998). These characteristics of applied linguistics mean that most research in the field aims to persuade readers through the discourse of the article, rather than simply presenting relevant facts within the context of the discipline as the main persuasive strategy as in the hard-pure sciences. A prerequisite for persuasive discourse is comprehensibility, a characteristic to which both clarity and coherence contribute. In addition, a well-argued text with valid justifications and awareness of problems is generally more persuasive than one without these features (Cockcroft and Cockcroft, 2005). The research quality criteria therefore may be subsumed into the broader purpose of persuading the reader of the value and usefulness of the research. Reviewers’ comments leading to rejection, then, indicate that the article has not persuaded them of the value of publishing the research.

I hope that this article may have raised the awareness of researchers, especially those new to publishing in international refereed journals, of what to expect from reviewers and that paying attention to the points covered in the article may reduce the chances of research being rejected.

Page 62: EDITORS - PR.SoLA KMUTTsola.kmutt.ac.th/sola/wp-content/uploads/2015/01/... · design and evaluation, self-access learning, learner strategies, CALL and discourse analysis. rEFLections

rEFLections July 2013 Vol. 16

57

Finally, I also hope that this article avoids most of the problems covered in reviewer comments that can lead to rejection.

References

Cheng, A. (2006). Analyzing and enacting academic criticism: The case of an L2 graduate learner of academic writing. Journal of Second Language Writing, 15, 279-306.

Cockcroft, R. and Cockcroft, S. (2005). Persuading people: An introduction to rhetoric (2nd ed.). Basingstoke, England: Palgrave Macmillan.

Egbert, J. (2007). Quality analysis of journals in TESOL and applied linguistics. TESOL Quarterly, 41, 157-171.

Gosden, H. (2003). ‘Why not give us the full story?’: Functions of referees’ comments in peer reviews of scientific research papers. Journal of English for Academic Purposes, 2, 87-101.

Huang, J. C. (2010). Publishing and learning writing for publication in English: Perspectives of NNES PhD students in science. Journal of English for Academic Purposes, 9, 33-44.

Hyland, K. (1998). Persuasion and context: The pragmatics of academic metadiscourse. Journal of Pragmatics, 30, 437-455.

Jaroongkhongdach, W., Watson Todd, R., Keyuravong, S. and Hall, D. (2012). Differences in quality between Thai and international research articles in ELT. Journal of English for Academic Purposes, 11, 194-209.

Mungra, P. and Webber, P. (2010). Peer review process in medical research publications: Language and content comments. English for Specific Purposes, 29, 43-53.

Swales, J. M. (2004). Research genres. Cambridge, England: Cambridge University Press.

Swales, J. M. and Feak, C. (2004). Academic writing for graduate students: Essential tasks and skills (2nd ed.). Ann Arbor, MI: University of Michigan Press.

Zuengler, J. and Carroll, H. (2010). Reflections on the steady increase in submissions. The Modern Language Journal, 94, 637-638.

Author: Richard Watson Todd is Associate Professor in Applied Linguistics at KMUTT. He holds a PhD from the University of Liverpool and is the author of numerous articles and books, most recently Much Ado about English (Nicholas Brealey Publishing). [email protected]


Book Review

Title: Literature in Language Education
Author: Geoff Hall
Publisher: Palgrave Macmillan
Publication date: September 2005
No. of pages: 272
ISBN: 9781403943354
Reviewers: Asst. Prof. Thanis Bunsom and Asst. Prof. Dr. Wareesiri Singhasiri, King Mongkut’s University of Technology Thonburi

Having only slight background and experience in teaching literature in a language class, we were delighted to read a book on how to apply English literature in English language teaching. To some teachers, teaching literature and teaching language may be two distant disciplines, and the two fields are therefore kept entirely separate; others, regrettably, may not know how to make full use of literary texts in their classrooms. For both language and literature teachers, this book is a solution and, one might say, a guide to an exciting pedagogical journey.

The author’s introductory chapter, Literature as Discourse, gives a brief overview of existing research into literary language and thus reminds us of the inevitable: a paradox of literature and language. While the study of literary language has “provoked a better understanding of language and language use as a whole, common sense traditionally opposes a stereotype of literary language to ordinary language” (p. 10). Nevertheless, to challenge such beliefs, and perhaps to give us some comfort, Hall reviews a wide range of research and theories and reveals a surprising degree of literariness in ordinary language and vice versa.

Given this revelation, the book further opens up possibilities for applying literature in the language classroom by explicating relevant theories and practices in considerable detail. Hall divides the book into four parts: (i) Language, Literature and Education, (ii) Exploring Research in Language, Literature and Education, (iii) Researching Literature in Language Education and (iv) Resources. The first part of the book describes several approaches to the teaching and reading of literature, covering the origins and evolution of the literary curriculum, communicative language teaching and literature, literature in cultural studies, reader response criticism in reading literature, and the assessment of literary reading.

Part 2 expands on existing research into literature, language and education, ranging from linguistics-related issues in stylistics and corpus analysis of literary texts to education-related ones such as curricula and syllabuses of second language literature. For teachers who would like to understand how the teaching of language and literature could be merged, this section of the book is a must-read.

For those interested in conducting their own research, Part 3 gives clear, easy-to-follow instructions, justifications and examples of different research methods: experimental research, surveys, case studies and ethnography. Hall also advises his readers on possible projects on literature in language education and raises awareness of what to take into consideration while conducting research. The last part is particularly useful for teachers attempting to exploit literary texts in their language classrooms, as it provides a thorough list of scholarly journals, reliable websites and professional organisations. The book ends with an extensive glossary of relevant terms in literature, linguistics and education, and a long, useful list of references that we can explore on our own.

All in all, Hall’s Literature in Language Education is a great starting point for teachers of language and teachers of literature alike. With explanations that make the difficult easy, abundant examples and valuable recommendations, the book can serve as an inspiration for educators wishing to expand their pedagogical horizons.

Reviewers: Thanis Bunsom is an assistant professor at the Department of Language Studies, School of Liberal Arts, King Mongkut’s University of Technology Thonburi (KMUTT). [email protected]

Wareesiri Singhasiri is also an assistant professor in the Department of Language Studies, School of Liberal Arts, KMUTT. She was awarded her PhD in English Language Teaching from the University of Essex, UK. Her research interests include research methodology and learning strategies and styles. [email protected]