
EFFECTS OF PRIMING AND WORK RELATIONSHIP ON LINGUISTIC ALIGNMENT IN COMPUTER-MEDIATED COMMUNICATION AND

HUMAN-COMPUTER INTERACTION

A DISSERTATION

SUBMITTED TO THE DEPARTMENT OF COMMUNICATION

AND THE COMMITTEE ON GRADUATE STUDIES

OF STANFORD UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

Jiang Hu

August 2011


© 2011 by Jiang Hu. All Rights Reserved.

Re-distributed by Stanford University under license with the author.

This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License: http://creativecommons.org/licenses/by-nc/3.0/us/

This dissertation is online at: http://purl.stanford.edu/mh959wt3079



I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Clifford Nass, Primary Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Jeremy Bailenson

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Daniel Jurafsky

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Byron Reeves

Approved for the Stanford University Committee on Graduate Studies.

Patricia J. Gumport, Vice Provost Graduate Education

This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.


Abstract

People engaged in a conversation tend to express themselves in similar ways by using comparable or identical words, phrases, sentence structures, accent, speech rate, etc. This process and its end results are termed “linguistic alignment,” and have also been observed in both computer-mediated communication (CMC) and human-computer interaction (HCI). Many researchers have demonstrated that linguistic alignment can be easily induced through priming, while others have focused on the social aspects of linguistic alignment. Moreover, previous research on linguistic alignment has mostly focused on conversation within dyads. In this dissertation, I report two experimental studies that, in the context of a triadic conference chat setting, investigated the co-presence of alignment as a result of priming and alignment attributable to differences in work relationship (cooperation vs. competition). Similarities and differences observed in the HCI and CMC conditions were also examined. Results show that priming is a strong predictor of alignment even when interlocutors do not directly communicate with each other, but the work relationship between interlocutors and the communication type (i.e., HCI vs. CMC) could also sway the degree of alignment. Additionally, the priming effect on certain stylistic dimensions (e.g., vocabulary complexity) lasted relatively longer than the effect on other features (e.g., capitalization). As a whole, the dissertation proposes a holistic way of examining and understanding linguistic alignment, and offers researchers a new methodology that utilizes realistic user contexts and tasks to study human language behaviors in general and those specific to HCI and CMC.


Acknowledgements

The first study was conceived largely due to the influence of a series of linguistic alignment studies that were part of the Edinburgh-Stanford Link project. I was funded by Scottish Enterprise through the Link project when the first experiment was conducted. I would like to thank Liz Harrison, Erin Geary, and Devin Carter for their help with the development and running of Study I. My advisor, Cliff Nass, was especially instrumental during the methodological refinement process for Study II. I am forever grateful for his advice, insights, encouragement, kindness, and patience.

During my years at Stanford, I was fortunate to work alongside some talented students who are also beautiful human beings: Jane Wang, Erica Robles, Leila Takayama, and Roselyn Lee. I am also indebted to some wonderful people who helped run the Communication Department and the CHIMe lab: Susie Ementon, Joan Furguson, Barbara Kataoka, and many more. In particular, with a smile that lights up the whole department, Susie has always been there for me with all the help and advice I needed to survive and to succeed.

I want to thank my parents and sisters for everything they gave me when they themselves did not have much. My father passed away before I started pursuing my PhD at Stanford. I am sure he would be extremely proud of me if he were still alive.

My partner of almost 11 years, Yancheng (George) Mei, provided me with all the love and support I needed to finish my degrees at Stanford. This dissertation is dedicated to him.


Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
CHAPTER 1 Introduction
CHAPTER 2 Linguistic Alignment
  2.1 Terminologies
  2.2 Levels of Alignment
    2.2.1 Lexical Level
    2.2.2 Syntactic Level
    2.2.3 Acoustic-Prosodic Level
    2.2.4 Stylistic Level
  2.3 Communication Type and Modality
    2.3.1 Face-to-Face Communication
    2.3.2 Computer-Mediated Communication
    2.3.3 Human-Computer Interaction
  2.4 Research Methods
    2.4.1 Corpus-Based Approach
    2.4.2 Experimental Approach
    2.4.3 Corpus-Based vs. Experimental Approaches
CHAPTER 3 Linguistic Alignment in CMC and HCI: Empirical Findings
  3.1 Alignment in FtF, CMC, or HCI
    3.1.1 Alignment in FtF
    3.1.2 Alignment in CMC
    3.1.3 Alignment in HCI
  3.2 Comparison of Alignment in FtF, CMC, and HCI
    3.2.1 CMC vs. FtF
    3.2.2 CMC vs. HCI
CHAPTER 4 Beyond Dialog: Linguistic Alignment in Polylogue
CHAPTER 5 Linguistic Alignment Explicated
  5.1 Automaticity
    5.1.1 Priming: Mechanistic Theories
    5.1.2 Automatic Social Responses: “Over-Learned” Social Behaviors
  5.2 Intentionality
    5.2.1 For Communication Efficiency
    5.2.2 For Social Affect
  5.4 Automaticity and Intentionality: Working Together?
CHAPTER 6 Competition and Cooperation as Social Contexts for Alignment
  6.1 Conceptualization and Experimental Operationalization
  6.2 Effects of Competition and Cooperation
  6.3 Biological Bases for Competition and Cooperation
  6.4 Competition, Cooperation, and Linguistic Alignment
CHAPTER 7 Overview of Studies
CHAPTER 8 Study I Alignment as a Result of Side-Participation: Shared Learning Subject
  8.1 Method
  8.2 Participants
  8.3 Materials
  8.4 Procedure
  8.5 Measures
    8.5.1 Behavioral Measures
    8.5.2 Attitudinal Measures
  8.6 Results
    8.6.1 Behavioral Measures from Q&A Session
    8.6.2 Behavioral Measures from Paper-And-Pencil Quiz
    8.6.3 Post-Task Attitudinal Measures
  8.7 Discussion
CHAPTER 9 Study II Alignment as a Result of Side-Participation: Unshared Learning Subjects
  9.1 Method
  9.2 Participants
  9.3 Materials
  9.4 Procedure
  9.5 Measures
    9.5.1 Behavioral Measures
    9.5.2 Attitudinal Measures
  9.6 Results
    9.6.1 Behavioral Measures from Q&A Session
    9.6.2 Behavioral Measures from Browser-Based Quiz
    9.6.3 Post-Task Attitudinal Measures
  9.7 Discussion
CHAPTER 10 General Discussion and Conclusion
  10.1 Summary of Findings
  10.2 Contributions
    10.2.1 Theoretical Contributions
    10.2.2 Methodological Contributions
    10.2.3 Contributions to Research in HCI, CMC, and Social Psychology
  10.3 Limitations and Future Work
  10.4 Conclusion
Appendix I: Canned Questions and Answers Used by the “Other Learner” in Studies I & II
Appendix II: Questionnaire Used in Study I (HCI Condition)
List of References


List of Tables

Table 1 Side-by-Side Comparison of Study I and Study II
Table 2 Means and Standard Deviations for Average Question Length
Table 3 Means and Standard Deviations for Overall Question Vocabulary Simplicity Score
Table 4 Observed and Predicted Frequencies for the Use of the Critical Noun “Mosua” in Questions with the Cutoff of 0.50
Table 5 Logistic Regression Analysis of the Use of Critical Noun in Typed Questions by SPSS 19
Table 6 Observed and Predicted Frequencies for the Use of Proper Capitalization in Questions with the Cutoff of 0.50
Table 7 Logistic Regression Analysis of Proper Capitalization of Typed Questions by SPSS 19
Table 8 Means and Standard Deviations for Average Fact Length
Table 9 Means and Standard Deviations for Overall Fact Vocabulary Simplicity Score
Table 10 Observed and Predicted Frequencies for the Use of “Mosua” in Learned Facts from Quiz with the Cutoff of 0.50
Table 11 Logistic Regression Analysis of the Use of Critical Noun “Mosua” in Learned Facts by SPSS 19
Table 12 Means and Standard Deviations for Positivity of Learning
Table 13 Means and Standard Deviations for Easiness of Learning
Table 14 Means and Standard Deviations for Teaching Agent Competency
Table 15 Means and Standard Deviations for Other Learner Ability
Table 16 Means and Standard Deviations for Self Ability
Table 17 Means and Standard Deviations for Vocabulary Similarity Perceived by Participants
Table 18 Means and Standard Deviations for Overall Syntax Similarity Perceived by Participants
Table 19 Means and Standard Deviations for Average Question Length by Condition
Table 20 Means and Standard Deviations for Overall Question Vocabulary Simplicity Score by Condition
Table 22 Logistic Regression Analysis of the Use of Critical Noun “Zoonkaba” in Questions by SPSS 19
Table 23 Observed and Predicted Frequencies for the Use of Proper Capitalization in Questions with the Cutoff of 0.50
Table 24 Logistic Regression Analysis of Proper Capitalization of Typed Questions by SPSS 19
Table 25 Means and Standard Deviations for Average Fact Length in Self-Subject List
Table 26 Means and Standard Deviations for Overall Fact Readability Score in Self-Subject List
Table 27 Observed and Predicted Frequencies for the Use of the Critical Noun “Zoonkaba” in Learned Facts with the Cutoff of 0.50
Table 28 Logistic Regression Analysis of the Use of “Zoonkaba” in Learned Facts by SPSS 19
Table 29 Logistic Regression Analysis of the Use of Proper Capitalization in Facts from the Self-Subject List
Table 30 Means and Standard Deviations for Number of Facts in Other-Learner-Subject List
Table 31 Means and Standard Deviations for Average Length of Facts in Other-Learner-Subject List
Table 32 Means and Standard Deviations for Overall Fact Vocabulary Simplicity Score in Other-Learner-Subject List
Table 33 Logistic Regression Analysis of the Use of Proper Capitalization in Facts from Self-Subject List
Table 34 Logistic Regression Analysis of the Use of Proper Capitalization in Facts from Other-Learner-Subject List
Table 35 Means and Standard Deviations for Positivity of Learning in Study II
Table 36 Means and Standard Deviations for Easiness of Learning in Study II
Table 37 Means and Standard Deviations for Teaching Agent Competency in Study II
Table 38 Means and Standard Deviations for Other Learner Ability in Study II
Table 39 Means and Standard Deviations for Perceived Self Ability in Study II
Table 40 Means and Standard Deviations for Perceived Vocabulary Similarity in Study II
Table 41 Means and Standard Deviations for Perceived Syntax Similarity in Study II
Table 42 Comparison of Key Results from Studies I and II


List of Figures

Figure 1. A Tangram figure used for psycholinguistic research.
Figure 2. Screenshot showing chat window of Yahoo! Messenger.
Figure 3. Screenshot showing user interface of simulated conference chat room.


CHAPTER 1

Introduction

A large number of researchers interested in human speech have found evidence that interlocutors coordinate their contributions and adapt to each other in dialog (e.g., Branigan, Pickering, & Cleland, 2000; Brennan & Clark, 1996; Garrod & Anderson, 1987; Garrod & Doherty, 1994; Levelt & Kelter, 1982). One important manifestation of this adaptive behavior is that, through the course of a dialog or series of dialogs, paired speakers tend to express themselves in similar ways, a tendency termed alignment (Branigan, Pickering, Pearson, McLean, & Nass, 2003; Pearson, Hu, Branigan, Pickering, & Nass, 2006b; Pearson et al., 2004; Pickering & Garrod, 2004). This phenomenon is also referred to as “accommodation” (Fais, 1996; Leiser, 1989), “convergence” (L. Bell, Gustafson, & Heldner, 2003; Brennan, 1996; Giles & Wiemann, 1987; Leiser, 1989; Natale, 1975; Oviatt, Darves, & Coulston, 2004), “entrainment” (Brennan, 1996; Garrod & Anderson, 1987), “matching” (Cappella & Planalp, 1981; Niederhoffer & Pennebaker, 2002), “mimicry” (Chartrand & Bargh, 1999; Chartrand, Maddux, & Lakin, 2005; Good, Whiteside, Wixon, & Jones, 1984; Guindon, 1991; Lakin & Chartrand, 2003; Lakin, Jefferis, Cheng, & Chartrand, 2003), “reciprocity” (Cappella, 1981), and “repetition” (Cleland & Pickering, 2003; McLean, Pickering, & Branigan, 2004), among other terms.

Substantial evidence has been found for linguistic alignment in terms of lexicon and syntax, meaning that paired interlocutors often share a significant number of words and sentential/phrasal structures (e.g., Cleland & Pickering, 2003; Garrod & Clark, 1993; Garrod & Pickering, 2004; Gries, 2005). At the same time, alignment at the acoustic-prosodic level has also been reported (Oviatt, Bernard, & Levow, 1999; Street, 1984; Street & Cappella, 1989).

Research on alignment of linguistic style, traditionally an area of interest for sociolinguists, has been extended to the written discourse of interactive text-based communication along with the advancement of communication technology. Consequently, the study of stylistic alignment now includes, but is not limited to, the sub-levels of phonology, graphology, lexicon, morphology, syntax, and pragmatics. In recent years, researchers have reported alignment of stylistic dimensions including sentence length and discourse formality (Brennan, 1991; Niederhoffer & Pennebaker, 2002; Zoltan-Ford, 1991). Work along this line has since expanded to cover language use in computer-mediated communication (CMC) as well as human-computer interaction (HCI). This development has, in the meantime, added new methodologies from the HCI research domain to a toolbox of mostly traditional research methods.

Most recently, research on alignment has reached beyond conventional dyadic groups to include dynamics and interlocutor roles that can only be found in polylogues (Branigan, Pickering, McLean, & Cleland, 2007). This setting affords researchers new and interesting ways to study linguistic alignment because both direct and indirect interaction between interlocutors can take place in polylogues. An important premise underlying the attractiveness of this setup is that if an effect can be achieved through indirect interaction, then a stronger effect of the same nature should be expected within direct interaction. For example, if priming is effective in generating alignment via indirect interaction, then presumably it should be able to induce a higher degree of alignment via direct interaction.

But why do humans adapt their language behaviors to achieve alignment? On one hand, a mechanistic view holds that priming is powerful and ubiquitous and causes alignment. On the other hand, a social-process view posits that it is necessary to align as long as the interlocutors intend to jointly progress the conversation and achieve mutual understanding. The former describes an automatic process based on priming; the latter has an intentional flavor in that alignment is considered a strategy. Both theories have merits, but could these two processes work together? If so, how would they interact with each other? If the social aspect is essential to achieving alignment, how would other social contexts and relationships contribute to or moderate alignment? What effect could communication type and modality have on linguistic alignment? Answers to these questions will show whether a more holistic view incorporating both automaticity and intentionality is a valid and better approach.

After this introduction, Chapter 2 offers an overview of linguistic alignment, covering the popular terms used for this phenomenon, the linguistic levels at which alignment has been reported, the communication types and modalities involved in alignment research, and popular research methods. Chapter 3 presents empirical findings of linguistic alignment in FtF, CMC, and HCI, plus studies that compared alignment across these settings. Chapter 4 starts with the introduction of a key concept beyond dialog, the polylogue, as well as the definitions of speaker, addressee, and side-participant. I then summarize a series of alignment studies that took place in polylogues.

In Chapter 5, I present the two major theories about linguistic alignment: a mechanistic view that focuses on the effect of priming, and a social-process view that emphasizes the joint effort to progress a conversation and achieve mutual understanding. After reviewing the main arguments from both sides, I propose a holistic view with an automatic branch and an intentional branch to examine linguistic alignment. I further suggest that under automaticity there could be the rather mechanistic priming process and automatic social responses; under intentionality there could be an emphasis on communication efficiency and a focus on gaining social affect. Chapter 6 follows the discussion on alignment for social affect, and reviews the conceptualization and effects of competition and cooperation. After reviewing an fMRI study about the biological basis for competition and cooperation, I predict that these two social contexts or relationships have the potential to mediate alignment from priming.

Chapter 7 offers an overview of the two experimental studies conducted for this dissertation. Chapter 8 reports Study I, which examined how priming (simple vs. complex discourse), social relationship (competition vs. cooperation), and communication type (HCI vs. CMC) could cause or influence stylistic alignment as a result of side-participation in a polylogue in a three-way conference text chat room. Chapter 9 reports Study II, which was a follow-up of Study I but had one fundamental difference. I introduce this major change and describe other methodological refinements made for Study II. Finally, in Chapter 10, I summarize findings from the two experiments and offer general discussions of contributions, limitations, and future research directions.


CHAPTER 2

Linguistic Alignment

2.1 Terminologies

The action and result of interlocutors expressing themselves in similar ways in conversation are called alignment. The linguistic portion of this action and result is simply termed linguistic alignment. Among the vast literature on human language use, this phenomenon has been presented and discussed under many other terms. In this dissertation, “alignment” is chosen over many other alternatives. Below is a list of some popular ones:

• “accommodation” (e.g., Fais, 1996; Giles & Smith, 1979; Leiser, 1989; Staum Casasanto, Jasmin, & Casasanto, 2010; Street & Giles, 1982)

• “adaptation” (e.g., Darves & Oviatt, 2002; Nilsenová & Nolting, 2010; Oviatt, et al., 1999; Street & Cappella, 1989)

• “co-ordination” or “coordination” (e.g., Branigan, et al., 2000; Garrod & Anderson, 1987; Garrod & Clark, 1993; Garrod & Doherty, 1994)

• “convergence” (e.g., L. Bell, et al., 2003; Brennan, 1996; Coulston, Oviatt, & Darves, 2002; Giles & Wiemann, 1987; Leiser, 1989; Natale, 1975; Oviatt, et al., 2004)

• “entrainment” (e.g., Brennan, 1996; Garrod & Anderson, 1987)

• “imitation” (e.g., Bosshardt, Sappok, Knipschild, & Holscher, 1997)

• “matching” (e.g., Cappella & Planalp, 1981; Gonzales, Hancock, & Pennebaker, 2010; Niederhoffer & Pennebaker, 2002; Taylor & Thomas, 2008)

• “mimicry” (e.g., Chartrand & Bargh, 1999; Chartrand, et al., 2005; Good, et al., 1984; Guindon, 1991; Lakin & Chartrand, 2003; Lakin, et al., 2003; Scissors, Gill, & Gergle, 2008; van Baaren, Holland, Kawakami, & Knippenberg, 2004)

• “reciprocity” (e.g., Cappella, 1981)

• “repetition” (e.g., Cleland & Pickering, 2003; McLean, et al., 2004; Tannen, 1989)

To describe participants in conversation, this dissertation follows Herbert Clark and colleagues’ categorization of speakers, addressees, and side-participants. Unless specified, naïve participants or participants refer to human subjects in experimental studies, while interlocutors and conversation partners are people engaged in conversations who play the roles of speakers and addressees alternately.

2.2 Levels of Alignment

Alignment behaviors have been observed mostly at the lexical, syntactic, acoustic-prosodic, and stylistic levels. Among these levels, lexical and syntactic features of language are linguistic; acoustic-prosodic features of speech are often considered paralinguistic or extralinguistic; stylistic features can be combinations of both linguistic and extralinguistic elements, and are close to sociolinguistics. Taking these considerations and space limitations into account, the discussion of alignment behaviors at the latter two levels is selective in this review, and only includes studies deemed classic and/or relevant to HCI research.

2.2.1 Lexical Level

Most lexical features observed or examined in linguistic alignment research have been the names or referring expressions used to describe abstract shapes (e.g., Clark & Wilkes-Gibbs, 1986), objects (Brennan, 1996; Brennan & Clark, 1996; Pearson, et al., 2006b), and locations (Garrod & Anderson, 1987). These names and expressions can be as short as a single word, or as long as a noun phrase with a relative clause (e.g., “the square that is moving to your left”). In the latter case, such alignment could also be considered structural or syntactic. For example, Garrod & Anderson (1987) reported a study in which participants played a maze game in pairs via computer. The authors examined a corpus of 56 dialogs between participants, and found that paired participants adopted very similar forms of spatial descriptions. This pervasive phenomenon is not limited to dialogs between adults; similar lexical alignment behavior was also observed among children (Garrod & Clark, 1993).

2.2.2 Syntactic Level

Syntactic alignment has been observed at both the level of the noun phrase (NP) and the level of the complete sentence.

Specifically, the noun phrase structures that have been studied include pre-nominal (PN) and relative clause (RC) constructions. A PN noun phrase is an utterance in which an adjective precedes a noun (e.g., “red square,” “the red square,” “a red square”), whereas in an RC noun phrase the noun is followed by a post-nominal phrase containing an adjective (e.g., “square that’s red,” “the square that’s red,” “square that is red,” “the square that is red,” “square which is red,” “the square which is red”). The use of the RC construction is in general less frequent than that of the PN construction, at least when the modifier is a single word (e.g., “the book that was red” vs. “the book that was red and torn”) (Cleland & Pickering, 2003).

At the sentence level, researchers have found evidence of alignment of passive voice, dative structures, and the particle placement of transitive phrasal verbs. The occurrence of passive voice in English is much less frequent than that of active voice. Researchers have found that alignment is likely to take place for passive voice only (Estival, 1985). That is to say, a speaker tends to use passive voice immediately or shortly after his/her conversation partner does so. This alignment is asymmetric because the use of active voice by one interlocutor does not elicit the use of active voice by the other.

Dative structures expressing ditransitive actions have been used in many experimental studies and include two alternative forms: prepositional object (PO) structures (e.g., “the teacher handing the book to the soldier”) and double object (DO) structures (e.g., “the teacher handing the soldier the book”) (Branigan, et al., 2000; Branigan, Pickering, McLean, & Cleland, in press). Unlike active and passive voices of transitive verbs, interlocutors do not seem to have a pronounced propensity to use one structure or the other.


Finally, particle placement in transitive phrasal verbs is another feature observed in alignment research (Gries, 2005). The difference between the two alternative structures can be seen in “John picked up the book” vs. “John picked the book up.” For example, as a result of alignment in spoken dialog, an interlocutor who has just heard his/her conversation partner utter “John picked up the book” or “John picked the book up” tends to produce a corresponding sentence like “Jane puts on the shoes” or “Jane puts the shoes on.”

When syntactic alignment was first reported, many researchers believed that it was probably caused by alignment of other non-syntactic factors such as lexical priming (i.e., repetition of words used by a conversation partner). Through experimental control, researchers were able to provide some evidence that syntactic alignment exists independently of lexical alignment. For example, Branigan and colleagues (2000) conducted an experiment that used a confederate-scripting technique to examine syntactic alignment as a result of priming. Each naïve participant was paired with a confederate, and they took turns describing pictures to each other. The confederate followed a script to generate descriptions with systematically varied syntactic structures. Results showed that the syntactic structure used by the confederate affected how the naïve participant syntactically constructed his or her subsequent description (Branigan, et al., 2000).

2.2.3 Acoustic-Prosodic Level

Naturally, alignment of acoustic-prosodic features can only be found in spoken dialogs. Since the mid-1970s, researchers have observed alignment of vocal intensity, speech rate, accent, and response latency (i.e., pauses between speaker turns) between interlocutors. Although less regularly, alignment of some other features such as turn duration, vocalization duration, and internal pause duration (i.e., pauses within a speaker’s turn) has also been reported. For example, Street (1984) analyzed a collection of recorded interviews to examine the speech behaviors of interviewers and interviewees. Using time-series analysis, the author found prosodic alignment in terms of speech rate and response latency between paired interviewers and interviewees.

Again, such alignment behavior is not limited to adult interlocutors; researchers found that children engaged in dyadic conversations with an adult varied their speech output to align with the adult’s changes in amplitude, speech rate, and response latency (Coulston, et al., 2002; Street & Cappella, 1989).

2.2.4 Stylistic Level

Traditionally, style has been studied primarily by sociolinguists to examine the correlation between linguistic variations in speech and social factors such as sex, age, race, and social class (A. Bell, 1984; Eckert, 2001; Labov, 1966, 2001; Moore, 2004). The advancement of communication technology has helped to extend the research of stylistic alignment to written discourses of interactive text-based communication. Consequently, the study of stylistic alignment now includes but is not limited to the sub-levels of phonology, graphology, lexicon, morphology, syntax, and pragmatics. In recent years, psycholinguists and communication researchers have observed that, without the interference of social factors, certain stylistic features such as sentence length and discourse formality used by one interlocutor (a person or a computer) often elicit similar responses from the other interlocutor (a person) engaged in the same dialog (Brennan, 1991; Niederhoffer & Pennebaker, 2002; Zoltan-Ford, 1991).

Using a Wizard-of-Oz (WOZ) study that simulated a train schedule inquiry system via telephone, Richards and Underwood (1984) observed stylistic alignment in that the politeness and informality of the system’s introductory messages elicited similar responses from participants. This was especially true when the system’s demands on the participants were relatively unspecific. Some years later, in the context of a database query task, Brennan (1991) conducted an experiment in an attempt to compare linguistic dynamics in HCI vs. CMC and to examine the effect of the style of responses on user input. Participants carried out keyboard conversations with a partner; the WOZ method was used for the HCI condition. Results show that participants’ overall question length tended to become shorter when the partner always responded with short answers, and to grow longer when the partner replied with longer sentences. This stylistic alignment did not seem to differ between HCI and CMC. Similar evidence of stylistic alignment in terms of sentence length was also found in Zoltan-Ford’s (1991) WOZ study, in which participants used both speech and text to interact with an inventory system.

More recently, Niederhoffer and Pennebaker (2002) conducted two experiments and one corpus-based study to examine stylistic alignment between paired interlocutors. The authors used a self-developed, computer-based text analysis program called Linguistic Inquiry and Word Count (LIWC) for data analysis. The two lab experiments involved college students who carried out one-on-one dialogs in web-based chat rooms; the corpus-based study analyzed parts of the official transcripts of the Watergate tapes, in which President Richard Nixon had one-on-one discussions with his three aides. Results indicate that in both CMC and FtF, interlocutors in dialogs align with each other both at the conversation level and on a turn-by-turn basis. Moreover, such alignment does not seem to be related to the quality of interaction perceived by the interlocutors and by judges. In conclusion, the authors proposed that the degree of stylistic alignment is positively related to the level of engagement between interlocutors. While the authors used the term “linguistic style matching” (LSM), the LIWC program in fact analyzes linguistic features only at the lexical level.
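To make the word-category idea concrete, the sketch below computes a simple style-matching score between two interlocutors’ texts. It is an illustration only, not LIWC itself: the tiny function-word lists are stand-ins for LIWC’s proprietary dictionaries, and the per-category formula follows the symmetric difference score popularized in later LSM work (e.g., Gonzales, Hancock, & Pennebaker, 2010); it is assumed here rather than taken from Niederhoffer and Pennebaker’s original analysis.

# Minimal sketch of lexical style matching between two speakers.
# The word categories below are illustrative stand-ins for LIWC's
# proprietary dictionaries; the per-category score
#   1 - |p1 - p2| / (p1 + p2)
# follows the LSM formula used in later work and is an assumption here,
# not a reconstruction of any particular study's metric.
import re

CATEGORIES = {
    "articles":     {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "to", "with", "for"},
    "pronouns":     {"i", "you", "we", "he", "she", "it", "they"},
}

def category_rates(text):
    """Proportion of a speaker's words falling into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {name: sum(w in vocab for w in words) / total
            for name, vocab in CATEGORIES.items()}

def style_matching(text_a, text_b):
    """Average per-category similarity; 1.0 means identical usage rates."""
    rates_a, rates_b = category_rates(text_a), category_rates(text_b)
    scores = []
    for name in CATEGORIES:
        p1, p2 = rates_a[name], rates_b[name]
        scores.append(1.0 if p1 == p2 == 0
                      else 1.0 - abs(p1 - p2) / (p1 + p2))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    a = "I put the book on the table for you."
    b = "You left it on the desk with the notes."
    print(f"Style matching: {style_matching(a, b):.2f}")

In practice, such a score would be computed per conversation or per turn block for each pair of interlocutors and then compared across experimental conditions.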

2.3 Communication Type and Modality

When studying language use in conversation, there are three major types of interaction to consider: face-to-face communication (FtF), computer-mediated communication (CMC), and human-computer interaction (HCI). The modalities used in these interactions include speech and text. For text-based communication, interactions may be synchronous or asynchronous. Although this review surveys research work on FtF, the focus remains on linguistic alignment in CMC and HCI, as well as on the comparison of alignment behaviors observed in these three types of interaction.

2.3.1 Face-to-Face Communication


FtF used to be the only type of interaction upon which linguistic research was conducted. This type of conversation is in general considered spontaneous and synchronous. To study linguistic alignment in FtF, some researchers examine archived corpora and analyze corpora collected from field studies or experiments. Increasingly, researchers are turning to experimental methods in which interlocutors are paired up in lab environments to perform communication tasks or to play games that facilitate conversation (Clark, 1996; Clark & Wilkes-Gibbs, 1986). In many lab environments, however, an opaque screen is often used to separate two interlocutors in FtF so that they rely on nothing but speech to communicate (Branigan, et al., 2000; Clark & Wilkes-Gibbs, 1986; Hartsuiker, Pickering, & Veltkamp, 2004). Although this is not strictly “face-to-face,” for simplicity and because it is a popular setup, such interaction is still referred to as FtF hereafter. More details of this type of experimental method can be found under Section 2.4.

2.3.2 Computer-Mediated Communication

Revolutions in communication technology have drastically changed the way human beings communicate with each other by making geographic boundaries and time zone differences less of a barrier to interpersonal communication. CMC has since become an essential way for professionals and regular folks alike to communicate and socialize. The pervasive and exponentially growing use of and reliance on networked computers and the Internet, for both synchronous text-based exchanges (e.g., instant messaging and web chat) and asynchronous communication (e.g., via e-mail and Web forums), has attracted many researchers to examine the impact of CMC on linguistic alignment in an attempt to understand the alignment phenomenon better (Branigan, et al., 2003; Brennan, 1991, 1998; Scissors, et al., 2008; Smith & Wheeldon, 2001).

The advantages and constraints associated with the mediating communication technology introduced some linguistic characteristics different from those observed in other “conventional” contexts. For example, linking verbs, subject pronouns, and articles are often omitted by interlocutors engaged in text-based chat. Some researchers even suggest that text-based discourse in CMC be treated as a “register,” a language variety for a particular situation, calling such discourse a “hybrid” with features of both spoken and written language (Ferrara, Brunner, & Whittemore, 1991).

While the various types of text-based CMC appear to differ, they share several important characteristics. First, each of these types requires the use of a keyboard. As an acquired skill, typing is by nature a slower process than speaking yet, in general, faster than writing by hand. The faster rate of typing has potential effects on the composing process and allows easy revision and editing. Second, most of these types are rendered in text only, and thus lack some useful paralinguistic cues necessary for efficient communication. Third, co-presence in time and space is not required. Fourth, the importance that acquaintance and identity carry in FtF has decreased or been transformed (Ferrara, et al., 1991). Considering these distinctive features, it is natural for psycholinguists and communication researchers to expect some alignment behaviors different from those found in FtF.

At the same time, the impact of the computer as a medium on speech-based CMC is also inevitable because many paralinguistic or non-verbal cues important to human communication are unavailable. Currently, however, CMC is predominantly text-based, even though the use of speech, pictures (still and animated), and other modalities is on the rise.

2.3.3 Human-Computer Interaction

While the theorization of the Media Equation (Reeves & Nass, 1996) has tremendous implications for HCI research, the specificities of language use in HCI are still to be explored. With the emergence and fast development of natural language (NL) interfaces, dialogs between users and computer systems are no longer considered a novelty. Before and during the development of natural language interfaces, computational linguists and psychologists have had to conduct exploratory research studies or user testing sessions to examine the dynamics and characteristics of human-computer conversation. Although the development of NL interfaces has not really passed its infancy, empirical findings such as lexical and syntactic alignment in HCI have provided important implications for improving system performance and user satisfaction.

In addition to helping with NL interface design, experimental studies of alignment in HCI also have great potential to help advance the understanding of human language behavior as a whole. Chapter 3 offers a detailed review of research on linguistic alignment in HCI.

2.4 Research Methods

There are two major approaches to the study of linguistic alignment in conversation: corpus-based and experimental approaches. Although the former has been the primary method for much linguistic research, an increasing number of psycholinguists and communication researchers have turned to lab experiments for their quantitative power and rigorous control over independent variables of interest. When corpora are created using data collected in experiments (often in relatively natural settings), however, a clear line between these two approaches can hardly be drawn.

2.4.1 Corpus-Based Approach

In linguistics, a corpus is often a large and structured set of texts. As mentioned above, there are naturalistic corpora as well as corpora collected in experiments. The International Corpus of English (ICE) is a good example of a naturalistic corpus. Corpora sometimes include both spoken and written texts, and they may or may not have been annotated.

A typical annotation process is called part-of-speech tagging, or POS-tagging, which refers to a procedure where researchers classify linguistic units into one of a set of categories. For instance, words (verb, noun, adjective, etc.), phrases, referring expressions, and syntactic constructions are some typical units to be tagged for the analysis of linguistic alignment.
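As a concrete illustration of this kind of annotation, the short sketch below tags the words of two hypothetical turns with an off-the-shelf tagger. It assumes the NLTK library and its default English tokenizer and tagger resources are available; it is meant only to show what tagged units look like before an alignment analysis, not to reproduce any particular study’s annotation scheme.

# Minimal POS-tagging sketch using NLTK (assumes `pip install nltk`
# plus the punkt and averaged_perceptron_tagger resources).
import nltk

for resource in ("punkt", "averaged_perceptron_tagger"):
    nltk.download(resource, quiet=True)  # fetch tokenizer/tagger models once

turns = [
    "The teacher handed the book to the soldier.",  # prepositional-object form
    "The teacher handed the soldier the book.",     # double-object form
]

for turn in turns:
    tokens = nltk.word_tokenize(turn)
    tagged = nltk.pos_tag(tokens)  # list of (word, POS tag) pairs
    print(tagged)

# A later alignment analysis could, for example, count how often paired
# speakers reuse the same tag sequences (here, the dative alternation).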

Traditionally, the corpus-based approach is used for theoretical research in FtF. In recent years, however, corpora have been collected in HCI to identify design problems within NL interfaces as well as to further understand language use in general.

2.4.2 Experimental Approach


Most researchers cited in this dissertation adopted the experimental approach to study linguistic alignment in conversation. Experiments for theoretical purposes often involve a task or game that facilitates conversation, whereas studies geared toward interface design tend to simulate an NL user interface with some specific domain knowledge.

There are several popular tasks and games used by psycholinguists to examine alignment. One classic task is to arrange cards containing abstract figures generated from a Chinese puzzle game called “Tangram.” In this task, two naïve participants play the roles of director and matcher. Separated by an opaque screen, each participant has 12 cards with 12 different Tangram figures (see Figure 1 for an example) on his/her desk. The director and the matcher communicate with each other so that in the end the matcher arranges his/her 12 cards in the same order as the director’s cards. Many studies by H. Clark and colleagues used this task (e.g., Clark & Wilkes-Gibbs, 1986). This task has been used to study FtF as well as CMC.


Figure 1. A Tangram figure used for psycholinguistic research.

The second popular task is a maze game that has been repeatedly used to collect corpora for analysis (Garrod & Anderson, 1987; Garrod & Clark, 1993; Garrod & Doherty, 1994). In this game, two interlocutors seated in separate rooms collaborate to solve a joint spatial maze displayed on computer screens. The maze is presented in a way that each interlocutor has partial information about his/her partner’s location in the maze. As a result, paired interlocutors generate utterances containing sequences of location descriptions, which become the focus of the researchers’ analysis. This game has been used in a CMC setup, but the researchers’ intention has been to study lexical alignment in general.

The third communication task is a picture-description and -matching game involving a naïve participant and a confederate. As in the Tangram task, the participant and the confederate are seated face-to-face but separated by a screen. The two partners take turns describing pictures on cards to each other, and picking cards that match the descriptions provided by each other. There are experimental items and fillers: experimental items are cards that contain pictures to be described with the linguistic features of concern; fillers are cards used to distract the participant from detecting the purpose of the study, and contain pictures that can only be described in structures different from the features of concern. For maximum control, utterances and responses from the confederate are pre-scripted. This type of task has been used to examine syntactic alignment in FtF (Branigan, et al., 2000; Hartsuiker, et al., 2004). In the meantime, a computer-based version of this game is also available for the study of syntactic alignment in CMC and HCI (Branigan, et al., 2003). Finally, this task can be transformed into a picture-naming and -matching game for the study of lexical alignment. For this purpose, each card contains a single object, but each experimental card contains an object that can be named in more than two ways. Likewise, a computer-based version was devised to study lexical alignment in CMC and HCI (Pearson, et al., 2006b).

After the study of language use was extended to HCI, a new method called the Wizard-of-Oz (WOZ) approach was introduced and widely adopted. In the WOZ approach, a human wizard simulates a computer program or agent that interacts with users and helps them perform a particular task (e.g., information retrieval) in a particular domain (e.g., travel). Compared to working with a realistic prototype or system, the WOZ method is relatively inexpensive and easy to implement. Speech interface developers use this method not only during the planning stage, but also when they try to find remedies for certain system problems such as error handling. Very frequently, along with discoveries that would benefit interface design, researchers also uncover phenomena that would interest linguists, psychologists, communication researchers, and beyond (Guindon, 1991).

Originally intended for the exploration of issues related to NL interface development, the WOZ method was later borrowed by psycholinguists and the like to explore linguistic dynamics in HCI in general. For this purpose, an interactive game or a seemingly realistic task such as database inquiry is used (Brennan, 1991; Pearson, et al., 2006b). Of course, such a method is often used only when programming is impractical, difficult, or unnecessary.

Also, to experimentally study CMC, researchers sometimes adopt a method that is the opposite of the WOZ approach, dubbed the “reversed Wizard-of-Oz” (RWOZ). In this approach, participants are told that their interlocutor is a person when it is in fact a computer program (Branigan, et al., 2003). Compared to CMC language studies involving two real persons (with a confederate or not), the RWOZ method provides rigorous control over one interlocutor’s utterances and language behavior, making it possible for researchers to isolate certain factors and examine the existence of causal relationships between these factors and particular language behaviors of the other interlocutor (i.e., a participant).

Among its many benefits, the WOZ method can help researchers to 1) uncover new phenomena of interest, 2) quantify the prevalence of linguistic phenomena observed in users, 3) isolate and manipulate factors of interest, and 4) establish and interpret causal relationships between those factors and the linguistic phenomena observed (Oviatt & Adams, 2000).


2.4.3 Corpus-Based vs. Experimental Approaches

A review of the history of linguistic alignment research reveals some early corpus-based studies with abundant informal observations and speculations, followed by recent experimental research that provides statistical evidence for or against those earlier findings and hypotheses. Corpus-based and experimental studies of linguistic alignment complement each other in that 1) the corpus-based approach has proved useful for spotting behavioral patterns and generating hypotheses based on large amounts of naturalistic data, and 2) controlled experiments are good at providing unequivocal evidence of alignment caused by factors such as syntactic priming.

It is also important to recognize that each corpus or experiment has its unique

dialog context. Consequently, findings from a single study should not be generalized

before more evidence becomes available. For example, if a corpus is based on job

interviews, it is quite likely that some social factors (e.g., an interviewer’s power over

an interviewee) may strengthen or weaken certain universally existing priming effects

(e.g., lexical choice). On the other hand, in experiments involving the use of a

confederate, the often non-aligning behavior of the confederate could implicitly affect

how participants comprehend and produce their utterances. Researchers using either

approach should be aware of developments based on the other approach. Doing so reduces the chance of making false claims or careless generalizations.

CHAPTER 3

Linguistic Alignment in CMC and HCI: Empirical Findings

Except for closely related studies, empirical findings in the following sections

are listed in ascending chronological order.

3.1 Alignment in FtF, CMC, or HCI

3.1.1 Alignment in FtF

Early reports of linguistic alignment were predominantly informal observations

through the study of large corpora. For example, alignment at the lexical, syntactic,

and discoursal levels in natural dialog was observed and discussed often without the

support of statistics (Schenkein, 1980; Tannen, 1984, 1989; Weiner & Labov, 1983).

Estival (1985) analyzed a corpus of six interviews to examine the alignment of

passive voice in dialog. She tried to prove the existence of syntactic priming

independent of some other interfering factors such as lexical priming, but the results

were not conclusive. After that, there seemed to be a quiet period without another

similar report until Gries (2005) once again used the corpus-based approach to extend

the research on syntactic alignment to the placement of particles in phrasal verbs. The

author analyzed the British component of the ICE (ICE-GB) and found alignment of

datives as well as the placement of particles in phrasal verbs. Most importantly, the

author suggests that the degree of alignment of dative structures and particle

placements may vary across the verbs used in these structures.

On the experimentalist side, Levelt and Kelter (1982) were among the first to

report syntactic alignment in FtF using experimental methods. They conducted a

series of experiments and confirmed other researchers’ (e.g., Schenkein, 1980)

informal observations of linguistic alignment in dialog. More specifically, they

examined the repetition of prepositions in the answer to a question. For example, they

found that shopkeepers who answered the experimenter's phone calls tended to reply to

“What time do you close?” or “At what time do you close?” (in Dutch) with a

corresponding answer of “Five o’clock” or “At five o’clock.” Additionally, in their

final experiment, participants rated aligned question-answer turns as more “natural”

than non-aligned turns. However, the authors observed that such alignment would

disappear when an additional clause was added to the question, which suggests the

transience of the priming effect (Levelt & Kelter, 1982).

Street (1984) analyzed a collection of recorded interviews to examine speech

behaviors of interviewers and interviewees. Using time-series analysis, the author

found prosodic alignment in terms of speech rate and response latency (i.e., pauses

between speaker turns) between paired interviewers and interviewees. Furthermore,

alignment of turn duration was observed in male-male dyads. Finally, the author found

evidence that the degree of these alignment behaviors was positively related to the

perception of speakers’ competence and social attractiveness (Street, 1984). Similar

alignment behavior was also observed in children who engaged in dyadic conversations with an adult (Street & Cappella, 1989).

Clark & Wilkes-Gibbs (1986) used the Tangram game (see 2.4.2) to explore

the process in which interlocutors reach alignment in terms of referring expressions.

They observed an iterative process during which participants presented, repaired,

expanded on, or replaced noun phrases to describe abstract Tangram figures before

they found a version that was accepted by both. Similar alignment behavior through

such iterative process was also reported by Brennan & Clark (1996). They propose a

historical account of the process in which interlocutors establish a “conceptual pact”

and reach a consensus on referring expressions throughout iterations.

In an experimental study, Branigan, Pickering, & Cleland (2000) demonstrated

that participants syntactically aligned with their dialog partner played by a confederate

even though they could not see the confederate. More specifically, in a picture-

description and -matching task (see 2.4.2), participants tended to repeat the PO or DO

structure (i.e., “the cowboy offering the banana to the burglar” or “the cowboy

offering the burglar the banana”) immediately after hearing the confederate use a

corresponding structure. This tendency was significant even when participants

produced their descriptions using verbs different from those used by the confederate

(Branigan, et al., 2000).
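
The dependent measure in studies of this kind reduces to a simple tally: the proportion of PO descriptions produced after PO primes versus after DO primes, with the difference taken as the priming (alignment) effect. The sketch below, with invented trial records, illustrates that computation; it is not the authors' analysis code.

    # Sketch of the usual prime/target tally for dative priming.
    # Each trial records the prime structure just heard and the structure
    # the participant then produced. The data are invented.
    from collections import Counter

    trials = [("PO", "PO"), ("PO", "DO"), ("DO", "DO"),
              ("DO", "PO"), ("PO", "PO"), ("DO", "DO")]
    counts = Counter(trials)  # (prime, target) -> frequency

    def prop_po_after(prime):
        po, do = counts[(prime, "PO")], counts[(prime, "DO")]
        return po / (po + do) if (po + do) else float("nan")

    priming_effect = prop_po_after("PO") - prop_po_after("DO")
    print(f"P(PO | PO prime) = {prop_po_after('PO'):.2f}")   # 0.67
    print(f"P(PO | DO prime) = {prop_po_after('DO'):.2f}")   # 0.33
    print(f"Priming effect   = {priming_effect:.2f}")        # 0.33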

Using a game with a similar setup, Cleland & Pickering (2003) designed three

different experiments to examine the priming effects on alignment of noun phrase

structures. They observed alignment of the relative clause (RC) structure as a result of

syntactic priming. Moreover, they found that when naïve participants repeated the

head nouns used by the confederate in the immediately preceding utterances, the

tendency toward syntactic alignment would increase. That is to say, after the confederate says "the diamond that's red," a participant asked to describe a green diamond is more likely to say "the diamond that's green" than a participant asked to describe a green square is to say "the square that's green." Such a tendency was also observed when

the head nouns used by the naïve participants were semantically related to those used

by the confederate. For example, after hearing the confederate say “the goat that’s red,”

participants were more likely to use the same RC structure to describe a red sheep than

a red knife. However, no increase in alignment was found when the head nouns used

by the naïve participants were phonologically related to those used by the confederate

(Cleland & Pickering, 2003). These findings provide evidence that alignment at one

level may lead to similar behavior at another level.

In another related study, Hartsuiker, Pickering, & Veltkamp (2004) found

syntactic alignment across languages. Specifically, bilingual participants of English

and Spanish were more likely to use English passives in their verbal descriptions of

pictures immediately after their partner (a confederate) used Spanish passives. The

authors posit that syntactic alignment between speakers of two different languages can

be obtained through priming.

3.1.2 Alignment in CMC

Garrod & Anderson (1987) reported a study in which participants played a

maze game in pairs via computer (see 2.4.2 for details). The authors examined a

corpus of 56 dialogs between participants, and found that paired participants adopted

very similar forms of spatial descriptions. In other words, paired participants lexically

aligned with each other throughout the game. Most importantly, the authors point out

that no explicit negotiation was involved before pairs of participants adopted each other's description schemes.

Similar alignment behavior was also observed among children. Garrod & Clark

(1993) analyzed a large corpus of 80 dialogs between paired schoolchildren aged 7 to 12 years who played the same maze game via computer. Overall, all

children displayed lexical alignment behavior in terms of spatial descriptions, but the

coordination between older children was more in-depth. In other words, younger

children sometimes adopted each other’s description without real understanding. The

authors propose that coordination in dialog is a default response (Garrod & Clark,

1993).

Using the same game, Garrod & Doherty (1994) conducted another two studies

to explore how groups establish linguistic convention. They found that while

participants who repeatedly played with the same partners tended to adopt each other's

descriptions quickly, those who conversed with different partners from the same group

displayed increasingly higher degrees of alignment behavior as the game progressed.

It appears that isolated pairs’ alignment behaviors were based on precedence and

salience, while participants from the group coordinated with each other as if they were

developing a language convention. Still, the authors argue that these alignment

behaviors were built upon a basic mechanism for language production and

comprehension within each individual (Garrod & Doherty, 1994).

Even though in the above three groups of studies participants conversed via

computers, the authors’ intention was not to explore the characteristics of lexical

alignment in CMC per se. Instead, all discussions were about lexical alignment in

general.

3.1.3 Alignment in HCI

The development of natural language interfaces has been the major catalyst for

the increase of linguistic alignment research in HCI. Both text- and speech-based

interactions in various contexts have been studied using different research methods.

Leiser (1989) conducted a WOZ experiment to explore the potential of

leveraging linguistic alignment in HCI to improve speech recognition. During the

experiment, each time after participants finished typing a request into a natural

language database querying system, the system confirmed with a paraphrase of the

query prior to offering the answer. All paraphrases were constructed with particular

terms and syntactic structures. Results show that participants spontaneously repeated

these terms and structures in subsequent queries. Similar lexical alignment in Swedish

was also reported in two other WOZ experiments in which participants repeated words

used in questions from the spoken dialog system (Gustafson, Larsson, Carlson, &

Hellman, 1997).

Zoltan-Ford (1991) observed some stylistic alignment behaviors in a WOZ

study involving the use of both speech and text. Specifically, while interacting with an

inventory system, participants adapted the length of their queries to match that of the

system’s responses. More importantly, this kind of desirable adaptation was elicited

more easily with explicit shaping (by rejecting input outside a limited vocabulary and

particular grammars used by the system) than by relying on participants' natural inclination to align.

Coulston, Oviatt, & Darves (2002) examined acoustic alignment in human-

computer dialog. In their WOZ study, 7-to-10-year-old children interacted with a

voice interface where animated characters used text-to-speech (TTS) output to answer

questions about marine biology. Analysis of voice data indicates that the majority of

children in the study aligned with their partner’s TTS voice in terms of vocal intensity.

More specifically, increases in amplitude were observed when the partner’s voice

shifted from introvert to extravert, and the shift from extravert to introvert elicited

decreases in amplitude (Coulston, et al., 2002). Probably in the same study but

reported separately, alignment of response latency was also observed (Darves &

Oviatt, 2002).

Pearson et al. (2006) found that not only do users lexically align with their

computer partners, they may also (nonconsciously) vary the degree of such alignment depending on hints about the sophistication of the computer. In a lab experiment, participants played a picture-naming and -matching game with a computer partner via keyboard; the computer used either the preferred or dispreferred (but acceptable) term to name objects that can be named in two or more ways. While the computer program was loading, a placeholder screen displayed a bogus PC magazine review of the program. The review either implied that the program was sophisticated (i.e., the "advanced computer" condition) or suggested that the program had limited capability (i.e., the

“basic computer” condition). Results show that in both conditions participants tended

to repeat the terms (preferred or dispreferred) previously used by the computer, but

participants in the "basic computer" condition displayed a significantly stronger such tendency than those in the "advanced computer" condition. The authors suggest that

attention should be paid to non-functional aspects of computer systems in order to

optimize system performance (Pearson, et al., 2006b).

3.2 Comparison of Alignment in FtF, CMC, and HCI

With the growing number of experimental studies, more and more researchers

have compared linguistic alignment behaviors displayed in CMC vs. FtF, and HCI vs.

CMC. Although linguistic alignment was observed in all conditions, only a small number of researchers attempted to make head-to-head comparisons with statistical analysis.

3.2.1 CMC vs. FtF

Niederhoffer & Pennebaker (2002) conducted two experiments and one

corpus-based study to examine stylistic alignment between paired interlocutors.

The authors used a self-developed, computer-based text analysis program called

Linguistic Inquiry and Word Count (LIWC) for data analysis. The two lab

experiments involved college students who carried out one-on-one dialog in web-

based chat rooms; the corpus-based study analyzed parts of the official transcripts of

the Watergate tapes where President Richard Nixon had one-on-one discussions with

his three aides. Results indicate that in both CMC and FtF, interlocutors in dialogs

align with each other at both the conversation level and the turn-by-turn level.

Moreover, such alignment does not seem to be related to the quality of interaction

perceived by both interlocutors and judges. In conclusion, the authors proposed that

the degree of stylistic alignment is positively related to the level of engagement

between interlocutors. While the authors used the term “linguistic style matching

(LSM),” their LIWC program only analyzes linguistic features at the lexical level.
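
Although LIWC's dictionaries are proprietary, the general logic of a lexical-level style-matching score can be sketched as follows. The tiny word lists and the particular similarity formula (one minus the normalized absolute difference in category rates, a formulation used in later LSM work) are simplifications for illustration, not the authors' implementation.

    # Sketch of a linguistic style matching (LSM) score over function-word
    # categories. Rate = category words / total words; similarity per
    # category = 1 - |r1 - r2| / (r1 + r2 + eps); LSM = mean similarity.
    # The category lists are placeholders, not LIWC's dictionaries.
    CATEGORIES = {
        "pronouns":     {"i", "you", "we", "he", "she", "it", "they"},
        "articles":     {"a", "an", "the"},
        "prepositions": {"in", "on", "at", "of", "to", "with", "for"},
    }

    def rates(text):
        words = text.lower().split()
        total = max(len(words), 1)
        return {c: sum(w in vocab for w in words) / total
                for c, vocab in CATEGORIES.items()}

    def lsm(text_a, text_b, eps=1e-6):
        ra, rb = rates(text_a), rates(text_b)
        sims = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + eps)
                for c in CATEGORIES]
        return sum(sims) / len(sims)

    print(round(lsm("I went to the store with a friend",
                    "You sat in the park for an hour"), 2))  # 1.0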

3.2.2 CMC vs. HCI

The Media Equation suggests that people interact with computers as if they are

social actors like other human beings (Reeves & Nass, 1996). Still, it is unclear

whether or not the Computers as Social Actors (CASA) paradigm applies to language

use. Some researchers argue that people display different language behaviors in HCI

than in FtF or CMC (Bhatt, Argamon, & Evens, 2004; Brennan, 1991; Dahlback &

Jonsson, 1989; Guindon, 1991; Guindon, Shuldberg, & Conner, 1987; Shechtman &

Horowitz, 2003). Nevertheless, linguistic alignment has been observed in both CMC

and HCI.

Using a WOZ study that simulated a train schedule inquiry system via

telephone, Richards and Underwood (1984) observed stylistic alignment in that

politeness and informality of a system’s introductory messages elicited similar

responses from participants. This was especially true when the system’s demands to

the participants were relatively unspecific. The authors concluded that even though

users tend to adapt their utterances in a way that facilitates speech recognition of voice

interfaces, special attention has to be paid to the composition of system utterances

including introductory messages (Richards & Underwood, 1984).

In the context of a database query task, Brennan (1991) conducted an

experiment in an attempt to compare linguistic dynamics in HCI vs. CMC and to

examine the effect of the style of responses on user input. Participants carried out

keyboard conversations with a partner; the WOZ method was used for the HCI

condition. Results suggest that people may or may not treat a computer partner and a

human partner in the same way depending on the specific linguistic features. For

example, the author found no difference in the number of third-person pronouns (e.g.,

he, she, him, her, his, and it) in HCI vs. CMC. In the meantime, the number of first-person and second-person pronouns (e.g., I, you, and me) was significantly smaller in HCI than in CMC. Also, participants used fewer ellipses (e.g., "what about Ellen's?")

and acknowledgements in HCI than in CMC. As for specific alignment behavior,

participants’ overall question length tended to become shorter when the partner always

responded with short answers, and to grow longer when the partner replied with longer

sentences. This stylistic alignment did not seem to differ between HCI and CMC.

Fais (1996) conducted three experiments and observed lexical alignment in

three different settings. Her first experiment involved two native speakers of English

who performed an information retrieval task via telephone and then via a computer

interface equipped with audio, video, drawing, and text chat. Although lexical

alignment behavior was found in all participants, a naïve participant who played the

role of information-seeking “client” was more likely than the “agent” to repeat the

words used by his/her interlocutor. However, the author did not make it clear whether

or not the “agent” was a confederate, and if so, whether or not the “agent” used pre-

scripted utterances.

In the second experiment based on the first one, native English speakers (as

“clients”) were paired up with native Japanese speakers (as “agents”), and they

conversed via two interpreters—a native English speaker for English-Japanese

translation and a native Japanese speaker for Japanese-English translation. Analysis of

the dialog between native English-speaking participants and the Japanese-English

interpreter revealed symmetric alignment behavior in terms of word choice. That is to

say, when a native English speaker and a non-native English speaker were engaged in

a conversation, both of them made the effort to accommodate each other by displaying

similar degrees of lexical alignment.

The third experiment was a modified version of the second in that the two interpreters were presented to participants as "computer interpreters" and their utterances were made to sound machine-like. In this WOZ study, the two interpreters were believed to be

computer interpreters by all participants. The author again found lexical alignment in

both English-speaking participants (i.e., “clients”) and the Japanese-English interpreter

(i.e., one of the Wizards). As observed in the first experiment, “clients” displayed

higher degrees of alignment than did the Wizard. The author compared results from

these three experiments and found that the second experiment (i.e., with human

interpreters) yielded the most frequent alignment behaviors, followed by the third (i.e.,

with “computer” interpreters). She attempted to explain the observed differences as a

function of concern over “social standing” and “communicative efficiency” (Fais,

1996).

While developing and testing several NL-based voice interfaces, L. Bell (2003)

analyzed corpora of conversations between users and the interfaces. At the prosodic

level, she found that even though users would increase their speech rates when the

speech recognition seemed to be good, their overall speech rates were positively

correlated with those of the systems. That is to say, users would speed up or slow

down so that their speech rates were similar to those of the systems. In the meantime,

at the lexical level, users consistently displayed alignment behavior by adopting the

vocabulary used by the system. At the syntactic level, although the researcher

reported adaptive behaviors from the users, alignment of specific syntaxes or

structures was not discussed. This is understandable because identifying and analyzing

specific structures used in large corpora can be extremely difficult and inconclusive.

In a more theoretically oriented exploration, Branigan et al. (2003) compared

syntactic alignment in CMC and HCI with a simple experiment. A computer version

of the picture-description and -matching task (see 2.4.2 for details) was programmed

using E-Prime by Psychology Software Tools, Inc. Participants played this game by

typing on a keyboard; the RWOZ method was used in the CMC condition. Dative

structures (see 2.2.2 for details) were the syntax of interest in this study. Results show

that participants syntactically aligned with their interlocutors even when participants

used dative verbs different from those used by their interlocutors. That is to say, in both

HCI and CMC, participants were more likely to produce a PO structure than a DO

structure after seeing their interlocutor’s use of the PO structure, or to produce a DO

structure instead of PO structure after their interlocutor used the DO structure. The

authors noted that the degree of syntactic alignment in the CMC condition was

equivalent to that observed in an earlier study (Branigan, et al., 2000) where two

interlocutors were co-present but could not see each other. Results of this study

provide evidence supporting the strong and pervasive presence of an automatic

component that leads to syntactic alignment in dialog.

Using a similar experimental setup, Nass, Hu, Pickering, Pearson, & Branigan

(2005) conducted two studies to further explore syntactic and lexical alignment in

CMC and HCI. In the syntactic alignment study, naïve participants displayed

alignment of datives (i.e., PO vs. DO structures) when the conversation was text-based,

voice-based (synthetic speech was used for the interlocutor while participants spoke to

a microphone), in matched modality, or mismatched modalities. In the lexical

alignment study, naïve participants played a picture-naming and –matching game (see

2.4.2 for details) with an interlocutor that was told to be either a “computer” (HCI) or

a “person” (CMC). Results indicate that participants consistently displayed lexical

alignment behavior throughout the keyboard-based conversation. For the first time, in

both studies, the authors found significantly higher degrees of alignment in HCI than

in CMC (Nass, Hu, Pickering, Pearson, & Branigan, 2005). These results provide

alternative ways to explain the alignment phenomenon.

CHAPTER 4

Beyond Dialog: Linguistic Alignment in Polylogue

The literature review in the previous two chapters might give the impression

that research work on linguistic alignment is quite extensive and in-depth, yet all

studies reviewed in Chapters 2 and 3 focused on linguistic dynamics within dyads.

In a typical dialog, two interlocutors take turns being the speaker and the addressee.

When conversations take place in groups of three or more persons, the simple

categorization of speaker and addressee becomes insufficient: Another two types of

roles—side-participant and overhearer—have to be added to the categorization (Clark,

1996; Clark & Carlson, 1982; Schober & Clark, 1989). Press conferences and

classroom question and answer (Q&A) sessions are typical situations where these four

types of roles are present. Moreover, these role-based labels are not only applicable to

FtF and CMC, but may also be extended to HCI.

Take the classroom Q&A session as an example. Imagine that Students 1 and 2

are engaged in a Q&A session with Teacher 3 while Technician 4 is repairing the

projector in the classroom. When 1 poses a question to 3 and 3 subsequently offers an

answer to 1, 2 is a side-participant; when 2 asks a question and 3 answers 2, 1 is a

side-participant of this particular dialog. If Students 1 and 2 take turns to ask questions,

they take turns to side-participate in the dialog between the other student and Teacher

3, and keep track of the progress of the Q&A session. Technician 4, on the other hand,

is always an overhearer or eavesdropper.
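
These role assignments can be stated mechanically: each utterance has one speaker and one addressee; every other ratified participant is a side-participant, and anyone present but not ratified is an overhearer. A toy sketch using the classroom example's labels (the labels are, of course, only illustrative) makes the categorization explicit.

    # Toy sketch of conversational roles in a polylogue, applying the
    # speaker/addressee/side-participant/overhearer categories to the
    # classroom example above.
    RATIFIED = {"Student1", "Student2", "Teacher3"}   # ratified participants
    PRESENT = RATIFIED | {"Technician4"}              # everyone in the room

    def roles(speaker, addressee):
        """Return each person's role for a single utterance."""
        assignment = {}
        for person in PRESENT:
            if person == speaker:
                assignment[person] = "speaker"
            elif person == addressee:
                assignment[person] = "addressee"
            elif person in RATIFIED:
                assignment[person] = "side-participant"
            else:
                assignment[person] = "overhearer"
        return assignment

    print(roles("Student1", "Teacher3"))  # Student2 -> side-participant
    print(roles("Teacher3", "Student2"))  # Student1 -> side-participant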

Wilkes-Gibbs and Clark (1992) suggest that speakers collaborate with side-

participants as they do with addressees; speakers assume that side-participants also

share common ground with them as addressees do. They speculate that participants

(i.e., speaker, addressee, and side-participant) in multi-party conversation are all

responsible for the orderly accumulation of the conversation record. If so, a current

speaker who was previously a side-participant should be under the influence of

previous utterances especially the most recent ones. Linguistic alignment behavior is

thus expected. In fact, some researchers have found evidence supporting this

prediction.

Branigan, Pickering, McLean, & Cleland (2007) were probably the first to

experimentally explore linguistic alignment beyond dyads. They conducted a series of

four experiments to examine the effect of side-participation on syntactic alignment, to

compare such alignment to that observed in dyadic groups, and to explore how role

change in multi-party conversation affects syntactic alignment. The syntax of concern

was dative structures: prepositional object (PO) structure as in “the maid selling the

book to the teacher” versus double object (DO) structure as in “the maid selling the

teacher the book.” Due to the complex interaction within triadic groups, in the

following discussion the term "naïve participant" is used to refer to a human subject who participated in the experiments, while a "side-participant" is a role played by a naïve

participant, an experimenter, or a confederate.

In their first experiment, a naïve participant and a confederate took turns to

describe picture cards to an experimenter. The material and setup were based on the

picture-description and -matching game discussed under Section 2.4.2, but the

experimenter was added as the common addressee of the naïve participant and the

confederate. Results indicate that naïve participants displayed syntactic alignment

behavior as a result of side-participation in the immediately previous dialog between

the confederate and the experimenter. That is to say, immediately after a naïve

participant heard the confederate use a PO or DO structure to describe a picture to the

experimenter, s/he was likely to repeat that same structure in her/his description to the

experimenter. In the second experiment, the authors compared the degree of syntactic

alignment observed in naïve participants engaged in dyadic speaker-addressee

interactions against that found in naïve participants who alternately played the roles of

side-participant and speaker in triadic groups. Not surprisingly, the degree of

alignment due to previous participation in triadic groups was smaller than that found

in dyadic speaker-addressee groups.

The third experiment had two confederates plus a naïve participant interact

with each other; the findings were similar to those from the second experiment.

Specifically, naïve participants were more likely to align with a previous speaker

(confederate A) when that speaker had directly addressed them, than when that

speaker had addressed a third person (confederate B). The authors suggest that

alignment could be egocentric because naïve participants’ behavior did not seem to be

affected by whether their current addressee had been the speaker or the addressee of

the immediately previous utterance with which they were aligning. Finally, in the

fourth experiment, the authors further demonstrated that when naïve participants had

been the addressee of the immediately previous utterance, their tendency towards

alignment was unaffected by whether the current addressee had previously been the

speaker or the side-participant. Taken together, results from the four experiments

indicate that 1) syntactic alignment is so pervasive that it extends beyond dialogs to

polylogues, 2) syntactic alignment can be elicited as a result of previous side-

participation in conversation, and 3) syntactic alignment may vary according to role

changes in conversation (Branigan, et al., 2007).

With these ground-breaking experimental findings reported, it would be

interesting and logical to see whether or not side-participation leads to alignment of

other linguistic features such as semantics and style in polylogues that take place in

a similar context as well as in other conditions (e.g., text-based CMC and HCI), and

whether or not such tendency toward alignment can be manifested after the

conversation ends. Additionally, given that the degree of priming-induced syntactic

alignment as a result of side-participation in polylogues is likely to be lower than that

found in dyadic conversations, it would be worth exploring if the alignment of other

less important linguistic features as a result of side-participation would become

undetectable. Answers to these questions will be helpful for us to better understand the

causes and motivations, if any, that underlie linguistic alignment.

CHAPTER 5

Linguistic Alignment Explicated

Why do people align with each other in dialog? A consensus among most

researchers is that linguistic alignment is spontaneous, although researchers have in general agreed that it involves both an unmediated or automatic component and a mediated or intentional component. For the former component, automaticity implies "action

engaging neither the mind nor the emotions,” and connotes “a sense of predictability”;

for the latter, intentionality emphasizes “an awareness of an end to be achieved”

(Merriam-Webster’s Collegiate Dictionary, v. 2.5). It is imperative to differentiate

“intentionality” from “deliberateness,” which denotes careful and thorough

consideration in addition to an awareness of the consequences.

5.1 Automaticity

There is a large amount of evidence supporting the existence and

fundamentality of the automatic component of linguistic alignment. For example,

Harry Whitaker reported that aphasic patients who lost their spontaneous language-

producing capacity (due to damage to the language-producing area of the brain) were

still able to repeat verbatim or repeat with simple transformations such as changes in

tense (as cited in Tannen, 1989, p. 87). This account suggests that repetition is

performed in a different part of the brain that is devoted to automatic functioning. Research on non-verbal behaviors in interaction also suggests that the tendency toward

alignment is an automatic and nonconscious process (Chartrand & Bargh, 1999;

Chartrand, et al., 2005; Lakin & Chartrand, 2003; Lakin, et al., 2003; Pickering &

Garrod, 2004).

5.1.1 Priming: Mechanistic Theories

There is a vast body of empirical findings about linguistic priming of a

variety of features within and between speakers (see Pickering & Garrod, 2004 for an

extensive review). For example, even young children adopted each other’s spatial

descriptions in a maze game without in-depth understanding (Garrod & Clark, 1993);

syntactic features can also be primed across languages (Hartsuiker, et al., 2004;

Loebell & Bock, 2003).

In other communication contexts such as HCI, priming has been repeatedly

proven to be a powerful driver of linguistic alignment at many different levels (see

Branigan, Pickering, Pearson, & McLean, 2010 for an extensive review).

5.1.2 Automatic Social Responses: “Over-Learned” Social Behaviors

Various empirical findings suggest that some social behaviors can become

thoughtless and automatic processes that do not disappear in human-computer

interaction (Reeves & Nass, 1996). For example, in HCI, participants displayed

politeness and reciprocity that were normative to communication between humans

(Nass & Moon, 2000). It is highly plausible that some portions of linguistic alignment

are artifacts of such “over-learned social behaviors” that do not require effortful

decision-making processes.

5.2 Intentionality

At a very high level, people may choose to adapt their linguistic features for operational/interactional goals and for affective gain, although a clear line between the two is hard to draw.

5.2.1 For Communication Efficiency

The paramount short-term interactional goal of human conversation is to

achieve mutual understanding. Some well-known psychologists theorized that there is

an essential "grounding process" in conversation during which shared meanings are

constructed and accumulated; interlocutors “systematically” seek and provide

evidence about what has been said and understood (Brennan, 1990; Clark & Brennan,

1991; Clark & Schaefer, 1989; Clark & Wilkes-Gibbs, 1986). Linguistic alignment is

considered a “grounding technique” for efficient communication (Clark & Brennan,

1991). Also, they propose the principle of the “least collaborative effort,” which

suggests that each interlocutor in dialog often puts in extra effort to minimize the total

joint effort required for efficient conversation (Clark & Wilkes-Gibbs, 1986). These

proposals are based on analyses of many empirical findings, and words like “technique”

and “systematically” make it clear that these researchers emphasize the intentionality

of linguistic alignment.

There is also strong evidence to support the idea that alignment is mediated by a speaker's beliefs about the addressee. In a process of audience design, speakers tend to rely on their beliefs about the addressee to choose appropriate expressions (A. Bell, 1984).

Some researchers even found that participants varied the degree to which they

lexically aligned with a computer based on their expectations and beliefs about the

computer (Pearson, Hu, Branigan, Pickering, & Nass, 2006a).

5.2.2 For Social Affect

Some alignment processes appear to have a socio-psychological basis. For

example, Natale (1975) found that interviewees in general tended to match the vocal

intensity of the interviewer, and that the social desirability of an individual helped

predict the degree to which he/she aligned with his/her interlocutor in terms of vocal

intensity. In other words, people with high needs for social approval would align more

than those with low needs (Natale, 1975). In Giles' Accommodation Theory (Giles,

Coupland, & Coupland, 1991), the researchers theorized that individuals may use

alignment as a strategy to create, maintain, or decrease the social distance between

themselves and their communicative targets.

Researchers found that intentional mimicry of non-verbal behaviors could lead

to better persuasiveness and more positive trait ratings of the mimicking

partner/interactant whether or not the person being mimicked is aware of the mimicry

(Chartrand & Bargh, 1999). In HCI research, researchers showed that the same phenomenon could be observed in an immersive virtual environment between naïve

participants and a virtual agent that mimicked them (Bailenson & Yee, 2005). Taken

together, it could be argued that there is an evolutionary basis for intentional mimicry

or alignment to gain social affect.

Many other social factors could potentially play a role in determining our

language behaviors. The next chapter offers a review of the effect of competition and

cooperation contexts on performance and attitudinal changes. I predict that the

different social affect associated with these two contexts will translate into varied

linguistic alignment behaviors. Study I (Chapter 8) and Study II (Chapter 9) tested this

hypothesis.

5.4 Automaticity and Intentionality: Working Together?

Although researchers in favor of one component do not explicitly deny the existence and importance of the other, only a few describe how these two components or processes might co-exist and work side by side (Branigan, et al., 2010). Some researchers have obtained results that suggest the co-existence of the automatic and intentional components in linguistic alignment, but they did not take the next step to theorize or to investigate further.

For example, in a text-based CMC context, an experimental study by Scissors,

Gill, and Gergle (2008) found a strong correlation between lexical alignment and the

establishment of trust. In this study, paired participants played a social dilemma investment game in blocks of five rounds and chatted via IM after each block. Results

show that within chat sessions high-trusting pairs were more likely than low-trusting

pairs to repeat each other's words and IM abbreviations, suggesting that the former

group displayed more alignment behaviors to establish and especially maintain the

trust. At the same time, low-trust pairs displayed more lexical alignment than did high-

trust pairs across chat sessions, but the alignment was mostly common forms of social

responses such as “yeah” and “ok.” The authors proposed that low-trusting pairs might

have to put in a considerable amount of cognitive effort to disguise their low level of trust.

Consequently, these low-trusting pairs had few cognitive resources left to attune to

their partners’ content-related words and IM abbreviations, and could only repeat less

effortful common social responses (Scissors, et al., 2008). Based on earlier discussions

in this chapter, the first part of the results could be attributable to intentionality, while

the second part was more a product of the automatic component of linguistic

alignment. Taken together, it appears that the intentional and automatic components

co-exist in shaping alignment in dialogs, and they do not work independently of each

other. More specifically, the effect of the automatic component is more likely to be salient when cognitive resources are limited; when resources are available, the intentional cultivation and maintenance of alignment for social reasons will likely outweigh automatic alignment.

My extensive literature review, however, found no experimental study that

purposefully examined how the automatic and intentional components may work

together in shaping linguistic alignment behaviors. With this in mind and the research

ideas stated toward the end of the previous chapter, I designed two experiments to

examine: 1) whether or not a speaker’s writing style and use of a critical noun in

referring expressions would be affected by previous side-participation in CMC and

HCI polylogues, and 2) how the automatic and the intentional processes of linguistic

alignment work together in conversation. The automatic factor is operationalized as

priming of two markedly different linguistic styles (see Chapters 8 and 9 for more

details); the intentional factor is operationalized using a pair of common social

contexts: competition and cooperation, which are discussed in the next chapter.

CHAPTER 6

Competition and Cooperation as Social Contexts for

Alignment

Researchers from a wide range of disciplines have been studying competition

and cooperation for many years. In particular, experimental approaches to exploring

these two social contexts have been popular among social psychologists, educators,

and applied behavioral scientists (e.g., Johnson, Maruyama, Johnson, Nelson, & Skon,

1981; Okebukola, 1985; Tauer & Harackiewicz, 2004). This chapter offers an

overview of the conceptualization of competition and cooperation, biological

differences between the two modes of social cognition, and the effects of these two

contexts on both behavioral measures of performance and attitudinal measures of task

enjoyment. Finally, I predict that competition and cooperation will moderate linguistic

alignment found in conversations.

6.1 Conceptualization and Experimental Operationalization

There are four types of common goal structures studied by researchers from a

variety of fields: 1) cooperative, 2) cooperative with intergroup competition, 3)

competitive, and 4) individualistic. Among these four structures, cooperation with

intergroup competition was sometimes referred to simply as “intergroup competition”

(e.g., Tauer & Harackiewicz, 2004); it was also how "cooperation" was often operationalized in some studies (Johnson, et al., 1981). The definition and conceptualization of these conditions come primarily from two opposing angles: intrinsic motivation and extrinsic

motivation. The former angle is based on the supposition that there is a state of tension

within an individual that motivates actions leading to the accomplishment of desired

outcomes (Lewin, 1935); the latter is based on learning theory and suggests that the

reward distribution drives individuals to act competitively, cooperatively, or

individualistically (Kelley & Thibaut, 1969).

From the intrinsic motivation perspective, Deutsch (1949) conceptualized that

a competitive social context is where the attainment of individuals' goals is negatively correlated: An individual's gain means the loss of the others. A cooperative social

context, conversely, is where the success of all involved individuals is positively

linked: An individual can only be successful when the others achieve their goals. An

individualistic social context, finally, is where no correlation exists between the

statuses of individuals’ goal achievements: The success or failure of an individual has

no impact on the success or failure of any other individuals.

On the other hand, in Kelley and Thibaut’s (1969) definition, a competitive

social context is where a high achiever gets the maximum reward while the others receive the minimum reward. A cooperative social context is where each individual's reward is directly related to the quality of the group's collective achievements. Finally, an individualistic context is where each individual is rewarded on the basis of his or her own achievement; the achievements of the other individuals are irrelevant.
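
To make the extrinsic-motivation definitions concrete, the three goal structures can be written as simple reward rules. The scores and reward amounts below are invented numbers that merely instantiate Kelley and Thibaut's (1969) distinctions; they are not taken from any study.

    # Illustrative reward rules for the three goal structures. All numbers
    # are made up for the example.
    def competitive(scores, prize=10):
        """Top scorer takes the whole prize; everyone else gets nothing."""
        best = max(scores, key=scores.get)
        return {p: (prize if p == best else 0) for p in scores}

    def cooperative(scores, rate=0.5):
        """Each member's reward depends on the group's joint output."""
        group_total = sum(scores.values())
        return {p: rate * group_total for p in scores}

    def individualistic(scores, rate=1.0):
        """Each member is rewarded for his or her own output alone."""
        return {p: rate * s for p, s in scores.items()}

    scores = {"A": 7, "B": 5, "C": 3}
    print(competitive(scores))      # {'A': 10, 'B': 0, 'C': 0}
    print(cooperative(scores))      # {'A': 7.5, 'B': 7.5, 'C': 7.5}
    print(individualistic(scores))  # {'A': 7.0, 'B': 5.0, 'C': 3.0}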

In general, cooperation is defined as an intragroup process or end result, but in much of the research on competition and cooperation the operationalization of cooperation included intergroup competition. This approach is utilized in this

dissertation as well, so in the following chapters, “cooperation” refers to “cooperation

with intergroup competition.” More details are available in Chapters 8 and 9.

6.2 Effects of Competition and Cooperation

In an attempt to resolve some inconsistent conclusions and controversies found

in research work about the relative effects of the above-mentioned four social contexts on achievement and productivity, Johnson and colleagues (1981) conducted a large-scale meta-analysis using three different methods to re-examine the results from 122

previous studies. They concluded that, overall, in terms of promoting achievement and

productivity,

1. Cooperation (including cooperation with intergroup competition) is

better than interpersonal competition. This is true for all subject areas

(e.g., reading, math, social studies), all age groups, and most tasks

covered by the reviewed studies.

2. Cooperation (including cooperation with intergroup competition) is

better than individualistic endeavors. This holds true for all subject

areas and all age groups.

3. Cooperation without intergroup competition is better than cooperation

with intergroup competition, but the conclusion is based on a small

number of studies that directly compared these two conditions.

4. No significant difference exists between interpersonal competition and

individualistic endeavors.

Based on findings from this meta-analysis, the authors suggested that educators

in the United States’ public school system adopt a larger number of cooperative

learning procedures for the promotion of higher student achievement, and that

industrial organizations may boost productivity by utilizing group-based reward

systems (Johnson, et al., 1981).

However, some more recent findings contradicted Johnson and colleagues’

(1981) tentative conclusion about cooperation with intergroup competition. For

example, in a field study some researchers found evidence that in cognitive tasks,

intergroup competition can actually improve performance in work environments (Erev,

Bornstein, & Galili, 1993). The same trend holds for group efficacy and productivity

(Mulvey & Ribbens, 1999), and for academic performance (e.g., Okebukola, 1985).

More recently, in a series of field experiments, Tauer & Harackiewicz (2004)

examined the effects of competition and cooperation (with intergroup competition) on

intrinsic motivation and performance in shooting a basketball. They repeatedly found

that intergroup competition led to higher levels of task enjoyment and performance

than pure cooperation (without intergroup competition) and pure competition in a task

with high independence. The authors also found no significant difference between

pure cooperation and pure competition in terms of task enjoyment or performance.

Additionally, they found that pure cooperation led to more interpersonal enthusiasm in approaching the task, pure competition made participants value competence and

perceive greater challenge in task completion, and cooperation with intergroup

competition was capable of doing both (Tauer & Harackiewicz, 2004).

Based on the above review, I decided to operationalize cooperation as

cooperation with intergroup competition to compare to interpersonal competition in

two dissertation studies (see Chapters 8 and 9).

6.3 Biological Bases for Competition and Cooperation

Based on classical evolutionary theory, the two basic modes of social cognition,

competition and cooperation, evolved from the intricate and dynamic interaction

between two conflicting factors. On the one hand, competition between group

members affords individual members selective advantages in choosing mates and

acquiring food. On the other hand, cooperation among individual group members

often means increased survival fitness with better mate choice, more dependable food

supply, and enhanced protection against predators. As a result, it is logical to reason

that humans have since developed different mechanisms for competition and

cooperation, as well as for many other egocentric and prosocial behaviors.

Using functional magnetic resonance imaging (fMRI) technology, Decety and

colleagues (2004) found that both competition and cooperation contexts are related to

the activation of a common frontoparietal network subserving executive functions and

the anterior insula involved in autonomic arousal. Most importantly, there are distinct

neural regions activated for competition and for cooperation. Specifically, competition

is associated with an increase in medial prefrontal activity, while cooperation is

associated with right orbitofrontal involvement. Considering their findings plus

evidence from evolutionary psychology and developmental psychology, the authors

argue that both competition and cooperation necessitate monitoring of self and other,

but competition requires additional cognitive resources, and cooperation is a socially

rewarding process (Decety, Jackson, Sommerville, Chaminade, & Meltzoff, 2004).

6.4 Competition, Cooperation, and Linguistic Alignment

My extensive literature review for this dissertation did not find any study that

explores the effects of competition and cooperation on linguistic alignment. However,

a few studies offered hints that these two social contexts could potentially moderate

linguistic alignment in conversation.

In an early experimental study of the effects of competition and cooperation on

small groups, Grossack (1954) found that participants in the cooperative condition

were more likely than those in the competitive condition to use words that suggested

group cohesiveness (e.g., “group,” “we,” “us”). They communicated with more words

than did their competitive counterparts. Moreover, cooperative-condition participants

were more likely than competitive-condition participants to influence one another and

in the meantime accept pressure toward cohesiveness (Grossack, 1954).

In the same review covered under Section 6.2, the authors noted that the type of task could be one of many mediating factors. There were in fact tasks that favored or

did not favor cooperation (Johnson, et al., 1981). In general, tasks that require division

of labor can be completed most effectively under cooperative conditions (Deutsch,

1949). In other words, if a task involves interdependence, cooperation should help

individuals with their performance than should competition. On the other hand, for

tasks that are high in independence and low in interdependence, cooperation may not

have any advantage over competition (Stanne, Johnson, & Johnson, 1999).

Conversation is a task that requires joint effort from interlocutors and involves high interdependence, but this does not mean that conversation can only happen in a

cooperative context. In a competitive context such as negotiation, coordination on

linguistic choices still occurs, yet this coordination may not be comparable to that in a

cooperative context such as a discussion about where to eat between two friends. All

things being equal, the change of social context could potentially sway the level of

alignment in one direction or another.

CHAPTER 7

Overview of Studies

Adaptive language behaviors could be manifested in both similarity and

dissimilarity. Social factors such as the desire to be liked could increase the level of

alignment, but an estranged relationship between interlocutors could lead to

intentional dissimilarity.

Priming has been proven to be ubiquitous and powerful: interlocutors exhibited

syntactic alignment behaviors due to previous side-participation in conversation

(Branigan, et al., 2007). But based on our knowledge about social contexts such as

competition and cooperation, is the rather automatic process subject to the moderation

of such social factors? If so, would competition lead to dissimilarity or reduced alignment, and would cooperation effect a higher degree of alignment? Would communication type (HCI vs. CMC) play a moderating role? Studies I and II in this dissertation

were designed as an attempt to answer these questions.

To begin, Study I examined the effects of three factors on linguistic alignment

side by side: priming, social context, and communication type. The alignment was

focused on linguistic style used in writing (especially typing), so the priming was

operationalized as simple vs. complex sentence style. As mentioned in the previous

chapter, I operationalized the social context as competition vs. cooperation (with

intergroup competition). Finally, two communication types were examined: HCI and

CMC. This lab experiment was a modified online text-based version of the 20

Questions game with two questioners (or learners) and one answerer (or teacher),

which served as a perfect context to examine side-participation in a triadic

conversation. The underlying premise was that if interlocutors display alignment of

stylistic features as a result of side-participating in a text-based polylogue, they should exhibit a higher degree of alignment at many other linguistic levels in face-to-face dialogs. Similarly, if competition and cooperation could play a role in moderating alignment in this experiment, other social relations and contexts should also be able to manifest their power to influence adaptive language behaviors in natural

conversations.

Study II was a follow-up of Study I with some methodological refinements,

but the key difference was that the two questioners/learners no longer shared the same learning subject. This radical change was meant to weaken the

presence of semantic priming (from sharing a conversation subject) and further test the

potency of stylistic priming. Ideally, the weakening of semantic priming would help

make the effect of competition/cooperation and HCI/CMC on alignment relatively

more detectable.

Table 1 provides a side-by-side comparison of the two studies.

Methodologically, Study II was largely automated and required significantly less manpower to run and manage experiment sessions. Key differences between the two studies are highlighted in the table. More details are presented in Chapter 9.

Table 1. Side-by-Side Comparison of Study I (N = 80) and Study II (N = 80)

Experiment Design
  Study I: 2 x 2 x 2 between-participants factorial
  Study II: 2 x 2 x 2 between-participants factorial

Context of Task Performance
  Study I: Learning through asking
  Study II: Learning through asking

Experiment Setup, Apparatus, and Modality
  Study I:
    Venue: A small lab
    Hardware: One laptop computer
    Operations: One session at a time, run by an experimenter, a confederate, and a wizard
    Instructions: Verbal by experimenter & printout on paper
    Q&A Session: Yahoo! Messenger on computer
    Quiz: Paper-and-pencil
    Questionnaire: Paper-based
  Study II:
    Venue: A large study area in a library
    Hardware: Multiple desktop computers or participants' laptop computers
    Operations: Multiple sessions at a time, run by a single experimenter
    Instructions: Text in browser displayed for two minutes
    Q&A Session: Browser-based chat simulation
    Quiz: Browser-based
    Questionnaire: Browser-based

Q&A Session Subject(s) and Clue(s)
  Study I: One shared subject
    Participant: "Mosua" people
    Clue: "matriarchal" people
    Other learner: "Mosua" people
  Study II: Two unshared subjects
    Participant: a small country's national sport called "Zoonkaba"
    Clue: "team" sport
    Other learner: "Nosua" people*

"Pedagogical Agent"
  Study I: Played by a wizard; used red, green, and yellow dots to represent "Yes," "No," and "Unknown/Wrong question"; offered answers based on actual research findings
  Study II: Played by a computer program; used "Y," "N," and "U" to represent "Yes," "No," and "Unknown/Wrong question"; offered randomized answers with exactly the same percentages of the three answer types as used for canned questions

"Other Learner"
  Study I: Played by a confederate
  Study II: Played by a computer program

Experiment Tasks
  Study I:
    Q&A Session: Ask 12 "yes-or-no" questions
    Quiz: Compile a top-15 list of facts learned for the only subject
    Questionnaire: About learning experience, "teaching agent," "the other learner," and self
  Study II:
    Q&A Session: Ask 20 "yes-or-no" questions
    Quiz: Compile a top-10 list of facts learned for each of the two subjects
    Questionnaire: About learning experience, "teaching agent," "the other learner," and self

Wait Times
  Study I:
    Before each answer from "pedagogical agent": 6 seconds
    Before each question from "other learner": 15 seconds (simple-prime) or 20 seconds (complex-prime)
  Study II:
    Before each answer from "pedagogical agent": 4 seconds
    Before each question from "other learner": 15 seconds (simple-prime) or 20 seconds (complex-prime)

Independent Variables
  Study I:
    Prime style: Simple vs. complex questions from "other learner"
    Work relationship between participant and "other learner": Competitive vs. cooperative
    Communication type between participant and "other learner": HCI vs. CMC
  Study II:
    Prime style: Simple vs. complex questions from "other learner"
    Work relationship between participant and "other learner": Competitive vs. cooperative
    Communication type between participant and "other learner": HCI vs. CMC

Dependent Variables
  Study I:
    Behavioral, from Q&A: average question length, overall question readability, use of critical noun in questions, use of proper capitalization
    Behavioral, from one (1) quiz list: average fact length, overall fact readability, use of critical noun in facts, use of proper capitalization
    Attitudinal: Perception of learning experience, "pedagogical agent," "other learner," and self
  Study II:
    Behavioral, from Q&A: average question length, overall question readability, use of critical noun in questions, use of proper capitalization
    Behavioral, from two (2) quiz lists: average fact length, overall fact readability, frequency of critical noun in facts, use of proper capitalization, number of facts (for "Nosua" only)
    Attitudinal: Perception of learning experience, "pedagogical agent," "other learner," and self

* "Mosua" was changed to "Nosua" to further reduce possible familiarity with the actual name "Mosuo," and to discourage cheating via online search.


CHAPTER 8

Study I

Alignment as a Result of Side-Participation:

Shared Learning Subject

Based on the literature review and discussions in Chapters 2 through 6, Study I

was designed to answer the following questions.

First, given the evidence showing that interlocutors are subject to syntactic

priming as side-participants in a polylogue (Branigan et al., 2007), will the priming of

linguistic style lead to alignment behaviors in the same type of conversation setting?

Instead of using a rather tedious “game” in which participants could feel trapped, I

propose a more natural and motivating context: An interactive learning task via instant

messaging modeled after the 20 Questions game. The added external validity might

increase the difficulty in detecting and measuring alignment. But if priming is

ubiquitous and persistent, I expect to have rich data to answer this first question.

Second, as reviewed and discussed in Chapter 6, I predict that cooperation and

competition could moderate the degree to which interlocutors align with each

other. However, does cooperation necessarily lead to higher degrees of alignment even

if an interlocutor’s partner adopts rather unusual linguistic features? If certain

language behaviors are deemed superior in quality or class, will a competitor be more

likely than a cooperator to mimic such behaviors? Answers to these questions will

provide support for the existence and function of the intentional component within


linguistic alignment, and add more insights into the study of competition and

cooperation.

Third, people are known to apply social rules when interacting with computers

(Nass & Brave, 2005; Reeves & Nass, 1996), but we do also treat computers

differently depending on our expectations of and beliefs about them (Pearson, et al.,

2006a). If competition and cooperation do lead to different alignment behaviors, will

the effect be manifested differently in HCI vs. CMC? There are certain things we do

and do not expect from computers. So do we want to cooperate with a computer that

talks like us, or compete with one whose language ability seems superior to ours? Answering these questions could yield important implications for the development of artificial intelligence and natural language systems.

Finally, given the research findings about stylistic adaptation and its social-

psychological reasons, is it possible that the alignment of certain linguistic features is more likely than that of others to be under the influence of social factors? Also, priming can be operationalized and achieved at many different linguistic levels. Do the effects of priming different linguistic features vary in terms of how long they last? With the complexity of human language behaviors in mind, I believe

experimental studies are more suitable for answering questions like these. The answers should deepen our understanding of linguistic alignment and human

language behaviors as a whole.


8.1 Method

This lab experiment was advertised as a “learning through questioning” study

to prospective participants. Each naïve participant was given a learning task in which

s/he asked questions by typing in an instant messaging (IM) conference chat room. As

in the Twenty Questions game, participants were instructed to ask questions that could

be answered with a simple “Yes” or “No.” Afterwards, participants took a recall quiz

during which they wrote down what they had learned. They performed all tasks in a

small room equipped with a desk, a chair, and a laptop computer sitting on the desk.

During each experiment session, in the IM conference chat room an “other

learner” (played by a confederate) and a naïve participant took turns to ask questions

and get answers from a “natural-language-based pedagogical computer agent” (played

by a wizard). That is, the confederate and the participant took turns being the speaker

(the wizard being the addressee) and the side-participant. More specifically, the

participant was a side-participant when s/he waited for her/his turns to arrive, and

became a speaker when s/he asked questions. Although participants never interacted

with “the other learner,” they asked questions on the same shared subject: a

matriarchal people called “Mosua.” More details of this subject are available in this

chapter under Section 8.3.

For the IM conference chat session, Yahoo! Messenger was chosen over other

popular IM applications (e.g., MSN and AOL) due to its relatively plain look, its

flexibility in hiding its toolbar, and the fact that its emoticons can be easily replaced

with customized ones. In fact, the user interface of Yahoo! Messenger could be


configured to have a generic look, especially in full-screen mode with the

toolbar hidden (see Figure 2). Furthermore, when one types in a conference chat room,

his/her status (i.e., typing) is not revealed to others in the status bar of the chat window.

This was an important feature to have because it allowed the confederate to copy and

paste canned questions from a text file into the chat window (instead of typing the

questions one letter at a time). Control of response time was thus made possible.

Figure 2. Screenshot showing chat window of Yahoo! Messenger.

The experiment had a 2 x 2 x 2 between-participants factorial design. The first

factor was the style of questions used by “the other learner”: simple vs. complex.

Those “canned” questions were intended to prime participants in this study. In the

simple-prime condition, all questions raised by the “other learner” were short, simple,


and casual. Meanwhile, in the complex-prime condition, the “other learner” inquired

about the same areas by asking long, complex, and formal questions (see Appendix I:

Canned Questions for Study I). This first independent variable is referred to as the

“prime style” factor hereafter.

The second factor was the work relationship between the naïve participant and

“the other learner”: competitive vs. cooperative. Naïve participants were told that there

would be another learner in the conference chat room, and either they would be

competing with the “other learner” or working with the “other learner” as a group to

compete with other pairs. More specifically, in the competitive condition, the naïve

participant was told that his/her score from the quiz would be compared with that of

“the other learner” to see who did better; in the cooperative condition, the naïve

participant was told that his/her score and “the other learner’s” score would be

combined into one and compared against those of other groups/pairs. This second

independent variable is referred to as the “work relationship” factor hereafter.

The third factor was who “the other learner” was, or the type of

interaction between the participant and “the other learner”: HCI vs. CMC. Naïve

participants were told that either “the other learner” was “another participant” or “a

natural-language-based learning computer agent.” This third independent variable is

hereafter referred to as the “communication type” factor.

To make the learning task more believable, participants were given a

seemingly random IM chat room handle of “student0026” or “student0028,” while “the other learner” of CMC participants was called “student0025” or “student0027,”


respectively. The two sets of chat room handles were intended to reduce the chance of

data contamination caused by any possible discussions about the experiment between

outgoing participants and incoming participants. In the meantime, the teaching agent

was literally called “pedagogical_agent” in the conference chat room, while “the other

learner” of HCI participants was labeled “learning_agent.”

Because the focus of the study was on the influence of an addresser’s (i.e., “the other learner’s”) language use on that of a side-participant (i.e., a naïve participant), the utterances from the teaching agent (i.e., the wizard) needed to have little or no impact on the side-participant. Therefore, throughout each entire chat

session, the wizard responded to questions using three colored dots: green for “Yes,”

red for “No,” and yellow for “No Answer” or “Wrong Question.” These dots were

made with Adobe Photoshop and had exactly the same dimensions as Yahoo!

Messenger emoticons. The green, red, and yellow dots were used to replace three of

the basic emoticons: “smiling,” “sad,” and “winking,” respectively. Doing so made it

easy for the wizard to offer prompt responses by typing three combinations: colon +

right parenthesis, colon + left parenthesis, and semicolon + right parenthesis.

8.2 Participants

Eighty (80) undergraduate students at Stanford University, all native speakers

of American English, participated in the study and received research credit for an

introductory communication class. Each participant was randomly assigned to one of

the eight (8) conditions; gender was balanced so that there were equal numbers of

male and female participants in all conditions.


8.3 Materials

Participants were instructed to ask questions so as to learn some important

facts about the family structure and lifestyle of a fictitious matriarchal “Mosua” tribe

living on an island near Australia. In reality, these “facts” were about an ethnic

minority group called “Mosuo” in southwestern China. A total of 21 facts were

extracted from anthropological descriptions and research findings posted on several

research-oriented websites (Forney, 2002; Mosuo, n.d.; Vonier, n.d.). These facts were

then transformed into yes-or-no type of questions to be used by “the other learner”.

Each question had a simple version and a complex version. In general, the complex

version 1) had many words drawn from an advanced vocabulary, 2) always adopted

proper capitalization, and 3) used referring expressions (REs) containing the word “Mosua” as a noun or adjective (on average, in 66.7% of the questions asked by “the other

learner”). The simple version, on the contrary, 1) had words drawn from a basic

vocabulary, 2) used no capitalization, and 3) never used the word “Mosua.” For

example, to inquire about how children are named, the “other learner” would ask a

question in one of the following two ways:

Complex version: “Do offspring of a Mosua woman bear her surname?”

Simple version: “do kids use their mother’s last name?”

The 21 questions were grouped into two general categories: family structure

(Category I) and lifestyle (Category II). The former category contained 14 questions

while the latter category had seven (7). Please refer to Appendix I for a complete

listing of these prime questions.


8.4 Procedure

Upon arrival, a female experimenter greeted the participant and got his/her

consent in writing. The participant was then seated in front of a desk on which there

was a laptop computer. There the experimenter gave verbal instructions to the

participant before leaving a sheet of paper containing the same instructions for the

participant to review. The participant learned that there would be three major sessions:

a Q&A session in a group chat room, a paper-and-pencil quiz, and a paper-based

questionnaire.

Prior to the arrival of the participant, a Yahoo! Messenger conference chat

room had been opened on the laptop computer, and the teaching agent and two

learners (i.e., the confederate and the participant) had already been logged in. By the

time a participant had finished reading the instruction sheet and turned his/her

attention to the conference chat room, s/he would see that the teaching agent had

instructed that 1) “the other learner”/confederate should ask the first question, and 2)

the confederate and the participant should indicate their readiness to begin the Q&A

session by typing the word “ready.”

In the simple-prime condition, the confederate would send out the first

question 15 seconds after the participant typed “ready” in the chat window. After that,

the confederate would send out the next question 15 seconds after the wizard had

answered a preceding question from the participant. The wizard would respond to each

question six (6) seconds later. In the complex-prime condition, the wait would

increase to 20 seconds for the confederate and 8 seconds for the wizard. This


difference in timing was necessary to ensure that the production and processing of

complex questions was believable.

A typical Q&A session (simple-prime condition) went like this:

pedagogical_agent: Welcome to the study. You should take turns to ask

questions. Student0025, please ask the first question. When you are ready,

please type “Ready.”

student0025: ready

student0026: ready

(15 seconds later)

student0025: are there a dad, a mom, and kids in a family?

(6 seconds later)

pedagogical_agent: (red dot for “no”)

student0026: is there a chosen leader of society?

(6 seconds later)

pedagogical_agent: (green dot for “yes”)

(15 seconds later)

student0025: do children live with their mom?

Using the list of canned questions (see Appendix I), “the other learner” asked a

total of 12 questions, of which eight (8) were from Category I and four (4) from

Category II. In some cases where the participant asked questions identical or similar to

the ones the confederate (i.e., “the other learner”) planned to ask next, the confederate


had to skip them and use backup questions from the list. Fortunately, the occurrences

were few, so the confederate never had to exhaust all canned questions on the list.

The confederate and the participant took turns, each asking 12

questions. Six seconds after the wizard responded to the participant’s last question, the

wizard announced the end of the Q&A session. The experimenter re-entered the lab to

close the lid of the laptop and give the participant a blank exam book to write down, in

descending order of importance, a top-15 list of facts learned during the Q&A session.

The participant worked in the lab alone, and there was no time limit to finish this task.

After the participant finished compiling the top-15 list, s/he went out of the lab

to turn in the exam book and get a paper-based questionnaire, with which s/he would

evaluate the learning experience, the “teaching agent,” “the other learner,” and the

participant herself/himself. Finally, upon completing the questionnaire, the participant

was debriefed, thanked, and dismissed.

8.5 Measures

8.5.1 Behavioral Measures

Participants’ typed questions were captured in a chat log and later transferred

to a Microsoft Word document for analysis. Handwritten lists of learned facts by

participants were entered into the same document for analysis. The questions and facts

were aggregated into two paragraphs and analyzed separately in Microsoft Word.

Average question length is the average number of words participants used in

each question they asked during the Q & A session.


Overall question vocabulary simplicity score is the Flesch Reading Ease score

(Flesch, 1948) of the aggregation of each participant’s 12 questions, controlled for average question length. The Flesch Reading Ease score was obtained by running

“Spelling and Grammar…” on each aggregated paragraph in Microsoft Word. The resulting summary included the Flesch Reading Ease score, the number of words per sentence, and other statistics for the selected paragraph.

The Flesch Reading Ease score takes into account the average number of syllables per

word and average number of words per sentence, and is considered a reliable and valid

tool for measuring the readability of Word documents (Paasche-Orlow, Taylor, &

Brancati, 2003; Stockmeyer, 2009). Microsoft Word offers Flesch Reading Ease score

on a 0-100 scale. Higher scores indicate better readability. For example, Reader’s

Digest scores about 65, while the score for the Harvard Law Review is in the low 30s

(Flesch-Kincaid, 2006).
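For illustration, the sketch below shows how a Flesch Reading Ease score can be computed directly. This is a minimal sketch, not the tooling the study used (the study relied on Microsoft Word’s built-in readability statistics), and the syllable counter is a rough heuristic rather than Word’s exact algorithm.

    # A minimal sketch, assuming plain ASCII text; the study itself used Microsoft Word's
    # readability statistics, and this heuristic syllable counter only approximates them.
    import re

    def count_syllables(word):
        # Rough heuristic: one syllable per group of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

    # The simple prime scores as far "easier" to read than the complex prime:
    print(round(flesch_reading_ease("do kids use their mother's last name?"), 1))
    print(round(flesch_reading_ease("Do offspring of a Mosua woman bear her surname?"), 1))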

As for the canned questions used by the confederate, participants did not

always receive identical sets of questions, but across all sessions the average prime question length was 6.29 words (SD = 0.26) for the simple primes and 11.82 words (SD = 0.65) for the complex ones; the simple primes were significantly shorter than the complex ones, t(78) = -50.25, p < .001. The average Flesch Reading Ease score was 90.79 (SD = 2.23) for the simple primes and 42.89 (SD = 1.75) for the complex ones; the difference was highly significant, t(78) = 106.99, p < .001.

Use of critical noun in questions is a binary variable. A value of 1 (true) was assigned when a participant used the critical noun “Mosua” or its variations (e.g., “Mosuan”) two times or more in the 12 questions; the value was otherwise 0 (false).

Within the canned questions used for the study, “Mosua” was never used in the

simple-prime condition, but in the complex-prime condition it appeared 0.66 times per canned question on average (SD = 0.04). The extremely small standard deviation

was due to the fact that almost exactly the same 12 canned questions were used for all

40 complex-prime participants.

Use of capitalization in questions has two values: 1 and 0. A value of 1 was assigned when a participant used proper capitalization in more than half of his/her

typed questions. Otherwise, the participant got a value of 0. In reality, participants in

this study were consistent in terms of using capitalization in the IM chat room: those

who scored “1” used proper capitalization all the time; among those who scored “0,”

only two participants properly capitalized one and two questions, respectively. Partial

capitalization in a question was treated as non-capitalization. Only four questions of

all participants fell into this category.

Similar measures were taken for the “top-15” list of learned facts of each

participant. They are average fact length, overall fact vocabulary simplicity score, and

use of critical noun in facts. A few participants wrote fewer than 15 facts, but this did

not affect the calculation of average fact length or any other measures. Finally,

capitalization score was not applicable because all participants used proper

capitalization when writing with paper and pencil.
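A minimal sketch of how these behavioral measures could be coded from a participant’s typed questions is shown below. It is illustrative only: the readability scoring in the study came from Microsoft Word, and the capitalization check here is deliberately simplified to “starts with a capital letter,” which is looser than the hand coding described above.

    # A minimal sketch, assuming one string per typed question.
    import re

    def code_behavioral_measures(questions, critical_noun="Mosua"):
        avg_question_length = sum(len(q.split()) for q in questions) / len(questions)
        noun_hits = sum(len(re.findall(critical_noun, q, re.IGNORECASE)) for q in questions)
        uses_critical_noun = int(noun_hits >= 2)                      # 1 = twice or more
        capitalized = sum(1 for q in questions if q[:1].isupper())
        uses_capitalization = int(capitalized > len(questions) / 2)   # 1 = proper caps in >half
        return avg_question_length, uses_critical_noun, uses_capitalization

    print(code_behavioral_measures([
        "do kids use their mother's last name?",
        "Do offspring of a Mosua woman bear her surname?",
    ]))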


8.5.2 Attitudinal Measures

Several indices were generated from questionnaire data. Ten-point Likert

scales were used throughout the questionnaire.

Positivity of learning. The index about learning experience was comprised of

six items: 1) enjoyable, 2) rewarding, 3) creative, 4) useful, 5) pleasant, and 6)

efficient; the reliability was very high (Cronbach’s α = .83).

Easiness of learning. This index was comprised of two items: simple and easy,

and was reliable (α = .66).

Teaching agent competency. This index was comprised of two items:

competent and knowledgeable (α = .70).

Other learner ability. For perceptions of “the other learner,” this index was

comprised of four items: 1) intelligent, 2) knowledgeable, 3) successful, and 4) in

control. The index was very reliable (α = .85).

Self ability. This index was comprised of the same four items used in the other

learner ability index. The index was reliable (α = .79).

Vocabulary similarity. Toward the end of the questionnaire, participants rated how similar they were to “the other learner” in terms of their word choices.

Vocabulary similarity was a single item based on a 10-point Likert scale.

Syntax similarity. Finally, participants also rated syntax similarity between

their questions and those from the confederate. This was another single item based on a

10-point Likert scale.
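For reference, the sketch below shows how an index score and its Cronbach’s alpha can be computed from Likert items. The ratings are made up for illustration, and the study presumably computed these statistics in SPSS rather than in Python.

    # A minimal sketch with made-up ratings; rows are participants, columns are the six
    # "positivity of learning" items rated on 10-point Likert scales.
    import numpy as np

    def cronbach_alpha(items):
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    ratings = np.array([[6, 7, 5, 6, 7, 6],
                        [4, 5, 4, 5, 4, 5],
                        [8, 7, 9, 8, 8, 7]])
    print(round(cronbach_alpha(ratings), 2))   # reliability of the index
    print(ratings.mean(axis=1))                # each participant's index score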


8.6 Results

SPSS univariate ANOVAs were conducted for all but the binary measures. Because the canned questions used to prime participants were either extremely simple or extremely complex, the average lengths and overall vocabulary simplicity scores of all participants’ typed questions fell between the two extremes.
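As an illustration of this analysis, the sketch below runs one such 2 x 2 x 2 between-participants ANOVA in Python. The study itself used SPSS; the data file and column names here are hypothetical.

    # A minimal sketch, assuming a hypothetical CSV with one row per participant and
    # columns for the three factors plus a dependent measure; the study used SPSS.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("study1_behavioral.csv")  # hypothetical data file
    model = ols(
        "avg_question_length ~ C(prime_style, Sum) * C(work_relationship, Sum) * C(communication_type, Sum)",
        data=df,
    ).fit()
    print(sm.stats.anova_lm(model, typ=3))  # Type III sums of squares with sum-to-zero coding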

8.6.1 Behavioral Measures from Q&A Session

Average question length. There was a main effect of prime style on average

question length, F(1,72) = 70.23, p < .001. In general participants aligned with the

“other learner” in terms of number of words per question: Simple-prime participants

generated questions with significantly fewer words (M = 7.40, SD = 1.08) than those

from complex-prime participants (M = 9.98, SD = 1.59). There was no main effect of

work relationship, F(1,72) = 0.48, p > .49, or communication type, F(1,72) = 0.68,

p > .41. No interaction was significant: communication type by work relationship, F(1,72) = 0.20, p > .65; communication type by prime style, F(1,72) = 0.15, p > .69; work

relationship by prime style, F(1,72) = 0.68, p > .41; three-way interaction, F(1,72) =

0.28, p > .59. Table 2 shows the means and standard deviations of all conditions.


Table 2 Means and Standard Deviations for Average Question Length

Communication Type   Work Relationship   Prime Style   Mean    SD
CMC                  Competitive         Simple         7.58   0.71
CMC                  Competitive         Complex        9.62   2.28
CMC                  Cooperative         Simple         7.08   1.20
CMC                  Cooperative         Complex        9.96   1.74
HCI                  Competitive         Simple         7.68   1.22
HCI                  Competitive         Complex       10.29   1.00
HCI                  Cooperative         Simple         7.24   1.00
HCI                  Cooperative         Complex       10.03   1.22

Overall question vocabulary simplicity score. There was a main effect of prime

style, F(1,71) = 25.13, p < .001: The vocabulary used in questions generated by

simple-prime participants was significantly simpler than that used in questions

produced by complex-prime participants. There was also a main effect of work relationship, F(1,71) = 4.27, p < .05, but it was an artifact of a cross-over interaction between prime style and work relationship, F(1,71) = 8.87, p < .01. Post hoc comparisons showed that the difference lay within the simple-prime participants: cooperative-simple-prime participants had significantly higher question vocabulary simplicity scores than did competitive-simple-prime participants, t(38) = 7.99, p < .001. The other main and interaction effects were not significant: communication type, F(1,71) = 0.30, p > .58; communication type by work relationship, F(1,71) = 0.15, p > .70; communication type by prime style, F(1,71) = 2.54, p > .11; three-way interaction, F(1,71)

= 0.12, p > .73. Table 3 shows the means and standard deviations of all conditions.

Table 3 Means and Standard Deviations for Overall Question Vocabulary Simplicity Score

Communication Type   Work Relationship   Prime Style   Mean    SD
CMC                  Competitive         Simple        67.03    7.79
CMC                  Competitive         Complex       61.03    5.88
CMC                  Cooperative         Simple        76.77   11.01
CMC                  Cooperative         Complex       57.66    6.90
HCI                  Competitive         Simple        70.79    8.65
HCI                  Competitive         Complex       57.09    9.25
HCI                  Cooperative         Simple        80.63    6.30
HCI                  Cooperative         Complex       68.82   14.35

Use of critical noun in questions. An SPSS Binary Logistic Regression was

performed to assess the impact of the three independent variables on the likelihood

that participants would use the critical noun “Mosua” and its variations in their typed

questions. The full model containing all predictors was statistically significant, χ2(3, N

= 80) = 57.997, p < .001, indicating that the model was able to distinguish between

participants who used and did not use “Mosua” in their questions. The model correctly

classified 88.8% of cases. However, analysis shows that only prime style made a

statistically significant contribution to the model, recording an odds ratio of 102.936

(see Table 4 and Table 5).
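A minimal sketch of an equivalent binary logistic regression in Python is shown below; the study ran this model in SPSS 19, and the data file and column names here are hypothetical placeholders.

    # A minimal sketch, assuming a hypothetical CSV with 0/1 dummy-coded predictors;
    # the study fitted this model in SPSS 19.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("study1_behavioral.csv")             # hypothetical data file
    X = sm.add_constant(df[["communication_type", "relationship", "prime_type"]])
    y = df["used_critical_noun"]                          # 1 = used "Mosua" twice or more

    fit = sm.Logit(y, X).fit()
    print(fit.summary())
    print(np.exp(fit.params))  # exponentiated coefficients correspond to the reported odds ratios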


Table 4 Observed and Predicted Frequencies for the Use of the Critical Noun “Mosua” in Questions with the Cutoff of 0.50

                       Predicted use of critical noun
Observed               Once or none   Twice or more   % Correct
Once or none           33             2               94.3
Twice or more          7              38              84.4
Overall % correct                                     88.8

Table 5 Logistic Regression Analysis of the Use of Critical Noun in Typed Questions by SPSS 19

Predictor                                         β        SE β    Wald’s χ2   df   p       eβ (odds ratio)
Constant                                         -1.337    .641     4.345      1    .037      0.263
Communication type (1 = HCI, 0 = CMC)             0.266    .733     0.132      1    .716      1.305
Relationship (1 = competitive, 0 = cooperative)  -0.811    .761     1.134      1    .287      0.445
Prime type (1 = complex, 0 = simple)              4.634    .876    27.964      1    .000    102.936

Use of proper capitalization in questions. An SPSS Binary Logistic Regression

was performed to assess the impact of the three independent variables on the

likelihood that participants would use proper capitalization in their typed questions.


The full model containing all predictors was statistically significant, χ2(3, N = 80) =

29.13, p < .001, indicating that the model was able to distinguish between participants

who used and did not use proper capitalization in their questions. The model correctly

classified 78.8% of cases, but only prime style made a statistically significant

contribution to the model, recording an odds ratio of 14.620 (see Table 6 and Table 7).

Table 6 Observed and Predicted Frequencies for the Use of Proper Capitalization in Questions with the Cutoff of 0.50

                       Predicted
Observed               Lowercased   Capitalized   % Correct
Lowercased             33           10            76.7
Capitalized            7            30            81.1
Overall % correct                                 78.8

Table 7 Logistic Regression Analysis of Proper Capitalization of Typed Questions by SPSS 19

Predictor                                         β        SE β    Wald’s χ2   df   p       eβ (odds ratio)
Constant                                         -1.874    .601     9.714      1    .002      .154
Communication type (1 = HCI, 0 = CMC)             0.456    .556      .673      1    .412     1.578
Relationship (1 = competitive, 0 = cooperative)   0.152    .552      .076      1    .783     1.164
Prime type (1 = complex, 0 = simple)              2.682    .561    22.824      1    .000    14.620

8.6.2 Behavioral Measures from Paper-And-Pencil Quiz

Statistical analyses show that most of the same effects were carried over to the

second stage of the study—the paper-and-pencil quiz portion, where participants

worked alone to compile their “top-15” lists of learned facts.

Average fact length. There were a main effect of prime style on average fact

length, F(1,72) = 22.58, p < .001, and a cross-over interaction effect of prime style and

communication type, F(1,72) = 4.96, p < .05. Post hoc comparisons show that in the

CMC condition, participants who were primed with simple questions by the

confederate used significantly fewer words in their listing of top-15 learned facts (M =

5.87, SD = 1.06) than did those who were exposed to complex questions from the

confederate (M = 7.92, SD = 1.73), t(38) = -4.51, p < .001. The same main effect was

also observed in the HCI condition: HCI-simple-prime participants had a mean fact

length of 6.46 words per sentence (SD = 1.20), while HCI-complex-prime participants

had a mean of 7.20 words per sentence (SD = 1.08), but the effect was weaker, t(38) = -2.04, p < .05. The stylistic priming appeared to have a more

lasting effect on CMC participants than on HCI participants. At the same time, no

other main effect or interaction was significant: communication type, F(1,72) = 0.05,

p > .83; work relationship, F(1,72) = 1.48, p > .22; communication type by work

relationship, F(1,72) = 0.06, p > .80; work relationship by prime style, F(1,72) = 0.98,


p > .32; three-way interaction, F(1,72) = 0.05, p > .83. Table 8 shows the means and

standard deviations for average fact length in all eight conditions.
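For illustration, a post hoc comparison of this kind can be run as an independent-samples t-test; the sketch below uses placeholder values rather than the actual data, and the study presumably ran these tests in SPSS on the full samples (20 participants per group).

    # A minimal sketch with placeholder values, not the study's data.
    from scipy import stats

    cmc_simple = [5.9, 6.1, 5.5, 6.3, 5.4]    # hypothetical average fact lengths (CMC, simple prime)
    cmc_complex = [7.8, 8.1, 7.5, 8.3, 7.9]   # hypothetical average fact lengths (CMC, complex prime)
    t, p = stats.ttest_ind(cmc_simple, cmc_complex)
    print(f"t({len(cmc_simple) + len(cmc_complex) - 2}) = {t:.2f}, p = {p:.4f}")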

Table 8 Means and Standard Deviations for Average Fact Length

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        6.26   1.11
CMC                  Competitive         Complex       7.96   1.92
CMC                  Cooperative         Simple        5.48   0.89
CMC                  Cooperative         Complex       7.87   1.62
HCI                  Competitive         Simple        6.71   1.18
HCI                  Competitive         Complex       7.23   1.26
HCI                  Cooperative         Simple        6.20   1.23
HCI                  Cooperative         Complex       7.17   0.94

Overall fact vocabulary simplicity score. The main effect of prime style was

significant, F(1,71) = 29.62, p < .001, suggesting that simple-prime participants used

considerably more basic words in their facts than did complex-prime participants. No

other main effect or interaction was significant: communication type, F(1,71) = 0.22,

p > .63; work relationship, F(1,71) = 0.62, p > .43; communication type by work

relationship, F(1,71) = 0.50, p > .48; communication type by prime style, F(1,71) =

0.94, p > .33; work relationship by prime style, F(1,71) = 1.59, p > .21; three-way

interaction, F(1,71) = 0.22, p > .64. Table 9 shows the means and standard deviations

for overall fact vocabulary simplicity score in all eight conditions.


Table 9 Means and Standard Deviations for Overall Fact Vocabulary Simplicity Score

Communication Type   Work Relationship   Prime Style   Mean    SD
CMC                  Competitive         Simple        69.47   11.08
CMC                  Competitive         Complex       60.26    5.40
CMC                  Cooperative         Simple        71.32   11.35
CMC                  Cooperative         Complex       58.78    5.96
HCI                  Competitive         Simple        70.10    7.10
HCI                  Competitive         Complex       58.66    6.79
HCI                  Cooperative         Simple        76.82    5.38
HCI                  Cooperative         Complex       58.17   15.50

Use of critical noun in facts. An SPSS Binary Logistic Regression was

performed to assess the impact of the three independent variables on the likelihood

that participants would use the critical noun “Mosua” and its variations in their

handwritten learned facts. The full model containing all predictors was statistically

significant, χ2(3, N = 80) = 12.213, p < .01, indicating that the model was able to

distinguish between participants who used and did not use “Mosua” in their written

lists of learned facts at least twice. The model correctly classified 67.5% of cases.

However, analysis shows that only prime style was a significant contributor to the

model, recording an odds ratio of 4.588 (see Table 10 and Table 11). In short, the

priming of the use of critical noun was carried over from typed questions to

handwritten facts. The change of modality did not appear to interfere with the priming

effect in significant ways in the short term.


Table 10 Observed and Predicted Frequencies for the Use of “Mosua” in Learned Facts from Quiz with the Cutoff of 0.50

                       Predicted use of critical noun
Observed               Once or none   Twice or more   % Correct
Once or none           29             15              65.9
Twice or more          11             25              69.4
Overall % correct                                     67.5

Table 11 Logistic Regression Analysis of the Use of Critical Noun “Mosua” in Learned Facts by SPSS 19

Predictor                                         β        SE β    Wald’s χ2   df   p        eβ (odds ratio)
Constant                                         -0.646    .480     1.816      1    .178      0.524
Communication type (1 = HCI, 0 = CMC)             0.000    .487     0.000      1    1.000     1.000
Relationship (1 = competitive, 0 = cooperative)  -0.701    .490     2.045      1    .153      0.496
Prime type (1 = complex, 0 = simple)              1.524    .492     9.595      1    .002      4.588

Finally, all participants used proper capitalization when compiling their “top-15” lists, so there was no main effect or interaction of any factor to report.


8.6.3 Post-Task Attitudinal Measures

Positivity of learning. There was a main effect of prime style on positivity of

learning, F(1,72) = 9.58, p < .01. Simple-prime participants rated the learning

experience as significantly more positive (M = 6.34, SD = 1.06) than did complex-

prime participants (M = 5.54, SD = 1.20). No other main effect or interaction was

significant: communication type, F(1,72) = 0.57, p > .45; work relationship, F(1,72) =

0.24, p > .62; communication type by work relationship, F(1,72) = 0.43, p > .51;

communication type by prime style, F(1,72) = 1.09, p > .30; work relationship by

prime style, F(1,72) = 0.49, p > .48; three-way interaction, F(1,72) = 0.08, p > .78.

Refer to Table 12 for means and standard deviations.

Table 12 Means and Standard Deviations for Positivity of Learning

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        6.21   0.95
CMC                  Competitive         Complex       5.43   1.54
CMC                  Cooperative         Simple        6.00   1.20
CMC                  Cooperative         Complex       5.72   1.27
HCI                  Competitive         Simple        6.77   1.14
HCI                  Competitive         Complex       5.59   1.29
HCI                  Cooperative         Simple        6.37   0.95
HCI                  Cooperative         Complex       5.41   0.70

Easiness of learning. There was a main effect of work relationship, F(1,72) =

4.58, p < .05. Competitive-condition participants were more likely than cooperative-


condition participants to perceive the learning process as easy. No other main effect

was significant: communication type, F(1,72) = 0.13, p > .72; prime style, F(1,72) =

1.67, p > .20. There was a cross-over interaction of prime style and communication

type, F(1,72) = 7.28, p < .01. Post hoc comparisons identified only one significantly

different group: In the HCI condition, simple-prime participants rated the learning

process significantly easier than did their complex-prime counterparts, t(38) = 2.59, p

< .05. Another cross-over interaction between work relationship and prime style was

also significant, F(1,72) = 4.48, p < .05. Post hoc analysis indicated that among

simple-prime participants, competitive-condition participants were more likely than

cooperative-condition participants to rate the learning experience as easy, t(38) = 2.73,

p < .05. No other interaction was significant: communication type by work

relationship, F(1,72) = 0.38, p > .54; three-way interaction, F(1,72) = 3.28, p > .07.

Table 13 shows means and standard deviations for all eight conditions.

Table 13 Means and Standard Deviations for Easiness of Learning

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        6.38   1.17
CMC                  Competitive         Complex       6.75   1.03
CMC                  Cooperative         Simple        5.75   2.25
CMC                  Cooperative         Complex       6.33   1.55
HCI                  Competitive         Simple        8.00   1.03
HCI                  Competitive         Complex       5.30   1.64
HCI                  Cooperative         Simple        5.72   1.73
HCI                  Cooperative         Complex       5.70   1.40


Teaching agent competency. A main effect of prime style was observed,

F(1,72) = 4.85, p < .05. Specifically, participants in the complex-prime condition

thought the “teaching agent” was more competent (M = 7.16, SD = 1.32) than did their

simple-prime counterparts (M = 6.41, SD = 1.62). No other main effect or interaction

was significant: communication type, F(1,72) = 0.65, p > .42; work relationship,

F(1,72) = 0.13, p > .71; communication type by work relationship, F(1,72) = 0.01,

p > .94; communication type by prime style, F(1,72) = 0.34, p > .56; work relationship

by prime style, F(1,72) = 0.54, p > .46; three-way interaction, F(1,72) = 0.20, p > .65.

Refer to Table 14 for means and standard deviations.

Table 14 Means and Standard Deviations for Teaching Agent Competency

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        6.06   1.34
CMC                  Competitive         Complex       7.10   1.37
CMC                  Cooperative         Simple        6.30   1.36
CMC                  Cooperative         Complex       7.15   1.20
HCI                  Competitive         Simple        6.40   1.29
HCI                  Competitive         Complex       7.35   1.75
HCI                  Cooperative         Simple        6.90   2.38
HCI                  Cooperative         Complex       7.05   1.07

Other learner ability. There was a main effect of prime style on other learner

ability, F(1,72) = 6.34, p < .05. Complex-prime participants were more likely to think

highly of “the other learner” (M = 6.92, SD = 1.23) than simple-prime participants


were (M = 6.22, SD = 1.24). This was expected: the confederate’s use of an advanced vocabulary and expressions in the complex-prime condition could be perceived as the kind of high verbal ability required to succeed on standardized tests such as the SAT and GRE. However, high ability does not necessarily translate into

likability and other positive feelings: The persistent and rather unnatural use of an

advanced vocabulary by the confederate might have intimidated and/or annoyed the

participants. As stated earlier, complex-prime participants were less likely than

simple-prime participants to rate the learning experience positively. No other main

effect or interaction was significant: communication type, F(1,72) = 0.97, p > .32;

work relationship, F(1,72) = 0.27, p > .60; communication type by work relationship,

F(1,72) = 3.23, p > .07; communication type by prime style, F(1,72) = 1.90, p > .17;

work relationship by prime style, F(1,72) = 0.00, p > .97; three-way interaction,

F(1,72) = 0.54, p > .46. Refer to Table 15 for means and standard deviations.


Table 15 Means and Standard Deviations for Other Learner Ability

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        6.17   0.84
CMC                  Competitive         Complex       7.05   1.22
CMC                  Cooperative         Simple        5.63   1.49
CMC                  Cooperative         Complex       6.89   0.70
HCI                  Competitive         Simple        6.13   1.20
HCI                  Competitive         Complex       6.65   1.60
HCI                  Cooperative         Simple        6.92   1.15
HCI                  Cooperative         Complex       7.08   1.38

Self ability. There was a main effect of prime style, F(1,72) = 5.98, p < .05.

Whereas complex-prime participants tended to think highly of “the other learner,”

when asked to rate themselves for the performance during the Q&A session, simple-

prime participants (M = 5.96, SD = 1.55) rated their own ability slightly higher than

did complex-prime participants (M = 5.21, SD = 1.32). No other main effect was

found: communication type, F(1,72) = 0.95, p > .33; work relationship, F(1,72) = 0.39,

p > .53. There was also an interaction of prime style and communication type, F(1,72)

= 9.58, p < .01. Post hoc comparisons revealed that HCI-complex-prime participants

rated their own ability significantly lower than HCI-simple-prime participants, t(38) =

-2.59, p < .05, and CMC-complex-prime participants, t(38) = -2.35, p < .05. This

could mean that when a person interacts with an annoyingly smart computer equipped with a GRE vocabulary, s/he is more likely to doubt her/his own ability than when s/he communicates with a computer that adopts a basic vocabulary. No other interaction


was significant: communication type by work relationship, F(1,72) = 1.07, p > .30;

work relationship by prime style, F(1,72) = 0.91, p > .34; three-way interaction,

F(1,72) = 0.91, p > .34. Table 16 shows the means and standard deviations for this

index.

Table 16 Means and Standard Deviations for Self Ability

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        5.58   1.43
CMC                  Competitive         Complex       5.78   1.42
CMC                  Cooperative         Simple        5.70   1.71
CMC                  Cooperative         Complex       5.90   1.17
HCI                  Competitive         Simple        6.84   1.20
HCI                  Competitive         Complex       4.55   1.03
HCI                  Cooperative         Simple        5.74   1.70
HCI                  Cooperative         Complex       4.63   1.17

Vocabulary similarity. There was a main effect of prime style on vocabulary

similarity, F(1,72) = 25.98, p < .001: Simple-prime participants were more likely to

think that their word choices were similar to those of “the other learner” (M = 5.38, SD

= 2.25) than were complex-prime participants (M = 3.23, SD = 1.44). The

communication type between participants and “the other learner” also had a main

effect on vocabulary similarity, F(1,72) = 4.54, p < .05. Participants in the CMC

condition (M = 3.85, SD = 2.09) were less likely than HCI participants (M = 4.75, SD

= 2.17) to think that their vocabulary was similar to that used by the confederate. No


main effect was found for work relationship, F(1,72) = 1.16, p > .29. There was no

interaction effect of any kind: communication type by work relationship, F(1,72) =

0.12, p > .72; communication type by prime style, F(1,72) = 0.01, p > .91; work

relationship by prime style, F(1,72) = 0.06, p > .80; three-way interaction, F(1,72) =

0.52, p > .47. See Table 17 for means and standard deviations.

Table 17 Means and Standard Deviations for Vocabulary Similarity Perceived by Participants

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        5.00   2.31
CMC                  Competitive         Complex       2.40   1.08
CMC                  Cooperative         Simple        4.90   2.42
CMC                  Cooperative         Complex       3.11   0.99
HCI                  Competitive         Simple        5.40   2.01
HCI                  Competitive         Complex       3.50   1.84
HCI                  Cooperative         Simple        6.20   2.35
HCI                  Cooperative         Complex       3.90   1.45

Syntax similarity. There was no main effect of any factor: communication type,

F(1,72) = 0.18, p > .67; work relationship, F(1,72) = 1.11, p > .29; prime style, F(1,72)

= 3.04, p > .08. However, there was a cross-over interaction between communication

type and work relationship, F(1,72) = 4.76, p < .05. Post hoc comparisons showed

that in CMC, cooperative-condition participants were significantly more likely than

competitive-condition participants to rate syntax similarity as high, t(38) = 2.42, p

< .05; in HCI, this trend was reversed, but the difference was not statistically


significant, t(38) = 0.75, p > .45. No other interaction was found: communication type

by prime style, F(1,72) = 0.04, p > .84; work relationship by prime style, F(1,72) =

1.14, p > .29; three-way interaction, F(1,72) = 0.22, p > .64. Refer to Table 18 for

means and standard deviations.

Finally, there was a significant positive correlation between vocabulary

similarity and syntax similarity, r(78) = .44, p < .01.

Table 18 Means and Standard Deviations for Overall Syntax Similarity Perceived by Participants

Communication Type   Work Relationship   Prime Style   Mean   SD
CMC                  Competitive         Simple        4.67   1.63
CMC                  Competitive         Complex       4.20   2.53
CMC                  Cooperative         Simple        6.56   2.06
CMC                  Cooperative         Complex       5.50   2.07
HCI                  Competitive         Simple        5.40   2.84
HCI                  Competitive         Complex       5.20   2.35
HCI                  Cooperative         Simple        5.60   2.32
HCI                  Cooperative         Complex       3.89   1.52

8.7 Discussion

A strong and persistent main effect of prime style was found on all behavioral

measures from the chat room Q&A session and the paper-and-pencil quiz. The effect

of priming did not dissipate immediately after the polylogue ended, suggesting that the

tendency to align is registered at least in short-term memory and may be elicited even

if the modality changes from typing with a keyboard to writing with a pencil. These


results provide new evidence that interlocutors display linguistic alignment behaviors

as a result of previous side-participation: When a person is one of two persons who

take turns to be the speaker/side-participant in a triad, he/she is likely to align with the

other person in terms of sentence length, vocabulary difficulty, and the use of critical

nouns and proper capitalization. Speakers’ utterances may be “primed” through

indirect interaction such as side-participation in conversation. The pervasive alignment

behaviors as a result of priming in a polylogue context provide strong support to the

existence and dominance of the automatic component in linguistic alignment.

Most importantly, the results show that social factors such as work relationship

could also contribute to the variance in some aspects of linguistic alignment. While it

appears that the alignment of sentence length, vocabulary complexity, the use of a

critical noun in referring expressions, and proper (or the lack of) capitalization could

all be achieved through priming, the alignment of vocabulary might also be subject to

the nature of the relationship between two interlocutors. More specifically, all things

being equal, a participant was more likely to align with the “other learner” in terms of

vocabulary simplicity when their work relationship was cooperative than when the

relationship was competitive. The effects observed in Study I may well be even more pronounced with other types of social relationships in traditional, dyadic, and face-to-face settings.

However, the effect of work relationship on the alignment of vocabulary

difficulty did not last long enough to affect how participants compiled their top-15

lists. It could be inferred that the alignment of certain stylistic dimensions due to social


factors could be an on-line process (i.e., the processing of data was carried out

simultaneously with its production) only and thus short-lived, while the alignment as a

result of priming is registered in relatively longer-term memory.

Communication type did not have any effect on behavioral measures collected

from the Q&A session, but had a cross-over interaction effect on average fact length:

CMC participants appeared to have displayed a higher degree of alignment than HCI

participants. In other words, communication type moderated alignment due to priming.


CHAPTER 9

Study II

Alignment as a Result of Side-Participation: Unshared

Learning Subjects

Participants in Study I never interacted with “the other learner” in any way,

yet they exhibited alignment of linguistic style as a result of priming by the question

styles used by the “other learner.” One may argue that there was only one shared

subject, so they needed to monitor the progress in order to avoid repeating questions

raised by the “other learner.” Depending on the work relationship, they might even

purposefully follow the “other learner” to cover related aspects of the subject. Whether

or not a participant was instructed to cooperate with “the other learner,” there was

clearly no gain from repeating any questions.

Following this line of argument, it is intuitive to suggest that the power of

priming should be weakened if the need to monitor the progress and specifics of the

“other learner” decreases. For example, if participants have a drastically different

Q&A subject to work on, details of each canned question from the “other learner”

should not be a major concern for them. They still need to monitor the progress of the

whole chat session for turn-taking purposes, but the “common ground” between

participants and the “other learner,” if any, would be minimal. With the

introduction of a second learning subject, semantic priming is expected to be reduced.

Alignment due to priming received through side-participation is expected to be

substantially weaker than that found in Study I.


As a follow-up, Study II is very similar to Study I in many aspects. The key

difference is that Study II participants did not share the same learning subject with

“the other learner”; instead, they were told to ask questions about a fictitious “national

sport of an African country.” There were also some methodological refinements,

which I report below in Sections 9.1 through 9.4. For a quick summary of the changes

and refinements made for Study II, refer to Table 1 in Chapter 7.

9.1 Method

Similar to Study I and largely modeled after the Twenty Questions game, Study II was also advertised as a “learning through questioning” experiment to prospective participants. Methodologically, however, there was a fundamental difference between the two studies: Whereas Study I was mostly a Wizard-of-Oz (WOZ) lab experiment, Study II used the reversed Wizard-of-Oz (RWOZ) approach in a more realistic user environment, a large study area in a library.

Specifically, Study II used a browser-based simulation of a group chat room in which computer programs played not only the “pedagogical agent” but also the “learning agent” in the HCI condition and the “other learner” in the CMC condition. The quiz and post-task questionnaire were also changed from paper-and-pencil to online, so participants did not have to interact with the experimenter multiple times. Most changes were made to cut operational costs and improve efficiency. Additional differences between Study II and Study I are as follows:

Experiment venue and hardware. Study II took place in a large open study area

in a library. Participants were instructed to use any of the public dual-boot iMacs with


Mac OS X or Windows 7 in the study area; they were also allowed to use their own

laptops. In Study I, participants had to work with one small laptop running Windows

OS.

Software for group text chat. A browser-based conference chat simulation was

developed for Study II to automate the chat. As shown in Figure 3, the right-side pane listed the three (3) chat room handles used by the “teaching agent,” the “other learner,” and the naïve participant. To enforce turn-taking, the input box was disabled whenever the status bar at the bottom of the window indicated that the participant should be waiting for his or her turn.

At the conclusion of the Q&A session, the Web browser was redirected to an online

survey site where the participant took a quiz and filled out a post-task questionnaire.


Figure 3. Screenshot showing user interface of simulated conference chat room.

Operations. The changes in method and technology made it possible for a single experimenter to run multiple sessions simultaneously. Each session in Study I had required three researchers to serve as the experimenter, the “other learner,” and the “pedagogical agent,” respectively. In addition to improving

efficiency, this change of operation also offered more flexibility for participants to

sign up and change their time slots if necessary.

Q&A session learning subjects. Whereas participants and the “other learner”

worked on the same subject in Study I, a second fictitious learning subject was


introduced to Study II and assigned to participants. The “other learner” still used the

same learning subject but the name of the matriarchal group was changed to “Nosua.”

More details are available in the next section.

Experimental tasks. Participants were instructed to ask 20 “yes-or-no”

questions during the Q&A session in Study II. For the quiz following the Q&A

session, participants were asked to compile one top-10 list of facts for each learning

subject, so the total was 20. In Study I, these numbers were 12 and 15, respectively.

Answers from “teaching agent.” In Study I a wizard played the “teaching agent” and always offered correct or appropriate answers to participants using green, red, and yellow dots to indicate “Yes,” “No,” and “Unknown / Wrong question.” For Study II, the “teaching agent,” controlled by a computer program, offered randomized answers with exactly the same proportions of the three types of answers found in the dialog between the “other learner” and the “teaching agent.” As in Study I, because the focus of the study was on the influence of an addresser’s (i.e., “the other learner’s”) language use on that of a side-participant (i.e., a naïve participant), the utterances from the teaching agent needed to have little or no impact on the side-participant. The randomization offered better control over equal treatment and was not

a problem because the fictitious learning subject had no correct answers. Instead of

using colored dots, three letters “Y,” “N,” and “U” were used to represent “Yes,” “No,”

and “Unknown / Wrong question.” The possibility of having color-blind participants

was no longer a concern.
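A minimal sketch of such a randomization is shown below. The Y/N/U proportions here are placeholders, since the actual shares were taken from the canned dialog, and the real Study II logic ran inside the browser-based simulation rather than in Python.

    # A minimal sketch; the proportions below are placeholders, whereas the study matched
    # the exact shares of "Y"/"N"/"U" found in the canned "other learner" dialog.
    import random

    def build_answer_sequence(n_questions=20, proportions=None):
        proportions = proportions or {"Y": 0.5, "N": 0.35, "U": 0.15}
        pool = []
        for answer, share in proportions.items():
            pool.extend([answer] * round(share * n_questions))
        pool = (pool + ["U"] * n_questions)[:n_questions]  # pad/trim against rounding drift
        random.shuffle(pool)                               # randomize the order per session
        return pool

    print(build_answer_sequence())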


Quiz and questionnaire modality. In Study II, both the quiz and the post-task

questionnaire were changed from paper-and-pencil to browser-based. In fact, the quiz

became part of the questionnaire to take advantage of the survey tool (i.e.,

surveymonkey.com).

Dependent variables. Due to the introduction of the second learning subject, an additional set of behavioral measures was used for the “top-10” list on the “other learner’s” subject. It was the same set used for the “top-10” list on participants’ own subject, with one additional measure: the number of facts (on “Nosua”).

Benefits of this automation included a drastically reduced staffing requirement and improved efficiency. It also allowed for easier data collection and extraction. But the simulation and automation had disadvantages, too. For example, whereas the wizard in Study I could respond appropriately to participant inquiries meant to test the “pedagogical agent,” in Study II the competency of the computer program could come into doubt when it failed to give the same answer to the same question asked twice.

Study II used exactly the same 2 x 2 x 2 between-participants factorial design

that was used in Study I. Briefly, the first factor was the style of question asked by

“the other learner”: simple vs. complex; the second factor was the work relationship

between the naïve participant and “the other learner”: competitive vs. cooperative; the

third factor was who “the other learner” was, or the type of interaction between the

participant and “the other learner”: HCI vs. CMC. For details of these three factors,

see Chapter 8.


Each participant was assigned a unique handle and a password; questions typed

by the participant were captured and stored under that unique handle in a database. In

the HCI condition, “the other learner” was called “learning_agent” in the simulated

chat room. For CMC participants, the handle used by “the other learner” was

randomly drawn from 20 handles stored in a computer program. The coupling of

unique participant chat handles and randomly drawn “other learner” handles was

meant to reduce potential data contamination in case any finished participants talked

about the study with incoming participants.

9.2 Participants

Eighty (80) undergraduate and graduate students at Stanford University, all

native speakers of American English, were recruited from several communication

classes and participated in the study for research credit. Each participant was randomly

assigned to one of the eight (8) conditions; gender was balanced so that there were

equal numbers of male and female participants in all conditions.

9.3 Materials

Participants were instructed to ask questions so as to learn some important

facts about a fictitious African country’s national sport called “Zoonkaba.” On the

other hand, the “other learner” ostensibly worked on the same subject used in Study I,

but the name was changed from “Mosua” to “Nosua” to discourage online search. The

learning subjects read:


Subject #1 and clue. Anthropologists conducted research on the relatively

unknown Nosua*, an indigenous tribe living on a small Pacific island. They

are a matriarchal society.

Subject #2 and clue. Recently, a small African country proposed to

UNESCO that their national sport, Zoonkaba*, be considered “intangible

cultural heritage of humanity.” It is a team sport.

*These names were intentionally altered to put all participants on equal

footing.

A total of 20 canned questions about the matriarchal group were used in Study

II. Each question had a simple version and a complex version. In general, the complex

version 1) had many words drawn from an advanced vocabulary, 2) always adopted

proper capitalization, and 3) used the critical noun “Nosua” with a high frequency.

The simple version, on the contrary, 1) had words drawn from a basic vocabulary, 2)

used no capitalization, and 3) never used the word “Nosua.” For example, to inquire

about how children are named, the “other learner” would ask a question in one of the

following two ways:

Complex version: “Do offspring of a Nosua woman bear her surname?”

Simple version: “do kids use their mother’s last name?”

As a result of the automated chat room simulation, participants in each prime style condition (i.e., simple or complex) were exposed to exactly the same 20 questions. The average question length was 6.60 words (SD = 2.80) for the simple primes and 12.55 words (SD = 4.59) for the complex ones; the simple primes were significantly shorter than the complex ones, t(38) = -4.95, p < .001. The average Flesch Reading Ease score was 91.10 for the simple primes and 40.40 for the complex ones.

Within the canned questions used for the study, the critical noun “Nosua” was never used in the simple-prime condition; in the complex-prime condition it appeared in 12 of the 20 canned questions, a frequency of .60 appearances per question.

9.4 Procedure

Upon arrival at the study area in a library, the participant was greeted by a male experimenter and received a small tent card on which the URL of the study, a username, and a password were printed. The participant was told to use any unoccupied computer or his/her own laptop. Once the participant was seated and logged in, he or she had two minutes to read instructions online. Depending on the login credentials, the instructions changed accordingly (four sets of instructions for the 2 x 2 of communication type by work relationship).

Once the two-minute instruction screen went away, HCI participants entered

the chat room immediately. CMC participants, however, saw a loading screen that

lasted for one minute: The “system” was ostensibly locating and connecting to the

“other learner.”

Upon entering the chat room, the participant saw that his or her username was

already listed under those of the “teaching agent” and the “other learner” (see Figure

5). The “teaching agent” subsequently welcomed the participant and the “other


learner,” before reminding both of their learning subjects and instructing the “other

learner” to ask the first question.

In the simple-prime condition, the “other learner” would send out the first question 15 seconds after the last line of instruction from the “teaching agent.” After that, the “other learner” would send out the next question 15 seconds after the “teaching agent” had answered a preceding question from the participant. The “teaching agent” would respond to each question four (4) seconds later (in Study I this was six seconds; the change was made after pilot testing of the chat simulation). In the complex-prime condition, the wait increased to 20 seconds for the “other learner” and 8 seconds for the “teaching agent.” This difference in timing was needed to ensure that the production and processing of complex questions by the “other learner” and by the “teaching agent,” respectively, were believable.
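To make the timing protocol concrete, the following minimal Python sketch illustrates the kind of scripted turn logic described above. It is a simplification (one prime question and one participant question per round), and the function and variable names are illustrative rather than the study’s actual simulation code.

    import time

    # Delays (in seconds) used by the simulated chat room, by prime-style
    # condition, following the timing described above.
    DELAYS = {
        "simple":  {"other_learner": 15, "teaching_agent": 4},
        "complex": {"other_learner": 20, "teaching_agent": 8},
    }

    def run_turn(prime_style, canned_question, canned_answer, post, wait_for_participant):
        """One scripted round: the "other learner" asks, the participant asks, the agent answers."""
        d = DELAYS[prime_style]
        time.sleep(d["other_learner"])          # pause before the "other learner" asks
        post("other_learner", canned_question)  # scripted prime question
        participant_q = wait_for_participant()  # block until the participant submits a question
        time.sleep(d["teaching_agent"])         # simulated "thinking" time for the agent
        post("teaching_agent", canned_answer)   # canned Y/N/U reply
        return participant_q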

A typical Q&A session (complex-prime condition) went like this:

pedagogical_agent: Welcome to the 20 Questions interactive learning session!

pedagogical_agent: student1028’s subject is the Nosua matriarchal tribe; student2618’s subject is the team sport of Zoonkaba.

pedagogical_agent: Take turns to ask questions. Student1028, please ask the first question.

(20 seconds later)

student1028: Do the Nosua typically exist in conventional-type families, consisting of a male parent, a female parent, and multiple children?

(4 seconds later)

pedagogical_agent: N

student2618: Does the team sport of Zoonkaba consist of more than 5 people per team

(4 seconds later)

pedagogical_agent: Y

(20 seconds later)

student1028: Do the offspring sojourn in the same residence as their mother?

The “other learner” and the participant took turns and each asked their 20

questions. Six seconds after the wizard responded to the participant’s last question, the

wizard announced the end of the Q&A session. The web browser was automatically

redirected to a survey website where the participant took the quiz before filling out the

questionnaire. Finally, upon completing the questionnaire, the participant returned the

card to the experimenter, was debriefed (sometimes in a group), thanked, and sent

away.

9.5 Measures

9.5.1 Behavioral Measures

Participants’ questions and lists of learned facts were compared against the

canned questions in terms of average sentence length, overall discourse complexity,

the frequency of the critical noun in referring expressions, and proper capitalization in

sentences. Each participant’s questions and quiz results were aggregated into two

paragraphs and analyzed separately in Microsoft Word.


Average question length is the average number of words participants used in

each question they asked during the Q&A session.

Overall question vocabulary simplicity score is the Flesch Reading Ease score (Flesch, 1948) of the aggregated paragraph of each participant’s 20 questions, controlling for average question length.
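For reference, the Flesch Reading Ease formula underlying this measure can be approximated with the sketch below. The syllable-counting heuristic is a simplification, so its output may differ slightly from the Microsoft Word implementation used in the actual analysis.

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels, minus a silent final "e".
        word = word.lower()
        groups = re.findall(r"[aeiouy]+", word)
        n = len(groups)
        if word.endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def flesch_reading_ease(text):
        # Flesch (1948): 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / max(len(sentences), 1))
                - 84.6 * (syllables / max(len(words), 1)))

    print(round(flesch_reading_ease("do kids use their mother's last name?"), 1))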

Use of critical noun in questions is a binary variable. A value of 1 (true) was assigned when a participant used the critical noun “Zoonkaba” two or more times in her/his 20 questions; the value was otherwise 0 (false).

Proper capitalization in questions is also a binary variable. A value of 1 was assigned when a participant used proper capitalization in more than half of his/her typed questions; otherwise, the participant received a value of 0. Compared to Study I participants, Study II participants were not very consistent in choosing between proper capitalization and all-lowercase typing: Many participants used proper capitalization for only part of their 20 questions, so the 50% cutoff was necessary.
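A minimal sketch of how these two binary measures could be scored automatically is shown below. Treating a question as properly capitalized when it begins with an uppercase letter is only a rough proxy for the hand scoring described above, and the function names and sample questions are illustrative.

    def uses_critical_noun(questions, noun="Zoonkaba"):
        # 1 if the critical noun appears two or more times across the questions, else 0.
        count = sum(q.lower().count(noun.lower()) for q in questions)
        return int(count >= 2)

    def proper_capitalization(questions):
        # 1 if more than half of the typed questions begin with a capital letter, else 0.
        capitalized = sum(1 for q in questions if q.strip() and q.strip()[0].isupper())
        return int(capitalized > len(questions) / 2)

    sample = ["is Zoonkaba a team sport?", "Does Zoonkaba involve the use of animals?"]
    print(uses_critical_noun(sample), proper_capitalization(sample))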

Similar measures for the top-10 list of learned facts about “Zoonkaba” of each

participant were also taken. They are average fact length, overall fact readability

score, overall fact vocabulary simplicity score, use of critical noun in facts, and use of

proper capitalization in facts. This first top-10 list is hereafter referred to as the “self-

subject list.”

For the other top-10 list of facts about “Nosua,” number of facts was added to

average fact length, overall fact readability score, overall fact vocabulary simplicity


score, use of critical noun in facts, and use of proper capitalization in facts. This

second top-10 list is hereafter referred to as the “other-learner-subject list.”

9.5.2 Attitudinal Measures

All attitudinal measures were taken and generated from an online post-task

questionnaire that asked about the participant’s perceptions of the learning experience,

the “teaching agent,” the “other learner,” and the participant himself or herself. Ten-

point Likert scales were used consistently throughout the questionnaire. The same

indices generated in Study I were used for this part of the data analysis.

Positivity of learning was an index about learning experience and was

comprised of six items: 1) enjoyable, 2) rewarding, 3) creative, 4) useful, 5) pleasant,

and 6) efficient; the index was highly reliable, Cronbach’s α = .84.

Easiness of learning was comprised of two items: simple and easy. Although the index’s reliability was low, α = .59, it was kept for comparison between Studies I and II.

Teaching agent competency was comprised of two items: competent and

knowledgeable (α = .56). This low-reliability index was also used for direct

comparison between Studies II and I.

Other learner ability was an index about “the other learner,” comprised of four items: 1) intelligent, 2) knowledgeable, 3) successful, and 4) in control. The index was highly reliable, α = .80.


Self ability was an index comprised of the same four items used in the other

learner ability index, and was highly reliable, α = .79.

Finally, vocabulary similarity and syntax similarity were two single items from two separate questions about participants’ perceptions of how similar their own vocabulary and syntax were to those of the other learner. Both items used 10-point Likert scales.
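The internal-consistency values (Cronbach’s α) reported for the indices above can be computed as in the following sketch; the item ratings shown are hypothetical and only illustrate the calculation, not the study’s data.

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_participants x n_items) array of ratings.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 10-point ratings for the six "positivity of learning" items.
    ratings = [[7, 6, 5, 6, 7, 6],
               [4, 5, 4, 3, 4, 5],
               [8, 8, 7, 7, 8, 7],
               [5, 4, 5, 5, 4, 4]]
    print(round(cronbach_alpha(ratings), 2))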

9.6 Results

SPSS Univariate ANOVAs were conducted for all continuous variables; Logistic Regressions were performed for all binary measures.
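Although the analyses were run in SPSS, equivalent models can be expressed in an open-source environment. The sketch below uses simulated, hypothetical data and illustrative column names to show a 2 x 2 x 2 between-participants ANOVA and a binary logistic regression of the kind reported in this section.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Simulated stand-in data: one row per participant.
    rng = np.random.default_rng(0)
    n = 80
    df = pd.DataFrame({
        "comm": rng.choice(["HCI", "CMC"], n),
        "relation": rng.choice(["competitive", "cooperative"], n),
        "prime": rng.choice(["simple", "complex"], n),
    })
    df["avg_q_len"] = 7 + 1.5 * (df["prime"] == "complex") + rng.normal(0, 1.2, n)
    df["used_noun"] = (rng.random(n) < np.where(df["prime"] == "complex", 0.7, 0.2)).astype(int)

    # 2 x 2 x 2 between-participants ANOVA; sum-to-zero contrasts approximate
    # SPSS-style Type III sums of squares.
    ols_fit = smf.ols(
        "avg_q_len ~ C(comm, Sum) * C(relation, Sum) * C(prime, Sum)", data=df
    ).fit()
    print(anova_lm(ols_fit, typ=3))

    # Binary logistic regression; exp(beta) gives the odds ratios reported below.
    logit_fit = smf.logit("used_noun ~ comm + relation + prime", data=df).fit()
    print(np.exp(logit_fit.params).round(3))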

9.6.1 Behavioral Measures from Q&A Session

There was a strong and persistent main effect of prime style on most

behavioral measures from the chat room Q&A session. A main effect of

communication type was only found for average question length. No main effect of

work relationship was found, but there was an interaction effect of work relationship

and prime style on overall question vocabulary simplicity score, and a three-way

interaction on capitalization score.

Average question length. There was a main effect of prime style, F(1,72) =

23.25, p < .001. Simple-prime participants generated questions that were significantly

shorter (M = 7.05, SD = 1.15) than those from complex-prime participants (M = 8.51,

SD = 1.59). The main effect of communication type was also significant, F(1,72) =

5.49, p < .05: CMC participants asked shorter questions (M = 7.43, SD = 1.56) than

did HCI participants (M = 8.13, SD = 1.50). This latter main effect was not observed

in Study I. At the same time, there was no other main effect or interaction effect of any


kind: work relationship, F(1,72) = 1.03, p > .31; communication type by work

relationship, F(1,72) = 2.66, p > .10; communication type by prime type, F(1,72) =

0.76, p > .38; work relationship by prime type, F(1,72) = 0.45, p > .50; three-way

interaction, F(1,72) = 0.01, p > .91. Table 19 shows the means and standard deviations

for this measure.

Table 19 Means and Standard Deviations for Average Question Length by Condition

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        7.79   1.15
CMC                  Competitive         Complex       9.28   1.77
CMC                  Cooperative         Simple        6.76   0.60
CMC                  Cooperative         Complex       8.71   0.95
HCI                  Competitive         Simple        6.82   1.25
HCI                  Competitive         Complex       7.85   1.65
HCI                  Cooperative         Simple        6.84   1.30
HCI                  Cooperative         Complex       8.20   1.72

Overall question vocabulary simplicity score. There was a main effect of prime

style on overall question vocabulary simplicity score, F(1,71) = 19.23, p < .001: Simple-prime

participants were more likely to use a basic vocabulary while complex-prime

participants tended to adopt a relatively advanced vocabulary. The interaction effect of

work relationship and prime style was also significant, F(1,71) = 4.40, p < .05. Post

hoc comparisons show that work relationship influenced participants’ word choices in

the complex-prime condition, t(38) = -2.45, p < .05: competitive-complex-prime


participants generated questions with lower readability than did cooperative-complex-

prime participants. In Study I, it was in the simple-prime condition where work

relationship had an effect on the overall question vocabulary simplicity score. No

other main effect or interaction was significant: communication type, F(1,71) = 1.94,

p > .16; work relationship, F(1,71) = 2.37, p > .12; communication type by work

relationship, F(1,71) = 0.05, p > .82; communication type by prime style, F(1,71) =

0.14, p > .71; three-way interaction, F(1,72) = 1.08, p > .30. Refer to Table 20 for

means and standard deviations.

Table 20 Means and Standard Deviations for Overall Question Vocabulary Simplicity Score by Condition

Communication Type   Work Relationship   Prime Style   Mean    Standard Deviation
CMC                  Competitive         Simple        86.84   12.16
CMC                  Competitive         Complex       71.32   9.16
CMC                  Cooperative         Simple        88.70   7.65
CMC                  Cooperative         Complex       77.05   10.47
HCI                  Competitive         Simple        92.04   8.51
HCI                  Competitive         Complex       74.3    11.29
HCI                  Cooperative         Simple        89.16   6.79
HCI                  Cooperative         Complex       84.19   8.65

Use of critical noun in questions. An SPSS Binary Logistic Regression was

performed to assess the impact of the three independent variables on the likelihood

that participants would use the critical noun “Zoonkaba” in their typed questions. The

full model containing all predictors was statistically significant, χ2(3, N = 80) = 25.77,


p < .001, indicating that the model was able to distinguish between participants who

used and did not use “Zoonkaba” in their questions. The model correctly classified

76.3% of cases, but only prime style made a statistically significant contribution to

the model, recording an odds ratio of 11.542 (see Table 21 and Table 22). Even

though participants were not expected to have any interest in whether or not the “other

learner” used or omitted the critical noun “Nosua,” they nonetheless displayed

alignment behavior in terms of using their own critical noun “Zoonkaba” when the

“other learner” used “Nosua.”

Table 21 Observed and Predicted Frequencies for the Use of “Zoonkaba” in Questions with the Cutoff of 0.50

                        Predicted use of “Zoonkaba”
Observed                Once or none (0)   Twice or more (1)   % Correct
Once or none (0)                      29                   8        78.4
Twice or more (1)                     11                  32        74.4
Overall % correct                                                    76.3


Table 22 Logistic Regression Analysis of the Use of the Critical Noun “Zoonkaba” in Questions by SPSS 19

Predictor                                          β        SE β    Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           -0.866   .515    2.825       1    .093   .421
Communication type (1 = HCI, 0 = CMC)              0.430    .539    0.636       1    .425   1.537
Relationship (1 = competitive, 0 = cooperative)    -0.715   .545    1.719       1    .190   .489
Prime type (1 = complex, 0 = simple)               2.446    .551    19.676      1    .000   11.542
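As a reading aid (not part of the original analysis), the odds ratio in the last column is simply the exponentiated regression coefficient; for the prime-type predictor, for example,

\[
e^{\beta} = e^{2.446} \approx 11.54.
\]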

Proper capitalization in questions. An SPSS Binary Logistic Regression was

performed to assess the impact of the three independent variables on the likelihood

that participants would use proper capitalization in their typed questions. The full

model containing all predictors was statistically significant, χ2(3, N = 80) = 11.33, p

< .05, indicating that the model was able to distinguish between participants who used

and did not use proper capitalization in their questions. The model correctly classified

65.0% of cases, but only prime style made a statistically significant contribution to

the model, recording an odds ratio of 4.251 (see Table 23 and Table 24).


Table 23 Observed and Predicted Frequencies for the Use of Proper Capitalization in Questions with the Cutoff of 0.50

                        Predicted
Observed                Lowercased   Capitalized   % Correct
Lowercased                      10            18        35.7
Capitalized                     10            42        80.8
Overall % correct                                        65.0

Table 24 Logistic Regression Analysis of Proper Capitalization of Typed Questions by SPSS 19

Predictor                                          β       SE β    Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           -.630   .489    1.657       1    .198   1.878
Communication type (1 = HCI, 0 = CMC)              -.754   .510    2.186       1    .139   .471
Relationship (1 = competitive, 0 = cooperative)    -.506   .507    .997        1    .318   .603
Prime type (1 = complex, 0 = simple)               1.447   .521    7.726       1    .005   4.251

9.6.2 Behavioral Measures from Browser-Based Quiz

Participants produced two top-10 lists for the two learning subjects. They

started with the self-subject list (i.e., the “Zoonkaba” list) before moving on to the


other-learner-subject list (i.e., the “Nosua” list). In the instructions participants were

not told that they would be quizzed on the “Nosua” subject, so this second list was

used to determine to what degree a participant paid attention to the primes.

9.6.2.1 The Self-Subject List

For measures taken from the self-subject list, a series of SPSS Univariate

ANOVAs were performed.

Average fact length. There was no main effect or interaction at all:

communication type, F(1,72) = 1.04, p > .31; work relationship, F(1,72) = 0.10,

p > .75; prime style: F(1,72) = 2.59, p > .11; communication type by work relationship,

F(1,72) = 0.00, p > .94; communication type by prime style, F(1,72) = 0.00, p > .99;

work relationship by prime style, F(1,72) = 0.21, p > .64; three-way interaction,

F(1,72) = 1.14, p > .28. Refer to Table 25 for means and standard deviations.

Table 25 Means and Standard Deviations for Average Fact Length in Self-Subject List

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        6.06   1.81
CMC                  Competitive         Complex       6.09   1.52
CMC                  Cooperative         Simple        5.50   0.98
CMC                  Cooperative         Complex       6.49   0.84
HCI                  Competitive         Simple        6.07   1.33
HCI                  Competitive         Complex       6.76   1.23
HCI                  Cooperative         Simple        6.14   1.56
HCI                  Cooperative         Complex       6.45   1.67


Overall Fact Vocabulary Simplicity Score. There were two main effects:

communication type, F(1, 71) = 7.83, p < .01, and prime style, F(1,71) = 4.67, p < .05.

CMC-participants adopted a vocabulary that was more advanced than that used by

HCI-participants; the vocabulary used by complex-prime participants to compile their

self-subject lists was more complex than that used by simple-prime participants. No other

main effect or interaction was significant: work relationship, F(1,71) = 0.78, p > .38;

communication type by work relationship, F(1,71) = 0.07, p > .79; communication

type by prime style, F(1,71) = 0.59, p > .44; work relationship by prime style, F(1,71)

= 0.65, p > .42; three-way interaction, F(1,72) = 0.22, p > .64. Refer to Table 26

for means and standard deviations.

Table 26 Means and Standard Deviations for Overall Fact Readability Score in Self-Subject List

Communication Type   Work Relationship   Prime Style   Mean    Standard Deviation
CMC                  Competitive         Simple        84.61   12.75
CMC                  Competitive         Complex       78.30   10.04
CMC                  Cooperative         Simple        87.29   9.38
CMC                  Cooperative         Complex       78.67   14.45
HCI                  Competitive         Simple        87.74   12.92
HCI                  Competitive         Complex       86.86   11.40
HCI                  Cooperative         Simple        95.51   5.20
HCI                  Cooperative         Complex       86.79   5.22

Use of critical noun in facts. An SPSS Logistic Regression analysis identified

two strong predictors for this measure. The full model containing all three predictors


was statistically significant, χ2(3, N = 80) = 13.374, p < .01, and correctly classified

83.8% of cases (Table 27). Prime style made a statistically significant contribution to

the model, recording an odds ratio of 8.123; work relationship also contributed to the

model with an odds ratio of 4.761 (Table 28). Complex-prime participants were more

likely than simple-prime participants to use the word “Zoonkaba” in their top-10 lists,

and competitive-condition participants showed the same tendency over cooperative-

condition participants.

Table 27 Observed and Predicted Frequencies for the Use of the Critical Noun “Zoonkaba” in Learned Facts with the Cutoff of 0.50

                        Predicted use of “Zoonkaba”
Observed                Once or none (0)   Twice or more (1)   % Correct
Once or none (0)                      67                   0       100.0
Twice or more (1)                     13                   0          .0
Overall % correct                                                    83.8

Table 28 Logistic Regression Analysis of the Use of “Zoonkaba” in Learned Facts by SPSS 19

Predictor                                          β        SE β    Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           -3.875   .987    15.425      1    .000   .021
Communication type (1 = HCI, 0 = CMC)              -.222    .668    .111        1    .739   .801
Relationship (1 = competitive, 0 = cooperative)    1.560    .737    4.484       1    .034   4.761
Prime type (1 = complex, 0 = simple)               2.095    .829    6.384       1    .012   8.123

Use of proper capitalization in facts. An SPSS Logistic Regression analysis of the impact of the three independent variables on the use of proper capitalization in facts indicated that none of the factors was a reliable predictor of the use of proper capitalization in the self-subject list (see Table 29). For this measure, the priming effect observed in the Q&A session seemed to have worn off even though the modality (i.e., keyboard entry) did not change. In other words, the priming of the use of proper capitalization was only effective during the interactive text chat.

Table 29 Logistic Regression Analysis of the Use of Proper Capitalization in Facts from the Self-Subject List

Predictor                                          β        SE β    Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           -0.034   .463    0.005       1    .941   0.966
Communication type (1 = HCI, 0 = CMC)              0.677    .481    1.981       1    .159   1.968
Relationship (1 = competitive, 0 = cooperative)    0.455    .479    0.899       1    .343   1.575
Prime type (1 = complex, 0 = simple)               0.228    .478    0.228       1    .633   1.256


9.6.2.2 The Other-Learner-Subject List

In the task instructions, participants were not specifically told that they would

be quizzed on the “Nosua” subject, so measures from this list could potentially

indicate the degree to which participants paid attention to questions from the “other

learner.” For this second top-10 list, eight participants (10% of the sample) did not produce a

single fact. The average was 5.29 facts per participant; there were also many

“fragmented” sentences such as “is matriarchal” and “eat meat.”

Number of facts. There was a highly significant main effect of work

relationship, F(1, 72) = 14.96, p < .001. Participants who were told to cooperate with

the “other learner” were able to list significantly more facts about “Nosua” (M = 6.55,

SD = 3.04) than those who were instructed to compete against the “other learner” (M =

4.02, SD = 2.81). This suggests that cooperative-condition participants were more

attentive than their competitive-condition counterparts to what the “other learner” generated. No other main effect or interaction was found: communication type, F(1,72) =

2.01, p > .16; prime style, F(1,72) = 2.47, p > .12; communication type by work

relationship, F(1,72) = 0.12, p > .73; communication type by prime style, F(1,72) =

0.12, p > .73; work relationship by prime style, F(1,72) = 0.12, p > .73; three-way

interaction, F(1,72) = 1.41, p > .23. Refer to Table 30 for means and standard

deviations.


Table 30 Means and Standard Deviations for Number of Facts in Other-Learner-Subject List

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        5.50   3.21
CMC                  Competitive         Complex       3.70   2.00
CMC                  Cooperative         Simple        6.80   3.36
CMC                  Cooperative         Complex       7.00   2.36
HCI                  Competitive         Simple        3.80   2.57
HCI                  Competitive         Complex       3.10   3.11
HCI                  Cooperative         Simple        7.10   2.77
HCI                  Cooperative         Complex       5.30   3.62

Average length of facts. For the 72 participants who compiled this second list,

there was a main effect of prime style on average length of facts, F(1, 64) = 14.06, p

< .001. Simple-prime participants used significantly fewer words in their listed facts

(M = 5.35, SD = 1.19) than did complex-prime participants (M = 6.59, SD = 1.64). No

other main effect or interaction was found: communication type, F(1,64) = 0.14, p > .71;

work relationship, F(1,64) = 1.48, p > .22; communication type by work relationship,

F(1,64) = 2.69, p > .10; communication type by prime style, F(1,64) = 1.19, p > .28;

work relationship by prime style, F(1,64) = 0.07, p > .78; three-way interaction,

F(1,64) = 3.60, p > .06.

Refer to Table 31 for means and standard deviations.


Table 31 Means and Standard Deviations for Average Length of Facts in Other-Learner-Subject List

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation   N
CMC                  Competitive         Simple        5.53   1.01                 9
CMC                  Competitive         Complex       6.42   1.98                 10
CMC                  Cooperative         Simple        4.96   1.10                 9
CMC                  Cooperative         Complex       7.28   1.29                 10
HCI                  Competitive         Simple        5.69   1.30                 8
HCI                  Competitive         Complex       7.11   1.48                 7
HCI                  Cooperative         Simple        5.28   1.37                 10
HCI                  Cooperative         Complex       5.62   1.37                 9

Overall fact vocabulary simplicity score. There was a main effect of prime

style, F(1, 63) = 11.91, p < .01. Simple-prime participants used a significantly more

basic vocabulary than did complex-prime participants. No other main effect or

interaction was found: communication type, F(1,63) = 0.07, p > .79; work relationship,

F(1,63) = 1.22, p > .27; communication type by work relationship, F(1,63) = 0.11,

p > .73; communication type by prime style, F(1,63) = 0.22, p > .64; work relationship

by prime style, F(1,63) = 0.59, p > .44; three-way interaction, F(1,63) = 1.62, p > .20.

Refer to Table 32 for means and standard deviations.


Table 32 Means and Standard Deviations for Overall Fact Vocabulary Simplicity Score in Other-Learner-Subject List

Communication Type   Work Relationship   Prime Style   Mean    Standard Deviation   N
CMC                  Competitive         Simple        74.70   7.98                 9
CMC                  Competitive         Complex       62.17   15.74                10
CMC                  Cooperative         Simple        77.06   18.43                9
CMC                  Cooperative         Complex       70.47   8.38                 10
HCI                  Competitive         Simple        72.19   14.16                8
HCI                  Competitive         Complex       67.73   11.48                7
HCI                  Cooperative         Simple        81.59   19.04                10
HCI                  Cooperative         Complex       59.73   16.63                9

Use of critical noun. An SPSS Logistic Regression analysis indicated that none

of the factors was a reliable predictor for the use of critical noun “Nosua” in this

second list of facts (see Table 33).

Table 33 Logistic Regression Analysis of the Use of the Critical Noun “Nosua” in Facts from the Other-Learner-Subject List

Predictor                                          β         SE β        Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           -22.843   6217.218    0.000       1    .997   0.000
Communication type (1 = HCI, 0 = CMC)              1.938     1.194       2.635       1    .105   6.942
Relationship (1 = competitive, 0 = cooperative)    19.573    6217.218    0.000       1    .997   316534122.042
Prime type (1 = complex, 0 = simple)               0.637     1.040       0.375       1    .540   1.890

Use of proper capitalization. An SPSS Logistic Regression analysis suggested

that none of the three factors was a reliable predictor for the use of proper

capitalization in the other-learner-subject lists (see Table 34). It is reasonable to

believe that the priming effect on capitalization was time-sensitive and did not last

beyond the interactive Q&A session.

Table 34 Logistic Regression Analysis of Use of Proper Capitalization in Facts from Other-Learner-Subject List

Predictor                                          β       SE β    Wald’s χ2   df   p      eβ (odds ratio)
Constant                                           0.112   .489    0.053       1    .819   1.119
Communication type (1 = HCI, 0 = CMC)              0.491   .508    0.935       1    .333   1.634
Relationship (1 = competitive, 0 = cooperative)    0.483   .507    0.908       1    .341   1.621
Prime type (1 = complex, 0 = simple)               0.153   .502    0.093       1    .761   1.165

9.6.3 Post-Task Attitudinal Measures

Positivity of learning. Unlike what was found in Study I, there was no main

effect of prime style on this index, F(1, 72) = 0.56, p > .45. The other two factors had


no significant main effect either: communication type, F(1, 72) = 0.51, p > .47; work

relationship, F(1, 72) = 1.91, p > .17. But an interaction of communication type and

work relationship was significant, F(1, 72) = 7.19, p < .01. Post hoc comparisons

suggest that HCI-participants who were instructed to compete against “the other

learner” rated their learning experience as significantly more negative than did those

who were told to cooperate with “the other learner,” t(38) = -3.05, p < .01. No other

interaction was significant: communication type by prime style, F(1,72) = 2.02,

p > .15; work relationship by prime style, F(1,72) = 0.05, p > .82; three-way

interaction, F(1,72) = 0.64, p > .42. Refer to Table 35 for means and standard

deviations.

Table 35 Means and Standard Deviations for Positivity of Learning in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        4.85   1.56
CMC                  Competitive         Complex       4.88   2.12
CMC                  Cooperative         Simple        4.25   1.35
CMC                  Cooperative         Complex       4.65   0.77
HCI                  Competitive         Simple        4.42   1.47
HCI                  Competitive         Complex       4.05   1.16
HCI                  Cooperative         Simple        6.05   1.24
HCI                  Cooperative         Complex       5.03   1.42

Easiness of learning. There was a main effect of work relationship, F(1,72) =

4.73, p < .05. Competitive-condition participants rated the Q&A session as easier than

cooperative-condition participants did. There was no other main effect or interaction


effect on this index: communication type, F(1,72) = 0.03, p > .86; prime style, F(1,72)

= 1.92, p > .17; communication type by work relationship, F(1,72) = 0.00, p > .98;

communication type by prime style, F(1,72) = 0.00, p > .96; work relationship by

prime style, F(1,72) = 0.00, p > .98; three-way interaction, F(1,72) = 0.35, p > .55.

Table 36 shows the means and standard deviations.

Table 36 Means and Standard Deviations for Easiness of Learning in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        5.43   2.22
CMC                  Competitive         Complex       4.60   2.09
CMC                  Cooperative         Simple        4.30   2.08
CMC                  Cooperative         Complex       3.95   1.85
HCI                  Competitive         Simple        5.10   2.20
HCI                  Competitive         Complex       4.80   1.38
HCI                  Cooperative         Simple        4.45   1.12
HCI                  Cooperative         Complex       3.65   1.42

Teaching agent competency. There was no main effect or interaction effect on

this index: communication type, F(1,72) = 1.25, p > .26; work relationship, F(1,72) =

0.45, p > .50; prime style, F(1,72) = 3.49, p > .06; communication type by work

relationship, F(1,72) = 2.73, p > .10; communication type by prime style, F(1,72) =

0.00, p > .96; work relationship by prime style, F(1,72) = 0.56, p > .45; three-way

interaction, F(1,72) = 0.23, p > .63. Table 37 shows the means and standard deviations

of this index.


Table 37 Means and Standard Deviations for Teaching Agent Competency in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        6.70   2.18
CMC                  Competitive         Complex       5.94   1.74
CMC                  Cooperative         Simple        5.76   0.48
CMC                  Cooperative         Complex       5.20   2.21
HCI                  Competitive         Simple        5.89   1.45
HCI                  Competitive         Complex       4.75   1.77
HCI                  Cooperative         Simple        5.80   1.42
HCI                  Cooperative         Complex       5.55   0.93

Other learner ability. There was a main effect of prime style on other learner

ability, F(1,72) = 14.75, p < .001. Consistent with what was found in Study I,

complex-prime participants were more likely to think highly of “the other learner” (M

= 7.03, SD = 1.18) than simple-prime participants were (M = 5.90, SD = 1.27). No

other main effect or interaction was found: communication type, F(1,72) = 0.37,

p > .54; work relationship, F(1,72) = 0.02, p > .88; communication type by work

relationship, F(1,72) = 0.96, p > .33; communication type by prime style, F(1,72) =

0.25, p > .61; work relationship by prime style, F(1,72) = 0.01, p > .94; three-way

interaction, F(1,72) = 0.57, p > .45.

See Table 38 for means and standard deviations.


Table 38 Means and Standard Deviations for Other Learner Ability in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        6.01   1.78
CMC                  Competitive         Complex       7.08   1.37
CMC                  Cooperative         Simple        5.48   1.44
CMC                  Cooperative         Complex       6.95   1.04
HCI                  Competitive         Simple        5.83   1.40
HCI                  Competitive         Complex       7.05   1.26
HCI                  Cooperative         Simple        6.31   0.77
HCI                  Cooperative         Complex       7.05   1.21

Self ability. There was no main effect or interaction: communication type,

F(1,72) = 0.38, p > .53; work relationship, F(1,72) = 0.38, p > .54; prime style, F(1,72)

= 0.01, p > .92; communication type by work relationship, F(1,72) = 0.47, p > .49;

communication type by prime style, F(1,72) = 0.07, p > .79; work relationship by

prime style, F(1,72) = 1.75, p > .19; three-way interaction, F(1,72) = 0.24, p > .62. See

Table 39 for means and standard deviations. The main effect of prime style and the

interaction of prime style and communication type found in Study I were not repeated.


Table 39 Means and Standard Deviations for Perceived Self Ability in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        5.19   2.01
CMC                  Competitive         Complex       5.43   1.97
CMC                  Cooperative         Simple        5.03   1.43
CMC                  Cooperative         Complex       4.67   1.46
HCI                  Competitive         Simple        4.90   1.99
HCI                  Competitive         Complex       5.67   1.11
HCI                  Cooperative         Simple        5.57   1.38
HCI                  Cooperative         Complex       5.05   1.02

Vocabulary similarity. There was a main effect of prime style, F(1,72) = 48.51,

p < .001. Simple-prime participants were more likely than complex-prime participants

to think that their word choices were similar to those of “the other learner.” No other

main effect or interaction was significant: communication type, F(1,72) = 0.70, p > .40;

work relationship, F(1,72) = 1.36, p > .24; communication type by work relationship,

F(1,72) = 2.06, p > .15; communication type by prime style, F(1,72) = 0.02, p > .89;

work relationship by prime style, F(1,72) = 0.21, p > .64; three-way interaction,

F(1,72) = 1.20, p > .27.

See Table 40 for means and standard deviations.


Table 40 Means and Standard Deviations for Perceived Vocabulary Similarity in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        4.60   2.41
CMC                  Competitive         Complex       2.38   0.81
CMC                  Cooperative         Simple        5.33   2.11
CMC                  Cooperative         Complex       1.86   0.56
HCI                  Competitive         Simple        5.90   1.97
HCI                  Competitive         Complex       2.90   2.28
HCI                  Cooperative         Simple        4.60   2.07
HCI                  Cooperative         Complex       2.11   1.10

Syntax similarity. There was a main effect of prime style on this measure,

F(1,72) = 16.16, p < .001. Simple-prime participants were more likely than complex-

prime participants to perceive syntax similarity between them and the “other learner.”

No other

main effect or interaction was significant: communication type, F(1,72) = 2.27, p > .13;

work relationship, F(1,72) = 0.00, p > .97; communication type by work relationship,

F(1,72) = 0.02, p > .89; communication type by prime style, F(1,72) = 0.03, p > .87;

work relationship by prime style, F(1,72) = 0.32, p > .57; three-way interaction,

F(1,72) = 1.84, p > .17.

See Table 41 for means and standard deviations.


Table 41 Means and Standard Deviations for Perceived Syntax Similarity in Study II

Communication Type   Work Relationship   Prime Style   Mean   Standard Deviation
CMC                  Competitive         Simple        4.30   2.45
CMC                  Competitive         Complex       3.44   1.64
CMC                  Cooperative         Simple        5.22   2.10
CMC                  Cooperative         Complex       2.67   0.77
HCI                  Competitive         Simple        5.70   0.82
HCI                  Competitive         Complex       3.50   2.72
HCI                  Cooperative         Simple        5.30   2.00
HCI                  Cooperative         Complex       3.80   2.35

An SPSS bivariate correlations analysis shows a significant positive correlation

between vocabulary similarity and syntax similarity, r(78) = .66, p < .01.
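The same statistic can be reproduced outside SPSS with a standard Pearson correlation, as in this minimal sketch (the ratings are hypothetical placeholders, not the study’s data):

    from scipy.stats import pearsonr

    vocab_sim = [5, 3, 7, 2, 6, 4, 8, 3]
    syntax_sim = [4, 3, 6, 2, 7, 5, 7, 2]

    r, p = pearsonr(vocab_sim, syntax_sim)
    print(f"r({len(vocab_sim) - 2}) = {r:.2f}, p = {p:.3f}")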

9.7 Discussion

Results from Study II further confirmed the potency of priming on linguistic

alignment. Most importantly, the introduction of a second learning subject and the fact

that participants did not share their subject with the “other learner” had almost no

impact on the priming effect observed. The main effect of prime style was found on all

behavioral measures from the Q&A session. This main effect weakened once the

interactive chat session ended. It had no impact on average fact length or

proper use of capitalization for the self-subject list. For the other-learner-subject list,

the main effect of prime style was not found on the use of critical noun and proper use

of capitalization. Taken together, it appears that priming had a relatively lasting effect


on vocabulary simplicity scores, plus an inconsistent impact on average sentence

length and the use of critical nouns. The priming effect on the use of proper capitalization was short-lived, but this is understandable: After all, the omission of most or all capitalization is a well-known phenomenon in text-based CMC. This is especially true for synchronous communication such as IM, but may also extend to asynchronous domains such as e-mail.

The main effect of prime type on the use of the critical noun “Zoonkaba” was

particularly interesting. In Study II, the “Zoonkaba” subject was solely owned by

participants. On a semantic level, whether or not the “other learner” used the word

“Nosua” should have no impact on participants. Nonetheless, the use of “Nosua” by

the “other learner” turned out to be a strong predictor of the use of “Zoonkaba” by

participants. It is clear that this dimension of linguistic style can be primed together

with sentence length and vocabulary complexity. At the same time, the use of “Nosua”

did not have an impact on participants’ use of “Nosua” in their other-learner-subject

list. There are at least two possible explanations that are worth exploring further. First, because participants compiled the other-learner-subject list after finishing the self-subject list, the priming effect on the use of the critical noun might have dissipated to an undetectable level. Second, participants were not told upfront that they needed to compile the other-learner-subject list. The large number of fragmented entries could have contributed to the non-use of the critical noun “Nosua.”


Prime style also had some impact on post-task attitudinal measures. This was not surprising because the two prime styles used in the two studies were relatively extreme. While the casual, student-athlete-sounding simple primes might be informal but still acceptable to participants, the complex primes, filled with GRE vocabulary, were at the very least unnatural for an IM chat setting. Results from this part were not fully in line with the results of Study I, however. It is possible that distractions associated with working on one’s own in the noisy library study area dampened some of the main effect of prime style.

Effects of work relationship, or competition and cooperation, were not fully in

line with results from Study I. For example, in Study II competitive-condition

participants displayed a higher degree of alignment in terms of question vocabulary

simplicity score than their cooperative-condition counterparts, and this was largely

driven by those who were exposed to complex primes. This might be explained by the fact that the “other learner’s” ability was rated higher by participants when complex primes were used. In Study I, stronger alignment for the same measure was observed among cooperative-condition participants who were exposed to simple primes. The change in task interdependency brought about by the addition of the other-learner subject could have contributed to this difference between Studies I and II.

Finally, communication type, or CMC and HCI, also had some mixed effects

on a few measures. For example, during the Q&A session, CMC participants asked

shorter questions than those in the HCI condition, but this was not observed in Study I.

While compiling their self-subject lists, CMC participants were more likely than HCI participants to use a more advanced vocabulary. Clearly, more studies are needed

before any solid conclusion could be drawn.


CHAPTER 10

General Discussion and Conclusion

The primary goal of my research was to experimentally examine and prove the

co-existence of the automatic component and the intentional component of linguistic

alignment and see how they might interact with each other to shape adaptive language

behaviors. For the automatic component, I used stylistic priming in the context of side-

participation in polylogue; for the intentional component, I focused on two closely

related social contexts—competition and cooperation, to see if social factors could

directly contribute to alignment and how they help moderate alignment due to priming.

The two dissertation studies also had a third dimension: HCI vs. CMC. On one

hand, future development of computing technologies needs guidelines and

recommendations generated in domains such as social psychology, communication,

and linguistics. On the other hand, methodologies used in HCI and CMC research

could provide social science researchers with new ways to construct studies with more realistic contexts and tasks, and to do so more efficiently.

10.1 Summary of Findings

Study I involved a shared learning subject; hence, some degree of coordination was needed and appropriate, independent of competition and cooperation. This first experiment was designed to explore 1) whether or not stylistic alignment could be observed in interlocutors exposed to priming as side-participants in a polylogue, 2) whether or not competition and cooperation could contribute to or help


shape stylistic alignment, and 3) whether or not the aforementioned alignment

behaviors, if any, would be manifested differently in HCI and CMC.

Modeled after the 20 Questions game with an IM twist, Study I generated a

large amount of behavioral and attitudinal data. The findings showed that stylistic

priming could lead to alignment behaviors in many dimensions of linguistic style, and

both competition/cooperation and HCI/CMC appeared to have some impact on certain

important behavioral measures. Results from Study I not only added empirical evidence to linguistic research on stylistic priming and side-participation in polylogue, but also identified stylistic features (e.g., vocabulary complexity) that are prone to the influence of social contexts. Furthermore, there was evidence showing that the priming effect could be long-lasting and could cross modalities.

Study II was largely built on top of Study I but added a second learning subject to be used by participants; thus, much of the content-related “common ground” between participants and the “other learner” was removed. Still, the effectiveness of priming was found on all behavioral measures taken from the interactive Q&A session, and on most other behavioral measures from a short-term memory test. There were a few interesting findings: 1) effects of cooperation/competition not observed during the polylogue surfaced in the short-term memory test, 2) form and style could be primed independently of content, and 3) some IM-specific stylistic features could be primed

during interaction but would not last. Effects of work relationship and communication

type were not fully in line with those from Study I. See Table 42 for side-by-side

comparisons of all major results.


The two experiments described above proved that, like addressees in dyadic

groups, a person who alternately takes on the roles of side-participant and speaker in a

triadic group can be linguistically primed by and consequently align with another

person with similar roles. In keyboard-based conversations, such alignment was

observed in terms of linguistic style features including sentence length, lexical

complexity, the use of critical nouns in referring expressions, and the use of proper

capitalization. Alignment resulting from priming could be subject to the nature of the work relationship and the communication type; some priming effects last longer than others and cross modalities; and some effects of social factors may manifest only after a delay.

Table 42 Comparison of key results from Studies I and II

                                                     Prime Style   Work Relation.   Comm. Type    Prime*Work    Prime*Comm.   Work*Comm.
Measure                                               I      II     I      II        I      II     I      II     I      II     I      II
Question length                                      ***    ***    n/s    n/s       n/s    *      n/s    n/s    n/s    n/s    n/s    n/s
Question vocab. simplicity                           ***    ***    *      n/s       n/s    n/s    **     *      n/s    n/s    n/s    n/s
Critical noun in questions                           ***    ***    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Capitalization in questions                          ***    **     n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Fact length for self subject                         ***    n/s    n/s    n/s       n/s    n/s    n/s    n/s    *      n/s    n/s    n/s
Fact vocab. simplicity for self subject              ***    *      n/s    n/s       n/s    **     n/s    n/s    n/s    n/s    n/s    n/s
Critical noun in facts for self subject              **     *      n/s    *         n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Capitalization in facts for self subject             n/a    n/s    n/a    n/s       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
# of facts for other-learner subject                 n/a    n/s    n/a    ***       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
Fact length for other-learner subject                n/a    **     n/a    n/s       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
Fact vocab. simplicity for other-learner subject     n/a    **     n/a    n/s       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
Critical noun in facts for other-learner subject     n/a    n/s    n/a    n/s       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
Capitalization in facts for other-learner subject    n/a    n/s    n/a    n/s       n/a    n/s    n/a    n/s    n/a    n/s    n/a    n/s
Positivity of learning                               **     n/s    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    **
Easiness of learning                                 n/s    n/s    *      *         n/s    n/s    *      n/s    **     n/s    n/s    n/s
Teaching agent competency                            *      n/s    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Other learner ability                                *      ***    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Self ability                                         *      n/s    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    n/s    n/s
Vocabulary similarity                                ***    ***    n/s    n/s       *      n/s    n/s    n/s    n/s    n/s    n/s    n/s
Syntax similarity                                    n/s    ***    n/s    n/s       n/s    n/s    n/s    n/s    n/s    n/s    *      n/s

* significant at the .05 level
** significant at the .01 level
*** significant at the .001 level
n/a = not applicable
n/s = not significant

10.2 Contributions

Contributions of this dissertation could be summarized from three major

perspectives.

10.2.1 Theoretical Contributions

Theoretically, this dissertation helps advance our understanding of linguistic

alignment. First, I extended Branigan et al.’s (2007) findings from syntactic alignment


to stylistic alignment in polylogues generated in a relatively more natural and realistic

task. Second, I found proof that linguistic alignment could be subject to the influence

of social factors in both CMC and HCI. These two contributions add more, and even stronger, evidence in support of the dominance of the automatic component in linguistic alignment (and alignment behavior in general), and demonstrate the co-existence of, and interaction between, the automatic and intentional processes in linguistic alignment.

Recognizing both the automatic and the intentional components of linguistic

alignment, I propose a new way to look at alignment with a holistic view. From word

choice to discourse style, there exist many levels of alignment. Lower-level surface

features such as speech rate and syntax in general do not bear much social meaning

other than their mere existence or semantic function; higher-level features such as

accent and discourse formality, however, often carry certain connotations (e.g., socio-

economic status). Considering all evidence from previous studies and my dissertation

findings, it can be reasoned that alignment of lower-level surface features is largely

determined by automatic (or mechanistic) processes such as priming, while alignment

of higher-level features may be subject to social factors such as work relationship and

socio-economic status. Still, there may be certain features that are subject to the

influence of both the automatic and the intentional processes of linguistic alignment.

Additionally, considering my proposal of the “automatic social responses”

(Section 5.1.2), I believe that “audience design” could at times be considered an

automatic social response. In the triad setup used for my dissertation studies, it is


highly possible that the “automatic audience design” process was activated as soon as

participants side-participated in the dialog between the “other learner” (as speaker)

and the “teaching agent” (as addressee). The mere presence of the addressee could be used as a basis for this “automatic audience design.” In other words, I propose that the audience design tendency is so persistent and strong that we automatically take into consideration what our addressee hears from other speakers even before the addressee utters a single word.

10.2.2 Methodological Contributions

For this dissertation, I developed an analytical framework to examine linguistic

alignment in terms of linguistic style used in text-based synchronous communication.

Stylistic dimensions including average sentence length, vocabulary complexity, the use of critical noun(s), and the use of proper capitalization are essential for determining the style of text-based discourse. Using the Flesch Reading Ease score that comes with Microsoft Word to analyze discourse and vocabulary complexity is cost-efficient. The task modeled after the 20 Questions game is far more realistic and interesting (or at least less boring) than the “language games” reviewed in Chapters 2 and 3. The short-term memory test provides additional behavioral measures, and attitudinal measures taken from questionnaire responses can help researchers better understand and interpret the behavioral measures.

10.2.3 Contributions to Research in HCI, CMC, and Social Psychology

Results from the two dissertation studies add new evidence to demonstrate that

humans apply social rules while interacting with computers: HCI participants in


general did not differ from CMC participants in terms of their linguistic alignment

behaviors throughout the two experiments. However, there were also some subtleties

related to their longer-term impact: When Study I participants compiled their top-15

lists, although both CMC and HCI participants exhibited alignment behavior as

measured by average fact length, CMC participants showed a higher degree of

alignment than did HCI participants. Also, as shown in Study II’s self-subject learned

fact lists, CMC participants were more likely than HCI participants to use words from

a more advanced vocabulary. These last two results were by no means conclusive, but

they suggest that there is more to learn about both HCI and CMC.

This dissertation also contributes to the research of cooperation and

competition. The ostensible learning task and materials developed for the two

dissertation studies could be re-used by other researchers from different domains.

When one learning subject was shared in Study I, the task was high on

interdependence between the participant and the “other learner”; when a second

learning subject was introduced in Study II, the interdependence decreased as

independence received a boost. Behavioral and attitudinal differences caused by this change in interdependence and independence could help other researchers with their theorization and their determination of future research directions. For example, in Study I, during the Q&A session, cooperative-condition participants displayed a significantly higher degree of alignment in terms of vocabulary simplicity. In Study II,

where task interdependence was low, cooperative-condition participants paid

significantly more attention than their competitive-condition counterparts to what the

“other learner” did: They were able to compile longer lists of facts about a subject that was not assigned to them. On the other hand, when a behavior (e.g., using complex sentence structures and words) was perceived as signaling high ability, competitive-condition participants displayed a higher degree of alignment: They were more likely than cooperative-condition participants to use the critical noun “Zoonkaba” in their short-

term memory test. The rich data I collected could be further analyzed to explore the

effects of cooperation and competition on language use especially in the HCI and

CMC contexts.

10.3 Limitations and Future Work

Several limitations within these two dissertation studies could qualify the

interpretation and generalization of the findings.

First, the venues where the two studies took place were drastically different.

Study I used a small and quiet lab that allowed only one participant at a time; Study II took place in a large study area in a library. The latter venue clearly had considerably more distractions and even interruptions for the participants. Although behavioral measures taken from the two studies were mostly consistent, there were a few inconsistent effects on some behavioral and attitudinal measures that could be at least partially attributable to the noise and the additional freedom associated with the open environment. The tradeoff of having a natural user environment was loosened control over some environmental factors.

Second, only self-guided text instructions were offered in Study II. This limitation, coupled with the relatively noisy library environment, made it harder to ensure that each participant read and understood the details on the instruction page. In fact, a few participants started their sessions with the wrong subject and

those questions had to be excluded. Most importantly, the manipulation of cooperation

and competition could have failed for those who did not read the instruction page

carefully. The success of the manipulation may affect how participants perceive themselves in terms of cooperation or competition (Grossack, 1954) and, subsequently, how much they vary their degree of alignment with the “other learner.” An audio

instruction embedded in the webpage could have worked better.

Third, the automation and simulation of the chat room Q&A session was

beneficial in many ways. However, randomized answers from the “teaching agent” led a couple of participants to test the “teaching agent” by asking the same questions repeatedly. Those who became suspicious could have contaminated the data to some degree. If the same methodology is used again in the future, I would suggest adding a line of instruction that reads “The teaching agent has a recognition error rate of about xx%,” or a similar message to this effect.

For future research work, one potential social factor to be examined for effects

on alignment would be in- vs. out-group identity. As discussed in Chapter 5, the desire

to gain social affect could lead to alignment behaviors. Seeking, obtaining, and

maintaining in-group status requires one to purposefully modify one’s behavior or adapt to the norms of the desired group. Such modification or adaptation could include or influence language behaviors. A handy way of operationalizing this factor would be identification with Stanford vs. with Berkeley. Between students from these two universities, Berkeley students appear to be more invested in the rivalry than Stanford students, and hence should be considered as recruiting targets.

Also, cultural differences could help shape the understanding and execution of

competition and cooperation very differently. One possible pair would be Asian

Americans vs. White Americans. Alternatively, these two studies could be replicated

in another country or culture. It is highly possible that competition and cooperation

will show a different pattern in terms of their effects on moderating linguistic

alignment behaviors.

Finally, the large amount of behavioral data from these two dissertation studies

could be further examined for alignment at many other levels. For example, although participants did not share the learning subject with the “other learner” in Study II, many content-related words could still cause semantic priming: When the “other learner” asked about the use of horses, how likely were participants to ask whether or not “Zoonkaba” involved the killing of animals; when the “other learner” asked about clothing, how likely were participants to raise a question about team uniforms?

10.4 Conclusion

Linguistic alignment is a fascinating yet complex phenomenon to study.

Many psychologists and linguists have conducted experiments focusing on the priming effect and examined alignment at many different linguistic levels; other researchers have chosen to emphasize the social aspect of language. The two approaches have rarely been taken simultaneously to explore why and how linguistic alignment can be both automatic and intentional.


Based on experimental findings, this dissertation demonstrates the co-existence and interaction of the automatic component and the intentional component within linguistic alignment. It further shows that the priming of certain linguistic features can have a lasting effect, while the priming effect of others can be short-lived. Moreover, the effect of social factors on the alignment of certain features can be strong and immediate, while at other levels it can be subdued or delayed. Taken together, this dissertation proposes a holistic view for examining and understanding linguistic alignment. More experimental studies are needed before more solid and conclusive theorizations can be developed, especially regarding social factors as predictors of alignment.


Appendix I: Canned Questions and Answers Used by the “Other Learner” in Studies I & II

Simple Primes | Complex Primes | Answer

Category I

are there a dad, a mom, and kids in a family? | Do the Mosua typically exist in conventional-type families, consisting of a male parent, a female parent, and multiple children? | N

do children live with their mom? | Do the offspring sojourn in the same residence as their mother? | Y

do the dads live with the family? | Do the fatherly figureheads reside in the household? | N

does a woman keep her last name after she’s hitched? | After conjoining with her spouse, does a woman retain her original maiden name? | Y

do kids use their mother’s last name? | Do offspring of a Mosua woman bear her surname? | Y

are children taken care of by the mothers? | Do Mosua mothers assume all responsibilities in child rearing? | Y

are baby girls liked better? | Are female offspring deemed more desirable than male offspring? | Y

can men choose their lovers freely? | Are Mosua men afforded the freedom to adjudicate with whom they will copulate? | Y

* can men get divorces? | * If a Mosua man wishes to terminate a relationship with a woman, does he have to go through the typical divorce process? | U

* can men own land? | * Are male family members permitted to own and inherit land and properties? | N

* are there several generations in a family? | * Does a typical Mosua house consist of multitudinous generations of familial members? | Y

* after a couple is married, does the wife move in a with her husband? | * After a couple is officially bound by marriage, does the wife typically depart from her residence and relocate to the residence of her hushand? | U

* are the old people in charge? | * Do the elders of the society function as the principle taskmasters? | U

* are there more than 10 people in a family? | * Is it a representative characteristic of Mosua family-units to exceed ten family members? | U

Category II

do they worship ancestors? | Is ancestral veneration an integral element of the Mosua culture? | Y

do they use money? | Does the Mosua society have an established system of currency exchange? | Y

do they use horses? | Do the Mosua employ equines for labor purposes and as a means of transportation? | Y

do they eat meat? | Are the Mosua a carnivorous populous of humanity? | Y

* do they fish? | * Does fishery contribute to their foodstuff supply? | N

* do guys and gals wear the same kinds of clothes? | * Does a notably disparity exist between the orthodox male and female attire of the Mosua people? | N

* do they wear normal clothes?** | * Is the clothing attire of the Mosua typical of the mainstream American citizen? | N

* Study I: Used as backups when a participant asked one or more questions similar to the ones without a “*”.

** Study II used all questions but this one.
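For readers who want to see how a table like this could drive the scripted “other learner,” here is a minimal sketch assuming each row is stored as a (simple prime, complex prime, answer, backup flag) record and the experimental condition selects the wording; the data structure and selection logic are illustrative assumptions, not the actual script used in the studies.

```python
from dataclasses import dataclass


@dataclass
class CannedItem:
    simple: str    # simple prime wording
    complex_: str  # complex prime wording (named complex_ to avoid shadowing the built-in)
    answer: str    # "Y", "N", or "U"
    backup: bool   # starred rows, used only as backups in Study I


# Two illustrative rows transcribed from the table above.
ITEMS = [
    CannedItem("do they worship ancestors?",
               "Is ancestral veneration an integral element of the Mosua culture?",
               "Y", backup=False),
    CannedItem("* can men own land?",
               "* Are male family members permitted to own and inherit land and properties?",
               "N", backup=True),
]


def prime_wording(item: CannedItem, condition: str) -> str:
    """Return the wording the scripted 'other learner' would post, given the
    prime condition ('simple' or 'complex')."""
    return item.simple if condition == "simple" else item.complex_


if __name__ == "__main__":
    for item in ITEMS:
        if not item.backup:  # backup rows replace primary ones only when needed
            print(prime_wording(item, "complex"))
```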


Appendix II: Questionnaire Used in Study I (HCI Condition)

For this questionnaire, we would like you to think about your experiences as a

participant in this study.

Please answer every question, even if you feel you cannot say very much.

There are no wrong answers; we are interested in your opinions.

Answer the questions in the order that they appear.

This questionnaire is completely anonymous. The experimenter will only identify you

with a number.

Some of the following questions have a rating scale below them, such as:

What is the weather like today?

Very Cloudy • • • • • • • • • • Very Sunny

For these questions, please circle the point on the scale corresponding to your

judgement. If, for example, you thought it was a very cloudy day, you would circle the

first point. On the other hand, if you thought that it was a quite sunny day, you might

circle the seventh or eighth dot.


1. Please circle the dot that best describes your feeling about the learning experience.

enjoyable • • • • • • • • • • not enjoyable

rewarding • • • • • • • • • • not rewarding

boring • • • • • • • • • • fun

difficult • • • • • • • • • • easy

useful • • • • • • • • • • useless

creative • • • • • • • • • • uncreative

simple • • • • • • • • • • complicated

pleasant • • • • • • • • • • unpleasant

unfriendly • • • • • • • • • • friendly

lonely • • • • • • • • • • not lonely

efficient • • • • • • • • • • inefficient


2. Please circle the dot that best describes how you felt about pedagogical_agent.

active • • • • • • • • • • passive

negative • • • • • • • • • • positive

bad • • • • • • • • • • good

nonaffiliative • • • • • • • • • • affiliative

pleasant • • • • • • • • • • unpleasant

powerful • • • • • • • • • • powerless

dominant • • • • • • • • • • submissive

unfriendly • • • • • • • • • • friendly

intelligent • • • • • • • • • • unintelligent

fair • • • • • • • • • • unfair

knowledgeable • • • • • • • • • • ignorant

accurate • • • • • • • • • • inaccurate

incompetent • • • • • • • • • • competent

reasonable • • • • • • • • • • unreasonable


3. Please circle the dot that best describes how you felt about yourself in the chat room.

tense • • • • • • • • • • relaxed

happy • • • • • • • • • • unhappy

drained • • • • • • • • • • invigorated

in control • • • • • • • • • • not in control

negative • • • • • • • • • • positive

powerless • • • • • • • • • • powerful

unintelligent • • • • • • • • • • intelligent

active • • • • • • • • • • passive

ignorant • • • • • • • • • • knowledgeable

dominant • • • • • • • • • • submissive

excited • • • • • • • • • • calm

successful • • • • • • • • • • unsuccessful

uncooperative • • • • • • • • • • cooperative

skilled • • • • • • • • • • unskilled

unfriendly • • • • • • • • • • friendly

fast • • • • • • • • • • slow

polite • • • • • • • • • • impolite

flexible • • • • • • • • • • inflexible

rigid • • • • • • • • • • not rigid


4. Please circle the dot that best describes how you felt about learning_agent.

tense • • • • • • • • • • relaxed

pleasant • • • • • • • • • • unpleasant

drained • • • • • • • • • • invigorated

in control • • • • • • • • • • not in control

negative • • • • • • • • • • positive

powerless • • • • • • • • • • powerful

unintelligent • • • • • • • • • • intelligent

active • • • • • • • • • • passive

ignorant • • • • • • • • • • knowledgeable

dominant • • • • • • • • • • submissive

excited • • • • • • • • • • calm

successful • • • • • • • • • • unsuccessful

uncooperative • • • • • • • • • • cooperative

skilled • • • • • • • • • • unskilled

unfriendly • • • • • • • • • • friendly

fast • • • • • • • • • • slow

polite • • • • • • • • • • impolite

flexible • • • • • • • • • • inflexible

rigid • • • • • • • • • • not rigid


How similar or different were your vocabulary and that of learning_agent?

Very Different • • • • • • • • • • Very Similar

How similar or different were your syntax (i.e., sentence structure) and learning_agent’s syntax?

Very Different • • • • • • • • • • Very Similar

Did you do anything special to come up with your questions?

_________________________________________

_________________________________________

_________________________________________

Imagine that someone asked you to describe what you did and what happened in the

study you just participated in. What would you tell them?

_________________________________________

_________________________________________

_________________________________________

Do you have any more comments about this study?


_________________________________________

_________________________________________

_________________________________________


If you had to assess learning_agent's overall ability, what would your rating be?

Low ability • • • • • • • • • • High ability


Age: _______

Gender: □ Male □ Female

Are you a native speaker of American English? □ Yes □ No


List of References

Bailenson, J. N., & Yee, N. (2005). Digital Chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. Psychological Science, 16, 814-819.

Bell, A. (1984). Language style as audience design. Language in Society, 13, 145-204.

Bell, L., Gustafson, J., & Heldner, M. (2003). Prosodic adaptation in human-computer interaction. Paper presented at the 15th ICPhS, Barcelona.

Bhatt, K., Argamon, S., & Evens, M. (2004). Hedged responses and expressions of affect in human/human and human/computer tutorial interactions. Paper presented at the COGSCI 2004, Chicago, IL.

Bosshardt, H.-G., Sappok, C., Knipschild, M., & Holscher, C. (1997). Spontaneous imitation of fundamental frequency and speech rate by nonstutterers and stutterers. Journal of Psycholinguistic Research, 26(5), 425-448.

Branigan, H. P., Pickering, M. J., & Cleland, A. A. (2000). Syntactic coordination in dialogue. Cognition, 75(2), B13-B25.

Branigan, H. P., Pickering, M. J., McLean, J. F., & Cleland, A. A. (2007). Syntactic alignment and participant role in dialogue. Cognition, 104(2), 163-197.

Branigan, H. P., Pickering, M. J., McLean, J. F., & Cleland, A. A. (in press). Participant role and syntactic alignment in dialogue. Cognition.

Branigan, H. P., Pickering, M. J., Pearson, J., & McLean, J. F. (2010). Linguistic alignment between people and computers. Journal of Pragmatics, 42(9), 2355-2368. doi: 10.1016/j.pragma.2009.12.012

Branigan, H. P., Pickering, M. J., Pearson, J., McLean, J. F., & Nass, C. (2003). Syntactic alignment between computers and people: The role of belief about mental states. Paper presented at the 25th Annual Meeting of the Cognitive Science Society, Boston, Massachusetts, USA.

Brennan, S. (1990). Conversation as direct manipulation: An iconoclastic view. In B. Laurel (Ed.), The art of human-computer interface design (pp. 393-404). Reading, MA: Addison-Wesley.

Brennan, S. (1991). Conversation with and through computers. User Modeling and User-Adapted Interaction, 1, 67-86.

Brennan, S. (1996). Lexical entrainment in spontaneous dialog. Paper presented at the ISSD-96, Philadelphia, PA.

Brennan, S. (1998). The grounding problem in conversations with and through computers. In S. R. Fussell & R. J. Kreuz (Eds.), Social and Cognitive Psychological Approaches to Interpersonal communication. Hillsdale, NJ: Lawrence Erlbaum.

Brennan, S., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 1482-1493.

Cappella, J. (1981). Mutual influence in expressive behavior: Adult-adult and infant-adult dyadic interaction. Psychological Bulletin, 89, 101-132.

Cappella, J., & Planalp, S. (1981). Talk and silence sequences in informal conversations: Interspeaker influence. Human Communication Research, 7(2), 117-132.

Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology, 76, 893-910.

Chartrand, T. L., Maddux, W. W., & Lakin, J. L. (2005). Beyond the perception-behavior link: The ubiquitous utility and motivational moderators of nonconscious mimicry. In R. Hassin, J. Uleman & J. A. Bargh (Eds.), Unintended Thought 2: The New Unconscious (pp. 334-361). New York: Oxford University Press.


Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.

Clark, H. H., & Brennan, S. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine & S. D. Teasley (Eds.), Perspectives on Socially Shared Cognition (pp. 127-149). Washington: APA Books.

Clark, H. H., & Carlson, T. B. (1982). Hearers and speech acts. Language, 58, 332-373.

Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13, 259-294.

Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22, 1-39.

Cleland, A. A., & Pickering, M. J. (2003). The use of lexical and syntactic information in language production: Evidence from the priming of noun-phrase structure. Journal of Memory and Language, 49, 214-230.

Coulston, R., Oviatt, S. L., & Darves, C. (2002). Amplitude convergence in children's conversational speech with animated personas. Paper presented at the 7th International Conference on Spoken Language Processing (ICSLP 2002), Denver, CO.

Dahlback, N., & Jonsson, A. (1989). Empirical studies of discourse representations for natural language interfaces. Paper presented at the 4th Conference of the European Chapter of the Association for Computational Linguistics, Manchester, England.

Darves, C., & Oviatt, S. L. (2002). Adaptation of users' spoken dialogue patterns in a conversational interface. Paper presented at the 7th International Conference on Spoken Language Processing (ICSLP 2002), Denver, CO.

Decety, J., Jackson, P. L., Sommerville, J. A., Chaminade, T., & Meltzoff, A. N. (2004). The neural bases of cooperation and competition: An fMRI investigation. NeuroImage, 23(2), 744-751. doi: DOI: 10.1016/j.neuroimage.2004.05.025

Deutsch, M. (1949). A theory of co-operation and competition. Human Relations, 2(2), 129-152.

Eckert, P. (2001). Style and social meaning. In P. Eckert & J. R. Rickford (Eds.), Style and sociolinguistic variation (pp. 119-126). New York: Cambridge University Press.

Erev, I., Bornstein, G., & Galili, R. (1993). Constructive intergroup competition as a solution to the free rider problem: A field experiment. Journal of Experimental Social Psychology, 29(6), 463-478. doi: 10.1006/jesp.1993.1021

Estival, D. (1985). Syntactic priming of the passive in English. Text, 5, 7-22.

Fais, L. (1996). Lexical accommodation in machine-mediated interactions. Paper presented at the 16th conference on Computational linguistics, Copenhagen, Denmark.

Ferrara, K., Brunner, H., & Whittemore, G. (1991). Interactive written discourse as an emergent register. Written Communication, 8, 8-34.

Flesch-Kincaid readability test. (2006). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/wiki/Flesch-Kincaid_Readability_Test.

Flesch, R. F. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221-233.

Forney, M. (2002, November 4). Minority report: The Mosuo, a small matrilineal tribe in central China, are preserving their tradition by exploiting them. Time Asia. Retrieved October 2, 2005, from http://www.time.com/time/asia/features/china_cul_rev/minorities.html

Garrod, S., & Anderson, A. (1987). Saying what you mean in dialogue: A study in conceptual and semantic co-ordination. Cognition, 27(2), 181-218.

Garrod, S., & Clark, A. (1993). The development of dialogue co-ordination skills in schoolchildren. Language and Cognitive Processes, 8, 101-126.

Garrod, S., & Doherty, G. (1994). Conversation, co-ordination and convention: An empirical investigation of how groups establish linguistic conventions. Cognition, 53, 181-215.


Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in Cognitive Sciences, 8(1), 8-11.

Giles, H., Coupland, J., & Coupland, N. (1991). Accommodation theory: Communication, context, and consequence. In H. Giles, J. Coupland & N. Coupland (Eds.), Contexts of Accommodation: Development in Applied Sociolinguistics. Cambridge: Cambridge University Press.

Giles, H., & Smith, P. M. (1979). Accommodation theory: Optimal levels of convergence. In H. Giles & R. N. St. Clair (Eds.), Language and social psychology. Baltimore, MD: University Park Press.

Giles, H., & Wiemann, J. M. (1987). Language, social comparison and power. In C. R. Berger & S. H. Chaffee (Eds.), The handbook of communication science (pp. 350-384). Newbury Park, CA: Sage.

Gonzales, A. L., Hancock, J. T., & Pennebaker, J. W. (2010). Language style matching as a predictor of social dynamics in small groups. Communication Research, 37(1), 3-19.

Good, M. D., Whiteside, J. A., Wixon, D. R., & Jones, S. J. (1984). Building a user-derived interface. Communication of the ACM, 27(10), 1032-1043.

Gries, S. T. (2005). Syntactic priming: A corpus-based approach. Journal of Psycholinguistic Research, 34(4), 365-399.

Grossack, M. M. (1954). Some effects of cooperation and competition upon small group behavior. The Journal of Abnormal and Social Psychology, 49(3), 341-348. doi: 10.1037/h0054490

Guindon, R. (1991). Users request help from advisory systems with simple and restricted language: Effects of real-time constraints and limited shared context. Human-Computer Interaction, 6(1), 47-75.

Guindon, R., Shuldberg, K., & Conner, J. (1987). Grammatical and ungrammatical structures in user-adviser dialogues: Evidence for sufficiency of restricted languages in natural language interfaces to advisory systems. Paper presented at the 25th Annual Meeting of the ACL, Stanford, CA.

Gustafson, J., Larsson, A., Carlson, R., & Hellman, K. (1997). How do system questions influence lexical choices in user answers? Paper presented at the Eurospeech 1997, Rhodes, Greece.

Hartsuiker, R. J., Pickering, M. J., & Veltkamp, E. (2004). Is syntax separate or shared between languages? Psychological Science, 15(6).

Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89(1), 47-62. doi: 10.1037/0033-2909.89.1.47

Kelley, H. H., & Thibaut, J. W. (1969). Group problem solving. In G. Lindsey & E. Aronson (Eds.), Handbook of Social Psychology (Vol. 4). Reading, Mass.: Addison-Wesley.

Labov, W. (1966). The social stratification of English in New York City. Washington, DC: Center for Applied Linguistics.

Labov, W. (2001). The anatomy of style-shifting. In P. Eckert & J. R. Rickford (Eds.), Style and sociolinguistic variation (pp. 85-108). New York: Cambridge University Press.

Lakin, J. L., & Chartrand, T. L. (2003). Using nonconscious behavioral mimicry to create affiliation and rapport. Psychological Science, 14(4), 334-339.

Lakin, J. L., Jefferis, V. E., Cheng, C. M., & Chartrand, T. L. (2003). The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. Journal of Nonverbal Behavior, 27(3), 145-162.

Leiser, R. G. (1989). Exploiting convergence to improve natural language understanding. Interacting with Computers, 1(3), 284-298.


Levelt, W., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106.

Lewin, K. (1935). A Dynamic Theory of Personality. New York, NY: McGraw-Hill.

Loebell, H., & Bock, J. K. (2003). Structural priming across languages. Linguistics, 41(5), 791-824.

McLean, J. F., Pickering, M. J., & Branigan, H. P. (2004). Lexical repetition and syntactic priming in dialogue. In J. C. Trueswell & M. K. Tanenhaus (Eds.), Processing World-Situated Language: Bridging the Language as Product and Language as Action Traditions. Cambridge, MA: MIT Press.

Moore, E. (2004). Sociolinguistic style: A multidimensional resource for shared identity creation. Canadian Journal of Linguistics, 49(3/4), 375-396.

Mosuo. (n.d.). In Wikipedia, The Free Encyclopedia. Retrieved 2005 from http://en.wikipedia.org/wiki/Mosuo.

Mulvey, P. W., & Ribbens, B. A. (1999). The effects of intergroup competition and assigned group goals on group efficacy and group effectiveness. Small Group Research, 30(6), 651-677.

Nass, C., & Brave, S. B. (2005). Wired for speech: How voice activates and enhances human-computer relationship. Cambridge, MA: MIT Press.

Nass, C., Hu, J., Pickering, M. J., Pearson, J., & Branigan, H. P. (2005). Linguistic alignment in HCI vs. CMC: Do users mimic conversation partners' syntax and word choices?

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

Natale, M. (1975). Convergence of mean vocal intensity in dyadic communication as a function of social desirability. Journal of Personality & Social Psychology, 32(5), 790-804.

Niederhoffer, K. G., & Pennebaker, J. W. (2002). Linguistic style matching in social interaction. Journal of Language and Social Psychology, 21(4), 337-360.

Nilsenová, M., & Nolting, P. (2010). Linguistic adaptation in semi-natural dialogues: Age comparison. In P. Sojka, A. Horák, I. Kopecek & K. Pala (Eds.), Text, Speech and Dialogue (Vol. 6231, pp. 531-538): Springer Berlin / Heidelberg.

Okebukola, P. A. (1985). The relative effectiveness of cooperative and competitive interaction techniques in strengthening students' performance in science classes. Science Education, 69(4), 501-509.

Oviatt, S. L., & Adams, B. (2000). Designing and evaluating conversational interfaces with animated characters. In J. Cassell, J. Sullivan, S. Prevost & E. Churchill (Eds.), Embodied conversational agents (pp. 319-343). Cambridge, MA: MIT Press.

Oviatt, S. L., Bernard, J., & Levow, G. (1999). Linguistic adaptation during error resolution with spoken and multimodal systems. Language and Speech, 41(3-4), 415-438.

Oviatt, S. L., Darves, C., & Coulston, R. (2004). Toward adaptive conversational interfaces: Modeling speech convergence with animated personas. ACM Transactions on Computer-Human Interaction (TOCHI), 11(3), 300-328.

Paasche-Orlow, M. K., Taylor, H. A., & Brancati, F. L. (2003). Readability standards for information-consent forms as compared with actual readability. The New England Journal of Medicine, 348(8), 721-728.

Pearson, J., Hu, J., Branigan, H. P., Pickering, M. J., & Nass, C. (2006a). Adaptive language behavior in HCI: How expectations and beliefs about a system affect users' word choice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1177-1180). Montréal, Québec, Canada: ACM.


Pearson, J., Hu, J., Branigan, H. P., Pickering, M. J., & Nass, C. (2006b). Adaptive language behavior in HCI: How expectations and beliefs about a system affect users' word choice. Paper presented at the CHI 2006, Montreal, Quebec, Canada.

Pearson, J., Pickering, M. J., Branigan, H. P., McLean, J. F., Nass, C., & Hu, J. (2004). The influence of beliefs about an interlocutor on lexical and syntactic alignment: Evidence from human-computer dialogues. Paper presented at the 10th annual Architectures and Mechanisms of Language Processing Conference.

Pickering, M. J., & Garrod, S. C. (2004). Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27, 169-226.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press/CSLI.

Richards, M. A., & Underwood, K. M. (1984). How should people and computers speak to one another? Paper presented at the Interact '84: First IFIP Conference on Human-Computer Interaction, London.

Schenkein, J. (1980). A taxonomy for repeating action sequences in natural conversation. In B. Butterworth (Ed.), Language Production (Vol. 1, pp. 21-47). London: Academic Press.

Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21, 211-232.

Scissors, L. E., Gill, A. J., & Gergle, D. (2008). Linguistic mimicry and trust in text-based CMC. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (pp. 277-280). San Diego, CA, USA: ACM.

Shechtman, N., & Horowitz, L. M. (2003). Media inequality in conversation: How people behave differently when interacting with computers and people. Paper presented at the Conference on Human Factors in Computing Systems, Fort Lauderdale, FL, USA.

Smith, M., & Wheeldon, L. (2001). Syntactic priming in spoken sentence production - An online study. Cognition, 78(2), 123-164.

Stanne, M. B., Johnson, D. W., & Johnson, R. T. (1999). Does competition enhance or inhibit motor performance: A meta-analysis. Psychological Bulletin, 125(1), 133-154. doi: 10.1037/0033-2909.125.1.133

Staum Casasanto, L., Jasmin, K., & Casasanto, D. (2010). Virtually accommodating: Speech rate accommodation to a virtual interlocutor. Paper presented at the The 32nd Annual Conference of the Cognitive Science Society, Austin, TX.

Stockmeyer, N. O. (2009). Using Microsoft Word's readability program. Michigan Bar Journal, 88(1), 46-47.

Street, R. L. (1984). Speech convergence and speech evaluation in fact-finding interviews. Human Communication Research, 11(2), 139-169.

Street, R. L., & Cappella, J. (1989). Social and linguistic factors influencing adaptation in children's speech. Journal of Psycholinguistic Research, 18(5), 497-519.

Street, R. L., & Giles, H. (1982). Speech accommodation theory: A social cognitive approach to language and speech behavior. In M. Roloff & C. R. Berger (Eds.), Social cognition and communication (pp. 193-226). Beverly Hills, CA: Sage.

Tannen, D. (1984). Conversational style: Analyzing talk among friends. Norwood, NJ: Ablex.

Tannen, D. (1989). Talking voices: Repetition, dialogue and imagery in conversational discourse. Cambridge, England: Cambridge University Press.

Tauer, J. M., & Harackiewicz, J. M. (2004). The effects of cooperation and competition on intrinsic motivation and performance. Journal of Personality and Social Psychology, 86(6), 849-861.

Taylor, P. J., & Thomas, S. (2008). Linguistic style matching and negotiation outcome. Negotiation and Conflict Management Research, 1(3), 263-281.


van Baaren, R. B., Holland, R. W., Kawakami, K., & Knippenberg, A. V. (2004). Mimicry and prosocial behavior. Psychological Science, 15(1), 71-74.

Vonier, H. (n.d.). Land of virtue.

Weiner, E. J., & Labov, W. (1983). Constraints on the agentless passive. Journal of Linguistics, 19, 29-58.

Zoltan-Ford, E. (1991). How to get people to say and type what computers can understand. International Journal of Man-Machine Studies, 34, 527-547.