

Debugging Event-Driven Programming

James Davis
Virginia Tech
Blacksburg, VA, USA
[email protected]

ABSTRACT
The importance of the event-driven programming (EDP) paradigm is on the rise. While EDP has been common in client-side applications since around 2000 (e.g. GUI development, JavaScript-driven web pages), it has only recently entered the mainstream in the settings of mobile computing (Android, 2008 [15]) and server-side programming (Node.js, 2009 [18]). The EDP paradigm is also a natural way to structure applications written for the burgeoning Internet of Things. However, little effort has been invested in understanding the processes programmers use to develop and debug EDP applications or in developing effective educational approaches for EDP. This research proposes to “debug event-driven programming,” exploring the mental processes EDP developers use to grok their code and considering contextualized educational approaches based on the students’ backgrounds. As a result, educators and systems researchers will be able to make informed decisions about the design of curricula, languages, development environments, and debugging aids.

ACM Classification Keywords
K.3.2. Computer and Information Science Education: Computer science education; D.2.5. Testing and Debugging: Testing tools (e.g., data generators, coverage testing)

Author Keywords
IoT; Event-driven programming; Education; Debugging; Tools

INTRODUCTION
The Internet of Things (IoT) holds tremendous promise for human society. Thanks to low-cost sensors and ubiquitous internet connectivity, IoT devices can make smart decisions based on real-time sensor input and advised by intelligent services in the cloud [35]. Nascent IoT systems are already transforming cities like Barcelona and Singapore, and more advanced IoT systems could revolutionize areas as diverse as homes, transportation, and health care [6].

Contact [email protected] for permission to make copies of all or part of this work.CS5724’16, Fall 2016, Blacksburg, VA.Copyright © 2016 James Davis

Event-driven programming (EDP) is a natural way to structure IoT applications [4]. However, little research has been invested in EDP pedagogy or process. If we don’t understand how to teach the IoT in general and EDP in particular, or how developers conceptualize EDP, the IoT vision may not be realized to its fullest potential. Techniques from Human-Computer Interaction can inform these questions facing EDP.

A brief introduction to event-driven programming
Many universities dedicate much of their computer science curricula to procedural (imperative) programming, a sequential or multi-threaded programming paradigm in which programs follow a procedure (a sequence of steps, a “recipe”) in a relatively linear fashion until the completion of their algorithm [60]. The EDP paradigm [26] differs significantly, being non-sequential (asynchronous) and relational (webs of event handlers). I anticipate that these differences make EDP different to teach, learn, and practice compared with procedural programming.

The EDP paradigm is coming into its own. Though historically EDP has played second fiddle to procedural programming, the explosion of the event-driven Node.js server-side JavaScript framework [18] has demonstrated its relevance to applications running on the modern Internet. In an EDP system, the system defines the set of possible events (e.g. “mouse click” or “incoming network traffic on port 1234”), and the developer supplies a set of “event handlers” (also known as “callbacks”) to be executed when particular events occur [60, 24]. During execution in an EDP application, events are generated and then routed to the appropriate handlers.

EDP has been used at the level of hardware interrupts since the early days of computing, but to the best of our knowledge it first saw high-level use in the Mesa programming environment. Mesa was an early GUI system, and application developers could register event handlers for events like the resizing of a window [63]. EDP still holds sway in GUI environments, for example in the obsolete Java/AWT GUI framework [69] and the now popular Java/Swing and JavaFX GUI frameworks [20, 59]. EDP was later applied in early web servers, as researchers realized that it was particularly well suited to I/O-bound work like that of an FTP server. In this setting, network traffic is translated into client requests, which are “emitted” as events to be handled. For an FTP server, client requests are for files stored in the server’s file system, so the handler would submit request(s) to the file system, registering yet another event handler to be executed once the file system completed the request. Until the file system finished its request, the server could process other requests in a similar fashion. In this way, a single-threaded event server can handle multiple clients at once, with minimal overhead and high performance under certain workloads [52].

WHAT WE KNOW ABOUT PROGRAMMERS
There is a rich literature on programmers: the kinds of bugs they produce, how they debug, and the differences between novice and expert programmers. Much attention has been paid to the kinds of bugs produced by novices (e.g. [56]), as these bug classifications can be used as a feedback mechanism to refine the content presented in a course. Other work has gone into debugging techniques, examining debugging both as a specific form of troubleshooting [40] and as a skill to be taught in its own right [5].

Expert programmers
Research into how programmers debug began in the 1970s, beginning a long tradition of work on expert programmers. Gould and Drongowski initiated the field, assessing expert programmers in industry [28, 29]. Their employer, IBM, was understandably interested in understanding how its professional programmers did their work. They “seeded” one-page FORTRAN programs with errors and studied 30 and 10 experts as they debugged them. Using a combination of notes and think-aloud, they found that certain types of bugs were harder to identify than others.

Ten years later, Vessey compared the debugging processes of “novice” and “expert” professional programmers, using multi-page COBOL programs seeded with errors. Vessey found that experts outperform novices, noting that experts seemed to be better at “chunking” a program: experts could divide a program into its related components more readily than novices, allowing them to focus on the chunks relevant to the bugs [65]. We note that these “novices” were professional programmers, not to be confused with the true novices studied in the university setting; that research is discussed below.

Eisenstadt provided a qualitative description of debugging in industry [22], collecting “debugging war stories” from 78 professionals. He reported commonalities in the stories: many of these experts employed data-gathering approaches (e.g. printf) and hand simulation to better understand the program and the cause of the bug, and half of the bugs were cases in which the professional tools (e.g. debuggers) available to the developers were useless due to significant spatial or temporal gaps between a bug’s root cause and its symptoms.

Inspired by Eisenstadt’s discussion of the importance of data gathering, and by the use of information foraging in earlier work by Russell et al. [58], Lawrance et al. studied the way experts interacted with a large and unfamiliar software project from an information foraging theory perspective [42]; they found that programmer behavior could be modeled using information foraging without needing a cognitive model. This model was also confirmed in a study of end-user programmers as they evaluated a large, unfamiliar spreadsheet [30].

Novice programmers
Research on professional programming practice has been paralleled by research on the practices of true novice programmers: students in their first or second computer science course (CS1 or CS2). Research in this area studies the ways novice programmers interact with source code and programming aids while programming and debugging, typically relying on deep investigations of a small group of students. Though the programming languages have changed from COBOL and BASIC to Java, the findings of the earlier generations of researchers have been replicated by more recent work.

The study of the bugs created by novice programmers was spearheaded by Spohrer and Soloway, who argued that an empirical understanding of novice bugs would lead to improved courses. They structured their paper around two pieces of folk wisdom: that most errors are due to a small number of root causes, and that most errors are due to misunderstandings of the programming language. Their data agree that a few root causes explain most errors, but, basing their work in GOMS philosophy [38], they argue that programmer errors are due to the programmer’s incorrect goals and methods, unrelated to language misunderstandings [61].

The oft-cited work of Katz and Anderson discussed four experiments with between 10 and 40 participants enrolled in a CS2 course. The most common bug-location strategies these students used were “simple recognition”, “forward-looking” approaches like hand simulation, and “backward-looking” approaches like causal reasoning (reasoning backward from output). Of particular interest were the findings of their third experiment, which showed that students take different bug-location strategies depending on whether or not they are familiar with the code. Students took forward-looking approaches when working with unfamiliar code, preferring backward-looking approaches for code they understood [40].

Perkins and Martin studied 20 high school students in their second semester of a programming course [55]. By monitoring each student’s performance as they completed a set of asterisk-printing problems, they found that these novice programmers often ran into problems attributed to their “fragile knowledge”. Correct programming relies on the successful integration of a rich set of concepts, and partial, hard-to-access, or misapplied knowledge often tripped up these novices.

Ahmadzadeh et al. innovated a two-pronged approach to studying CS1 students: they assessed both programming practice over the course of a semester and debugging skills in a 2-hour exam. From this rich dataset they showed that good programmers (as measured by course grades) are not always good debuggers! This finding shows that students don’t always learn debugging by themselves by working through homework assignments, and that explicitly teaching debugging might be necessary [5]. Murphy et al. agreed with this finding in their study of 21 students enrolled in CS2 across 7 universities: their students had difficulties applying debugging techniques consistently and effectively [46], smacking of the fragile state of novice knowledge identified by Perkins and Martin [55].


Though many authors characterize the bugs introduced by their students on a small scale, two particularly significant efforts have been made to identify the issues encountered by students in CS1. The first was a “critical incidents” [27] style approach that surveyed instructors at 50 colleges for the most common kinds of bugs introduced by CS1 students [33], while the second painstakingly recorded each interaction between roughly 400 students and their teaching assistants at the University of Otago over 2 years. In doing so, Robins et al. were able to identify (year 1) and corroborate (year 2) the most common types of bugs introduced by students in CS1 [56]. Hristova et al. classified programming errors as syntax, semantic, or logic, while Robins et al. favored the higher-level labels of background, general, and specific to account for the more open-ended questions asked of the teaching assistants. Hristova et al.’s error categories fall into Robins et al.’s “general” and “specific” categories, while Robins et al.’s approach identified the additional category of “background” issues like problems with the operating system or understanding the assignment. While working in pairs, Hanks’s students replicated the problem distribution described by Robins et al., although each pair experienced fewer issues overall [31].

Other lines of research on programmer behavior
Other avenues of research have pursued predictors of student success in computer science coursework. Researchers have investigated the relationship between programming ability and a variety of psychological measures of programmers. The most popular psychological measures to attempt are field dependence/field independence [67] and the Myers-Briggs personality types (MBTI) [48]. Though we presume the reader’s familiarity with the MBTI, field dependence and field independence describe how readily a person can perceive a complex object as composed of individual parts; field dependence implies difficulty in breaking down the whole into its parts, or in perceiving an object outside of the context in which it is observed. While much of the research looks for relationships between psychological measures and course outcomes ([11, 17, 47, 68]), a more promising approach seems to be to assess smaller aspects of the process: individual assignments [11] or even the individual stages of problem-solving in the context of programming [10]. The effort to correlate programming ability and psychological measures has been largely unsuccessful, as the most consistent finding has been that academic success begets academic success (e.g. [47]). However, some promising research demonstrated that field-independent programmers are better at algorithmic design [10] and at locating logical errors [14].

Researchers have considered still other aspects of programmers, including verbal ability and gender. For example, in a study of novice debugging strategies, Romero et al. found no relationship between verbal ability (as measured by GRE test questions) and debugging skill [57]. Others have investigated gender, e.g. finding in end-user programmers that females introduce more bugs than males while debugging a program seeded with errors [8], and that males tinker more than females during end-user debugging, sometimes to their detriment [9]. From another angle, Woszczynski et al. observed no statistical difference in the performance of males and females as measured by final grade in a CS1 course [68].

WHAT WE DON’T KNOW ABOUT PROGRAMMERS
Despite decades of research into the practice of programming by experts and novices, many questions remain unanswered. We observe two fundamental gaps in the literature: the effect of programming paradigm and the effect of programmer background.

Programming paradigms
First, almost all of the research on novices discussed above has focused on the procedural programming paradigm, in which students give the computer a sequence of steps to accomplish the desired task. This decision is presumably an artifact of history: these are the languages used in the CS1 and CS2 courses taught by the researchers, and therefore are the easiest for the researchers to study. There is no reason to suspect, however, that the conclusions of these researchers about issues encountered by novices can be generalized to other programming paradigms. Indeed, each paradigm is different, with its own strengths and weaknesses [64]. Exceptions to the “procedural club” are LISP [40] (functional paradigm, though paradigm was not emphasized in this study) and Excel (logic paradigm) [30]. The rest use strictly procedural languages or are heavily reliant on the procedural components of languages: Pascal [61], BASIC [55], and Java [56, 5, 57, 31, 33, 46, 45]. What impact might the use of different programming paradigms have on novice performance?

We were able to locate only two papers that explicitly consider the impact of different paradigms in some way. Myers et al. compared cognitive style to ability in a programming paradigm based on non-novice students taking two post-CS2 courses concurrently, with inconclusive results [47], while White and Sivitanides proposed a theory predicting performance in languages falling into different paradigms as a function of cognitive development and brain hemisphere preference, but made no attempt to validate their theory [66]. We propose to enhance the research methodologies in the field of novice programming errors with techniques that give us a deeper sense of the students’ cognitive processes, and to apply these methodologies in the setting of disparate programming paradigms.

Programmer background
Second, the research on programming errors discussed in the sections above assumes the programmers are homogeneous in every respect other than amount of programming experience. Though some list the genders or ages of the participants, none of the analyses distinguish between participants on the basis of these differences. This is a curious state of affairs. We suggest that this lack of investigation is due to the relatively small sample sizes of many of the studies, perhaps due in turn to the man-hours required to follow the verbal protocol methodologies on which they rely. Does a programmer’s background have an effect on their ability or on the kinds of problems they experience?

One aspect of programmer background of particular import in the modern era is cultural background. In addition to mechanical labor like manufacturing, Western companies large and small have outsourced intellectual work like computer programming. We have observed this phenomenon first-hand while working at IBM: American employees were occasionally replaced by Indian or Chinese employees. Understanding the ramifications of changing the cultural background of their programmers could help companies determine which projects are better suited to outsourcing. To understand the issues in this area, we draw on the pioneering work of Nisbett et al. and Nisbett, summarized in [50, 49].

Cultural background
Nisbett et al.’s argument is complex, drawing relationships between societal structure and cognitive processes, and we summarize it here. First, the social organization of ancient Greek and Chinese cultures was markedly different. Greek culture was focused on the individual, while Chinese culture was focused on the group. This difference affected their perspective on how the natural world worked (indeed, the Chinese barely acknowledged the concept of “nature as distinct from human or spiritual entities”). Due to their focus on the individual, the Greeks were prone to systematizing everything, looking for rules and categorizations in nature. In contrast, the Chinese did not construct formal models of nature. Nisbett et al. argue that the social elements of life in these ancient cultures affected their systems of thought, in turn influencing their cognitive processes. Of particular relevance to our proposed research, the ancient Greeks became in general more oriented to individual parts, while the Chinese considered parts inseparably from the whole.

While descriptive of life in ancient Greece and China, are these differences relevant today? Nisbett et al. cite anthropological evidence that the social order of ancient Greece is similar to the social order of modern North America and Europe (“Westerners”), while that of ancient China is reflected in the social order of East Asian countries such as China, Japan, and South Korea (“Easterners”). Nisbett et al. contend that “the original differences in cognitive orientations were due to the social psychological ones”, and cite a broad array of evidence demonstrating these cognitive differences between Westerners and Easterners in the modern age. We are principally concerned with the ancient distinction between the individual (Greece) and the whole (China) and its implications for computer programming. Whether a person is more attuned to individual components or to the whole can be measured using Witkin’s concept of field dependence/independence (FD/FI) [67]: Westerners are more field independent, identifying individual objects as they compose a more complex whole, while Easterners are more field dependent, considering the whole as composed of inseparable parts [50].

We propose to apply Nisbett’s research on the cultural basis of cognitive processes to the area of computer program debugging, which will help educators and employers understand the potential weaknesses of their students and employees based on their cultural background. Despite over 3000 citations of the work of Nisbett et al. [50], it has made few inroads into computer science research. Though Pauleen et al. appealed for this research in 2005 [54], the computer science education community does not seem to have heeded their call. We will.

RESEARCH QUESTIONS
1. RQ1 Do the problems encountered by novice programmers vary across different programming paradigms? Are the problems and their distributions a function of the programming paradigm?

2. RQ2 Does a student’s background affect the kinds of issues they encounter in different programming paradigms?

To answer these research questions, we design our experiments to investigate the following hypotheses:

1. H1 Novice programmers will experience different kinds of issues when being taught using different programming paradigms. For issues common to all paradigms, we do not expect the distribution of issues to vary significantly by paradigm. This hypothesis will be tested by comparing the event-driven paradigm to procedural programming.

2. H2 Culturally Western programmers and culturally Eastern programmers will differ in their ability to perform different kinds of activities in event-driven programming.

Our first hypothesis is open-ended, mirroring the fact that the literature on novice errors is exclusively focused on the procedural programming paradigm. This aspect of the study will be exploratory because the literature contains no hints as to the types of errors expected in other paradigms.

We base the cultural aspect of H2 on the differences between Easterners and Westerners described by Nisbett et al. [50] and Nisbett [49], discussed earlier in more detail. We predict, in line with previous research [14, 10], that Westerners will be more effective at the intra-function aspects of EDP, while Easterners will be more effective at dealing with the inter-function relationships necessary in EDP.

What is a novice programmer?
We will study novice programmers in answering both of these research questions. It is difficult to consistently define a “novice programmer” across different institutions and different curricula. For example, the novices of Katz and Anderson were described as only making apparently-accidental “slips”, while students at other universities are described as struggling with much more basic concepts at a similar point in their academic careers [33, 25, 31]. Many factors can influence student ability, including the skill of the educator, the caliber of the students, and any prior programming experience they might have. A strong experimental design must control for these factors to obtain a sound definition of a novice programmer. For ease of comparison, we will study only students at Virginia Tech who have taken the same set of computer science courses.

RESEARCH QUESTION 1
We wish to understand whether novices experience different issues when working in different programming paradigms. We hypothesize that they will indeed do so, and will establish this by comparing the issues they encounter when being trained in event-driven programming versus procedural programming. This research question relies on concepts from HCI paradigms I and II [32].


Experimental design
Studies characterizing student issues are typically run for the entire duration of a programming course. We will therefore study students taking a programming course emphasizing the procedural paradigm and students taking a programming course emphasizing the event-driven paradigm. The Java programming language can be used for either paradigm, by emphasizing the existing introductory material (procedural, with light training in object-oriented programming) or by orienting the course towards GUI programming. We propose focusing Virginia Tech’s Java-based CS1054 course [2] (CS1) in these two directions in two consecutive semesters, comparing the errors experienced by these novices under each paradigm. The current course focuses on the procedural paradigm, but the same material can be presented in the event-driven paradigm, as demonstrated by previous event-driven CS1 courses [13, 16]. As a programming-intensive course, CS1054 is more suitable for studying novice errors than other non-major courses like CS1014 [1].

Data collection protocols
In this section we discuss the methodological techniques used by researchers in this area. Though other techniques are mentioned in the literature, the primary approaches whose data are cited during the discussion of results are verbal protocols and on-line protocols. After discussing the use of these protocols, we propose to combine them in a novel way.

Verbal protocols
Researchers frequently use verbal protocols [23] to get inside programmers’ heads and understand their cognitive processes. Ericsson and Simon observe that verbal protocols won’t significantly interfere with participants’ cognitive processes provided that the participant encodes the information verbally; if the information is encoded in some other form (e.g. visual data, smells), the participant will need to devote mental resources to vocalize that information. Though some researchers uncritically use verbal protocols for tasks involving visual representations of data [57], the vast majority of uses seem in keeping with Ericsson and Simon’s guidelines.

On-line protocols
A promising, more data-oriented branch of research makes use of on-line protocols, introduced by Spohrer [61]. On-line protocols require the student to use the researcher’s development environment, tracking each student’s every move. Researchers typically capture coding snapshots of each programming assignment each time the student compiles their code, and researchers subsequently categorize the syntax errors made by novices [5, 36] or identify and explain semantic and logical errors based on psychological analysis of the source code [61]. On-line data can of course be used in tandem with qualitative data [19]. On-line protocols, also known as “educational data mining”, have increased in popularity in recent years thanks to a shift towards online programming environments that facilitate fine-grained data collection methods [34].

Our proposed protocols
On-line protocols. A shortcoming of on-line protocols is that the data do not provide explanatory power. Researchers can determine whether code contains syntax errors or whether it fails a particular test case, but they cannot understand the programmer's intent without pseudo-scientific psychological analysis; Spohrer and Soloway acknowledged this concern, writing that "the[ir] plausible accounts [of the programmer's thought process]...must be subjected to further scientific inquiry" [61]. As far as we are aware, no subsequent work has identified a good quantitative scheme for determining the programmer's thought process. We propose a two-pronged approach with a novel methodological twist that promises to give deeper insight into the programmer's thought process.

We agree that on-line protocols promise the best source of scalable quantitative data about a programmer's thought process. However, a novice's source code alone – often lacking comments or good variable names – leaves much to be desired as a window into their mind. To gain this missing insight into a student's thought process, we propose modifying the Java grammar to require a comment after every statement and preceding every block of code. This novel twist on on-line protocols will give us a sense of why each line of code exists.
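Pending the actual grammar modification, the requirement can be approximated with a lint-style pre-compilation pass. The sketch below is a heuristic of our own design (class and method names are ours); a real implementation would change the parser itself rather than pattern-match on lines:

```java
import java.util.ArrayList;
import java.util.List;

// Heuristic stand-in for the grammar change: flag any statement line
// (one ending in ';' or '{') that lacks a non-empty trailing // comment.
public class CommentLint {
    public static List<Integer> uncommentedLines(String source) {
        List<Integer> offenders = new ArrayList<>();
        String[] lines = source.split("\n");
        for (int i = 0; i < lines.length; i++) {
            String line = lines[i].trim();
            int slash = line.indexOf("//");
            String code = slash >= 0 ? line.substring(0, slash).trim() : line;
            boolean isStatement = code.endsWith(";") || code.endsWith("{");
            boolean hasComment = slash >= 0
                && !line.substring(slash + 2).trim().isEmpty(); // rejects a bare "//"
            if (isStatement && !hasComment) {
                offenders.add(i + 1); // report 1-indexed line numbers
            }
        }
        return offenders;
    }
}
```

Note that this variant also rejects empty comments, which partially addresses the gaming concern we discuss under Threats; it cannot, of course, detect gibberish.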

Furthermore, we can combine this scheme with our own application test suite based on the specifications of each programming assignment. We agree with prior work [61, 36, 5] that compilation time is a good level of granularity. By filtering for programs that are syntactically correct (those that pass compilation) [61], we can determine the tests failed by each "valid" program, and use recent advances in program-level root cause analysis (e.g. [39]) to pinpoint the line(s) of code responsible for each test failure. Because our modified grammar rejects any program lacking a comment on every line, every program that compiles is guaranteed to be highly commented. We can then analyze the comment on each line responsible for the test failure to understand the true root cause of the failure. We know that "the [root cause of the] error is between the chair and the computer," but with the programmer's comments in hand we can understand the thought process that led to the failure.
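The cited root cause analysis [39] targets in-production systems; at assignment scale, a simpler spectrum-based (Tarantula-style) score over per-test line coverage could serve as a stand-in for pinpointing suspect lines, whose comments we would then read. The coverage representation below is our own assumption:

```java
import java.util.*;

// Tarantula-style suspiciousness: lines covered mostly by failing tests score high.
public class SuspectLines {
    // covered.get(t) = set of line numbers executed by test t;
    // passed.get(t)  = that test's verdict.
    public static Map<Integer, Double> suspiciousness(List<Set<Integer>> covered,
                                                      List<Boolean> passed) {
        int totalFail = 0, totalPass = 0;
        for (boolean p : passed) { if (p) totalPass++; else totalFail++; }
        Set<Integer> allLines = new TreeSet<>();
        covered.forEach(allLines::addAll);
        Map<Integer, Double> score = new HashMap<>();
        for (int line : allLines) {
            int fail = 0, pass = 0;
            for (int t = 0; t < covered.size(); t++) {
                if (covered.get(t).contains(line)) {
                    if (passed.get(t)) pass++; else fail++;
                }
            }
            double failRate = totalFail == 0 ? 0 : (double) fail / totalFail;
            double passRate = totalPass == 0 ? 0 : (double) pass / totalPass;
            // Lines executed only by failing tests approach a score of 1.0.
            score.put(line, failRate / (failRate + passRate + 1e-9));
        }
        return score;
    }
}
```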

Virginia Tech already employs an online programming environment in the form of the Web-Computer Aided Testing (Web-CAT) environment, used e.g. in CS1114 (CS1 for majors) [3]. Since students do their programming within this environment, we have the opportunity to track their behavior at granularities ranging from keystrokes to assignment submissions. We will modify the Java compiler in the Web-CAT environment to reject programs without comments as discussed above, and to record each compilation attempt for subsequent analysis.

Example 1: Procedural programming. Figures 1 and 2 illustrate the potential explanatory power of our comment-every-line on-line protocol. Figure 1 is pseudocode for a function that doubles the value of each number in an array, returning an array of doubled numbers. It contains an error: on line 4, the programmer computes the doubled value but fails to insert it into the doubleNums array defined on line 2. As a result, on line 5 the programmer always returns an empty array. From the code alone we can only guess at the gap in the programmer's line of reasoning – did they simply forget to add it to doubleNums?

Figure 1. Novice pseudo-code algorithm. Returns an empty doubleNums, but why?

Figure 2. Comments associated with each line of the novice's code. We now see the programmer's thought process, giving us an explanation for the error.

Having pseudocode that meets the comment-every-line requirement, shown in Figure 2, gives us concrete evidence for where our hypothetical novice programmer's thinking failed: they falsely assumed that calculating doubleNum would, as a side effect, add it to the doubleNums array, letting us categorize this error with confidence as an S11 in the notation of Robins et al. [56].
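The figures themselves are not reproduced here, but the described bug can be rendered directly in Java. This is our own reconstruction of the figures' pseudocode, with the comments playing the role of Figure 2:

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleNums {
    // The novice's version as described: the doubled value is computed
    // but never inserted, so the returned list is always empty.
    public static List<Integer> doubleAllBuggy(int[] nums) {
        List<Integer> doubleNums = new ArrayList<>(); // "make a list for the results"
        for (int n : nums) {
            int doubled = 2 * n; // "double the number" -- the novice assumes this
                                 // also stores it in doubleNums (the S11 gap)
        }
        return doubleNums; // always empty
    }

    // The intended behavior: the insertion must be an explicit step.
    public static List<Integer> doubleAll(int[] nums) {
        List<Integer> doubleNums = new ArrayList<>();
        for (int n : nums) {
            doubleNums.add(2 * n); // computing and storing are separate actions
        }
        return doubleNums;
    }
}
```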

Example 2: Event-driven programming. No novice errors specifically associated with the event-driven programming paradigm have been identified. In Figures 3 and 4, we illustrate a novice error inspired by [12]. The assignment is intended to allow a user to draw lines on the screen. The expected behavior is as follows: when the user clicks (OnMouseDown) and releases (OnMouseUp), a line L1 is drawn between the Down and Up points. When the user clicks again, a line L2 should connect the endpoint of the previous line to the beginning of the new line, and the new line should be drawn as already described. However, the hypothetical novice code in Figure 3 will only draw the L1-style lines between mouse clicks, and won't draw any L2-style lines connecting them. From the uncommented code in Figure 3, it is not clear how the novice's thinking led to the error. At a high level, is the problem in comprehending the assignment, or in implementing it? Without comments or a conversation with the student, we cannot know.

Figure 4 illuminates the cause. Rather than an issue in implementing the assignment using EDP concepts – which might have wrongly inspired us to report an example of a new class of bug – the comment on line 1 provides evidence that the programmer simply misunderstood the assignment: they thought they were to draw only L1-style lines between mouse clicks. We do not know whether the programmer would have been able to implement the assignment correctly had they understood it, but the commented on-line protocol has protected us from an incorrect analysis of the programmer.
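The expected behavior can be sketched headlessly in Java. This is our own reconstruction, not the figures' actual code: the Point and Line types, handler names, and drawn list are assumptions (a recent JDK is assumed for records):

```java
import java.util.ArrayList;
import java.util.List;

public class LineDrawer {
    public record Point(int x, int y) {}
    public record Line(Point from, Point to) {}

    private Point downPoint;        // intra-event state: where this drag began
    private Point oldPoint;         // inter-event state: where the last drag ended
    public final List<Line> drawn = new ArrayList<>();

    public void onMouseDown(Point p) {
        if (oldPoint != null) {
            drawn.add(new Line(oldPoint, p)); // the L2-style connector that the
                                              // novice's version never draws
        }
        downPoint = p;
    }

    public void onMouseUp(Point p) {
        drawn.add(new Line(downPoint, p));    // the L1-style line
        oldPoint = p;                         // saved across events -- omitting
                                              // this is the inter-function error
    }
}
```

Simulating two click-and-release gestures produces two L1-style lines plus one L2-style connector; dropping the two oldPoint lines reproduces the novice's disconnected output.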

Verbal protocols. Nearly all of the studies cited earlier report the use of a think-aloud verbal protocol analysis. Our commented on-line protocol should give us a "source code transcript" of each student's thought process, so we do not anticipate the need for a think-aloud verbal protocol. However, we recognize that not all comments are created alike, and feel that verbal reports can provide some illumination. In addition to our commented on-line protocol, we propose the use of a retrospective verbal protocol along the lines of Flanagan's critical incidents [27]. We will ask students for a description of how they fixed one or more of the bugs they introduced while working on each assignment. Administered as part of the assignment submission process, these reports should be fresh enough to provide helpful insights into the minds of novice programmers.

Data analysis
Data from our commented on-line protocol will give us a window into the programmer's thought process. This will let us understand the kinds of errors to which novice programmers are prone, in both the procedural and the event-driven paradigms. By understanding these errors at a cognitive level, we can more carefully restructure the presentation of material to address common misconceptions and mistakes. We can apply transcript coding techniques [23] to identify the most common types of errors. We anticipate that coding these source code transcripts will be far more feasible than attempting a think-aloud protocol, which requires students to program in a laboratory setting and researchers to carry out a painstaking transcription and coding of more meandering and gap-filled verbal transcripts.

Something that has troubled us about much of the research on the novice programming process is the relatively small sample size: studies often use N=10–40 participants drawn from a single university. Given the use of a labor-intensive think-aloud protocol, we find these small sample sizes understandable. However, we are excited by the prospect of the larger sample sizes afforded by the more compact and more readily-understood data offered by the commented on-line protocol we propose.

Data from our retrospective verbal protocol should further inform improvements in the presentation of course material. Interviews like those we describe have been administered in clinical settings (e.g. [28, 29, 25]), but we are not aware of studies using longitudinal per-student interviews [44]. We anticipate that the interview corpus will consist of a set of "Eureka!" moments from students, which can be used to help us understand effective debugging strategies and to give future students a sense of what the debugging process feels like from a qualitative perspective.

Threats
We have identified four threats to this study. The first is practical, while the remainder are threats to validity.

First, the CS department would need to be convinced to allow us to modify the content of one of its courses. We have proposed the use of the CS1054 course [2] because it is "CS1 for non-majors." Modifying it will not affect the existing computer science curriculum for CS majors, so we conjecture that the CS department will be more likely to permit us to change its design. If necessary, we could scale down our method to a 2-week module emphasizing either procedural or event-driven programming, which might be more amenable to the course instructors.


Figure 3. Novice pseudo-code algorithm. Draws disconnected lines. What went wrong in the programmer’s thought process?

Figure 4. Comments associated with each line of the novice’s code. We see the reason for the error: the class declaration does not match the assignment.

Second, students could cheat our modification to the Java grammar by ending statements with empty comments or comments containing gibberish. We believe this tendency would be curbed by two factors. First, the instructor has both a carrot and a stick: he can emphasize the importance of comments as part of good programming practice, and he can punish this behavior by including the quality of comments in his grading rubric. Second, novices are not asked to write particularly long programs, so per-line comments should not be particularly arduous. However, our commented on-line protocol could be tiresome for students in higher-level courses (i.e. those above CS2). We suspect that modifying the Java grammar to require comments only at the block level would still make this protocol useful for higher-level courses.

Third, the instructors might not be able to comprehend the comments added by students: a novice might not be able to articulate why they are doing something. However, recall that Romero et al. reported no significant relationship between programming ability and verbal ability (as measured by GRE questions) [57]. If true, students of high verbal ability should be equally prone to writing code with and without errors, implying that at least some of the error-strewn programs will be authored by students with stronger writing abilities. These same students can be expected to do a good job of keeping their comments in sync with their code. We anticipate, therefore, that at least a subset of the student programs will be comprehensible and suitable for study.

Lastly, it is conceivable that being forced to justify every line of code with a comment will change the distribution of errors experienced by our participants. The distribution of errors in procedural programming has been reported and replicated [56, 31], and we will compare the error distribution in our procedural programming dataset against the distribution reported by these other researchers. If it does not match, we will still be able to produce a useful summary of error distributions, but we will know that the results are not directly comparable to prior work. In the process, however, we will have inadvertently discovered a relatively easy way to help novice programmers write higher-quality code, a finding we suspect many educators would appreciate.

RESEARCH QUESTION 2
In the spirit of the Third Paradigm of HCI [32], we argue that not all programmers are created equal, and that the cultural context of a programmer may be particularly relevant to predicting their ability to debug different kinds of problems. We have predicted, based on the work of Nisbett et al. [50], that Westerners and Easterners will have different experiences while working on various aspects of EDP programming.

We expect the event-driven programming errors identified by the experiment discussed in the previous section to fall into two major categories: intra-function and inter-function. Event-driven programs have a procedural component, because they include a set of functions, each of which must work correctly. We refer to errors in this aspect of EDP programs as intra-function errors. Event-driven programs have a relational component as well: functions interact with one another as listeners for events, and must save state between events using techniques like closures. For example, in problems like that illustrated in Figure 3, a correct implementation requires the programmer to set the oldPoint variable in the OnMouseUp function for use in the OnMouseDown function. This principle is of course not unique to EDP, but in EDP this inter-function reasoning is a fundamental aspect rather than an advanced concept, as it is in the procedural paradigm. We refer to errors in this area as inter-function errors because they require conceptualizing relationships between different invocations of the same or different functions.
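The distinction can be made concrete with a toy listener, entirely our own illustration rather than code from the cited studies:

```java
public class ClickCounter {
    private int clicks = 0; // inter-function state: survives across events

    // Intra-function reasoning: this body can be judged correct in isolation.
    public int doubleIt(int n) {
        return 2 * n;
    }

    // Inter-function reasoning: correctness depends on every prior invocation.
    // Declaring `clicks` as a local variable here -- a plausible novice
    // inter-function error -- would silently reset the count on every event.
    public void onClick() {
        clicks = clicks + 1;
    }

    public int total() {
        return clicks;
    }
}
```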

Nisbett et al. [50] argued that Westerners are stronger at analytic thought and at considering each part of a whole in isolation, while Easterners excel at thinking of the role each piece plays in an indivisible whole. This difference manifests itself in Witkin's concept of field dependence/field independence (FD/FI) [67], with Westerners matching the description of field independence and Easterners that of field dependence.

We therefore refine our second hypothesis further: we predict that Westerners will be superior at designing and debugging intra-function algorithms, while Easterners will excel at designing and debugging inter-function relationships.

Experimental design
Our interest is in assessing the ability of Westerners and Easterners to design and debug particular aspects of problems in EDP programs. Though the techniques of our first experiment will give us a sense of the experience of programmers in general when working in the EDP paradigm, the details of particular programming issues may be difficult to pick out. We therefore propose a clinical experiment in the vein of the studies of debugging methodology described earlier, patterned particularly after [10].

Measuring field dependence/independence
Table 1 shows the independent and dependent variables in this study. Though prior studies on the interaction of FD/FI and computer programming ability used the Group Embedded Figures Test to measure FD/FI [14, 10], performance on this instrument does not appear to be independent of culture. Bagley [7] observes that Easterners register as FI on EFT-style tests, in contrast to their FD scores on other measures of FD/FI. He hypothesizes that this is because literacy in the languages of Eastern countries typically requires the memorization of, and the ability to differentiate between, hundreds or thousands of complex symbols. Nisbett et al. [50] refer to several other instruments suitable for measuring FD/FI; following Ji and her colleagues, we will use Witkin's Rod-and-Frame Test (RFT) to measure FD/FI [37].

Data collection protocols
As is standard in this community, we will administer a 2-hour test to students in the EDP course discussed earlier. In the first 15 minutes, we will carry out an initial assessment of programming background and then measure the participant's FD/FI status with the RFT instrument. We anticipate that students from an Eastern background (relatively common in programming classes) will score more strongly as FD, while students from a Western background will score as FI.

We will then administer a 20-minute warm-up task, asking students to debug two small (one-page) programs, one with an intra-function error and one with an inter-function error. Once the warm-up is concluded, the remainder of the time will be divided into three portions. First, participants will spend 45 minutes completing two design tasks, one calling for intra-function skill and the other requiring inter-function skill. We are not intimately familiar with the abilities of novice programmers or with appropriate expectations for their skills at EDP, so the particular tasks will need to be determined through an iterative pilot study. Second, participants will be asked to complete four debugging tasks in 45 minutes, each task featuring a different one-page program with one error. Half of the errors will be intra-function, and half will be inter-function. To avoid order effects, we will vary the order in which participants attempt the "first" and "second" tasks, as well as the order of the intra-function and inter-function problems within each task. The remaining time will be spent in a 15-minute post-test interview, using the retrospective protocol discussed earlier.
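The counterbalancing could be as simple as rotating participants through the four orderings. This is a sketch of our own, and the ordering labels are invented:

```java
import java.util.List;

public class Counterbalance {
    // The four orderings: (design vs. debugging first) x (intra vs. inter first).
    public static final List<String> ORDERINGS = List.of(
        "design-first/intra-first",
        "design-first/inter-first",
        "debug-first/intra-first",
        "debug-first/inter-first");

    // Rotate through the orderings so each occurs equally often
    // as participants are enrolled.
    public static String orderingFor(int participantId) {
        return ORDERINGS.get(participantId % ORDERINGS.size());
    }
}
```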

Data analysis
We will compare the field dependence/independence scores of the participants to their intra-function and inter-function debugging ability, measured by the speed and accuracy with which they identify the bugs during the six debugging tasks. We anticipate that introducing cultural background, measured by field dependence/field independence, will lead to the first (to our knowledge) successful demonstration of a strong prediction of debugging ability as a function of a psychological measure. The variation in field dependence/field independence across cultures is greater than within a culture [37], so the differences between Easterners and Westerners should be more pronounced than those within the American culture alone.
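The planned comparison amounts to correlating RFT scores with per-task speed and accuracy; a Pearson correlation is the standard starting point, sketched minimally below (the data shapes are our assumption):

```java
public class Correlate {
    // Pearson's r between two equal-length samples, e.g. RFT scores
    // and inter-function debugging accuracy.
    public static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n;
        my /= n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - mx) * (y[i] - my);
            vx  += (x[i] - mx) * (x[i] - mx);
            vy  += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.sqrt(vx * vy);
    }
}
```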

It is our hope, therefore, that this experiment will demonstrate that instructors must consider the cultural context of each of their students. Computer science educators do not appear to have considered this possibility, possibly leading to poor learning outcomes for students from cultural backgrounds different from their instructor's. Though we have deliberately selected the programming paradigm in which these differences seem the most likely to manifest, the notion of intra-function vs. inter-function errors is relevant in more complex programs, particularly in the popular object-oriented paradigm used in many CS programs. Software companies could also use the results of this study to assign employees to culturally appropriate tasks, for example asking Western programmers to work on procedural aspects of projects (e.g. business logic) and Eastern programmers to work on event-driven aspects of projects (e.g. the GUI).

Threats
First, it is conceivable that not enough Easterners would enroll in CS1054 to provide a useful sample size. An alternative course in which to run this study would be a hypothetical graduate-level course that uses concepts from event-driven programming, e.g. a course on the IoT. From personal experience, I know that the graduate population of the "Systems" arm of Virginia Tech's Computer Science department is largely Eastern, and these students would be interested in an IoT course. Since event-driven programming is an unusual paradigm, there should not be prior learning effects on the intra-function and inter-function debugging ability of many of these students. Furthermore, White and Sivitanides warned that not all first-year college students (nor indeed all adults) have attained full formal operational thinking; as a result, using a graduate-level course might actually be preferable for this study [66] if the results are to be generalized to professional practitioners.

Second, it is possible that our predictions about the implications of field dependence and field independence are only relevant to larger programming projects. It is clear that both Westerners and Easterners can be skillful computer programmers, presumably because, regardless of the programming paradigm under consideration, both field dependent and field independent programmers can understand complex software systems. A field dependent programmer would presumably understand a complex software system holistically, seeing the relationships between all of the components indivisibly, while a field independent programmer might understand it as the sum of the interactions between its constituent parts. The effect of field dependence or independence might not manifest at the scale of a one-page program, and a study like that of Lawrance et al. [42], observing professional programmers as they examine a large open-source code base, might be needed to truly observe any differences between individuals with these opposing traits. However, the relative ease of studying students in the university setting compared to professional programmers means that a "students first" approach is desirable.

Variable | Type | Description | Prediction
Field dependence or independence | Independent | Measures how effectively the participant can perceive items independent of context | Westerners - FI; Easterners - FD
Intra-function debugging ability | Dependent | Measures how well the participant can debug intra-function errors | Positive correlation to FI
Inter-function debugging ability | Dependent | Measures how well the participant can debug inter-function errors | Positive correlation to FD

Table 1. Independent and dependent variables in Experiment 2

CONCLUSION
EDP emerged as an important paradigm in the CS curriculum in the 1990s and early 2000s as GUIs became commonplace. Early explorations of EDP coursework focused on GUIs [62, 13, 16, 12]. While Stein argued that EDP should be included as one of several programming paradigms [62], others [13, 16] argued that it could be used as the focal paradigm of a CS1 course. There was some debate as to whether EDP was suitable for "CS1"-style classes, or whether the concepts involved were too difficult. Bruce et al. argued that EDP could be used in CS1 [13], and indeed that doing so simplified the teaching of several key concepts [12]. However, research on EDP pedagogy has largely fallen silent: EDP did not "catch on" as a core concept in CS education, remaining a specialized topic for advanced students.

Ten years later, the IoT is increasingly imminent, and educators have begun to explore IoT-themed classes. These courses are intended for lower-level [41] or higher-level undergraduates [21, 51] or graduate students [43, 51]. Interestingly, the literature on undergraduate IoT coursework does not discuss pedagogical issues related to EDP, suggesting a potential barrier to student success: inviting students trained in one paradigm (e.g. functional or object-oriented) into a new programming paradigm (EDP) raises the specter of negative knowledge transfer [53]. Students may be stuck in one mode of thinking, unable to adapt to the new concepts of another paradigm, and may mis-apply concepts from the first programming paradigm in the second [55].

We strongly feel that understanding the issues related to the EDP paradigm will inform computer science educators. This paradigm may be one of the keys to bringing the promise of the Internet of Things to fruition, and as a result understanding how to teach it is increasingly imperative. We believe the proposed research will inform educators as to the difficulties novices encounter in EDP, and that more broadly it will remind educators of the effect that cultural differences may have on their students' abilities. These results will be useful not only for training students in the IoT, but also in other EDP settings like mobile development and GUI programming.

ACKNOWLEDGMENTS
We thank Professor Steve Harrison for his guidance on this project. We thank our advisor, Dr. Dongyoon Lee, for his tolerance of our interest in Human-Computer Interaction and Computer Science Education.

REFERENCES
1. 2016a. CS1014: Introduction to Computational Thinking. (2016). https://sites.google.com/a/vt.edu/computational-thinking/home

2. 2016b. CS1054: Introduction to Programming in Java. (2016). http://www.cs.vt.edu/undergraduate/courses/CS1054

3. 2016c. CS1114: Introduction to Software Design. (2016). http://www.cs.vt.edu/undergraduate/courses/CS1114

4. Asanka Abeysinghe. 2016. Event-Driven Architecture: The Path to Increased Agility and High Expandability. (2016). http://wso2.com/whitepapers/event-driven-architecture-the-path-to-increased-agility-and-high-expandability/

5. Marzieh Ahmadzadeh, Dave Elliman, and Colin Higgins. 2005. An Analysis of Patterns of Debugging Among Novice Computer Science Students. In ITiCSE. 84–88. DOI:http://dx.doi.org/10.1145/1151954.1067472

6. Rajeev Alur, Emery Berger, Ann W Drobnis, Limor Fix, Kevin Fu, Gregory D Hager, Daniel Lopresti, Klara Nahrstedt, Elizabeth Mynatt, Shwetak Patel, Jennifer Rexford, John A Stankovic, and Benjamin Zorn. 2015. Systems Computing Challenges in the Internet of Things. arXiv preprint arXiv:1604.02980 (June 2015), 1–15.

7. Christopher Bagley. 1995. Field Independence in Children in Group-Oriented Cultures: Comparisons from China, Japan, and North America. The Journal of Social Psychology 135, 4 (1995), 523–525. DOI:http://dx.doi.org/10.1080/00224545.1995.9712221

8. Laura Beckwith. 2005. Gender HCI issues in problem-solving software. CHI '05 Extended Abstracts on Human Factors in Computing Systems (2005), 1104. DOI:http://dx.doi.org/10.1145/1056808.1056833

9. Laura Beckwith, Cory Kissinger, Margaret Burnett, Susan Wiedenbeck, Joseph Lawrance, Alan Blackwell, and Curtis Cook. 2006. Tinkering and gender in end-user programmers' debugging. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2006), 231. DOI:http://dx.doi.org/10.1145/1124772.1124808


10. Catherine Bishop-Clark. 1995. Cognitive style and its effect on the stages of programming. Journal of Research on Computing in Education 27, 4 (1995), 373.

11. Catherine Bishop-Clark and Daniel D Wheeler. 1994. The Myers-Briggs personality type and its relationship to computer programming. Journal of Research on Computing in Education 26, 3 (1994), 358. DOI:http://dx.doi.org/10.1080/08886504.1994.10782096

12. Kim B. Bruce, Andrea Danyluk, and Thomas Murtagh. 2004. Event-driven programming facilitates learning standard programming concepts. Companion to the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA '04) (2004), 96–100. DOI:http://dx.doi.org/10.1145/1028664.1028704

13. Kim B. Bruce, Andrea P. Danyluk, and Thomas P. Murtagh. 2001. Event-driven programming is simple enough for CS1. Proceedings of the 6th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE '01) (2001), 1–4. DOI:http://dx.doi.org/10.1145/377435.377440

14. Thomas P. Cavaiani. 1989. Cognitive styles and diagnostic skills of student programmers. Journal of Research on Computing in Education 21, 4 (1989), 411–420.

15. Jacqui Cheng. 2007. It's official: Google announces open-source mobile phone OS, Android. Ars Technica (2007). http://arstechnica.com/gadgets/2007/11/its-official-google-announces-open-source-mobile-phone-os-android/

16. Henrik Baerbak Christensen and Michael E Caspersen. 2002. Frameworks in CS1 - a Different Way of Introducing Event-driven Programming. In ITiCSE.

17. Larry S Corman. 1986. Cognitive style, personality type, and learning ability as factors in predicting the success of the beginning programming student. ACM SIGCSE Bulletin 18, 4 (1986), 80–89. DOI:http://dx.doi.org/10.1145/15003.15020

18. Ryan Dahl. 2009. Video: Node.js by Ryan Dahl. http://www.jsconf.eu/2009/video

19. Brian Danielak. 2014. How electrical engineering students design computer programs. Ph.D. Dissertation. University of Maryland. http://www.physics.umd.edu/rgroups/ripe/perg/dissertations/Danielak/DissertationDanielak.pdf

20. Stephen C. Drye and William C. Wake. 1999. Java Swing Reference. Manning Publications Company.

21. Shannon Duvall and Joel Hollingsworth. 2016. Creating a course on the internet of things for undergraduate computer science majors. Journal of Computing Sciences in Colleges (2016), 97–103.

22. M Eisenstadt. 1993. Tales of Debugging from the Front Lines. Empirical Studies of Programmers, 5th Workshop (1993), 86–112.

23. K. Anders Ericsson and Herbert A. Simon. 1980. Verbal Reports as Data. Psychological Review 87, 3 (1980), 215–251. DOI:http://dx.doi.org/10.1037/0033-295X.87.3.215

24. Steve Ferg. 2006. Event-driven programming: introduction, tutorial, history. (2006). http://parallels.nsu.ru/

25. Sue Fitzgerald, Gary Lewandowski, Renée McCauley, Laurie Murphy, Beth Simon, Lynda Thomas, and Carol Zander. 2008. Debugging: finding, fixing and flailing, a multi-institutional study of novice debuggers. Computer Science Education 18, 2 (2008), 93–116. DOI:http://dx.doi.org/10.1080/08993400802114508

26. David Flanagan. 2004. JavaScript: The Definitive Guide, 4th edition. (2004).

27. J. C. Flanagan. 1954. The critical incident technique. Psychological Bulletin 51, 4 (1954), 327–358. DOI:http://dx.doi.org/10.1037/h0061470

28. JD Gould and Paul Drongowski. 1974. An exploratory study of computer program debugging. Human Factors 16, 3 (1974), 258–277. http://hfs.sagepub.com/content/16/3/258.short

29. John D. Gould. 1975. Some psychological evidence on how people debug computer programs. International Journal of Man-Machine Studies 7, 2 (1975), 151–182. DOI:http://dx.doi.org/10.1016/S0020-7373(75)80005-8

30. Valentina Grigoreanu, Margaret Burnett, Susan Wiedenbeck, Jill Cao, Kyle Rector, and Irwin Kwan. 2012. End-user debugging strategies. ACM Transactions on Computer-Human Interaction 19, 1 (2012), 1–28. DOI:http://dx.doi.org/10.1145/2147783.2147788

31. Brian Hanks. 2008. Problems encountered by novice pair programmers. Journal on Educational Resources in Computing 7, 4 (2008), 1–13. DOI:http://dx.doi.org/10.1145/1316450.1316452

32. Steve Harrison, Deborah Tatar, and Phoebe Sengers. 2007. The three paradigms of HCI. Alt.Chi Session at CHI 2007 (2007), 1–18.

33. Maria Hristova, Ananya Misra, Megan Rutter, and Rebecca Mercuri. 2003. Identifying and Correcting Java Programming Errors for Introductory Computer Science Students. In SIGCSE.

34. Petri Ihantola, Matthew Butler, Stephen H Edwards, Ari Korhonen, Andrew Petersen, Kelly Rivers, Miguel Ángel Rubio, Judy Sheard, Jaime Spacco, Claudia Szabo, and Daniel Toll. 2015. Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies. ITiCSE WGR (2015), 41–63. DOI:http://dx.doi.org/10.1145/2858796.2858798

35. International Telecommunication Union. 2012. Overview of the Internet of Things. Technical Report. 22 pages.


36. Matthew C Jadud. 2006. Methods and tools for exploringnovice compilation behaviour. In Proceedings of theThird International Workshop on Computing EducationResearch. 73–84. DOI:http://dx.doi.org/10.1145/1151588.1151600

37. L. J. Ji, Kaiping Peng, and Richard E. Nisbett. 2000. Culture, control, and perception of relationships in the environment. Journal of Personality and Social Psychology 78, 5 (2000), 943–955. DOI:http://dx.doi.org/10.1037/0022-3514.78.5.943

38. Bonnie E. John, Paul S. Rosenbloom, and Allen Newell. 1985. A theory of stimulus-response compatibility applied to human-computer interaction. ACM SIGCHI Bulletin 16, 4 (1985), 213–219. DOI:http://dx.doi.org/10.1145/1165385.317495

39. Baris Kasikci, Benjamin Schubert, Cristiano Pereira, Gilles Pokam, and George Candea. 2015. Failure Sketching: A technique for automated root cause diagnosis of in-production failures. Symposium on Operating Systems Principles (SOSP) (2015), 344–360. DOI:http://dx.doi.org/10.1145/2815400.2815412

40. Irvin R. Katz and John R. Anderson. 1988. Debugging: An Analysis of Bug-Location Strategies. In Human-Computer Interaction, Vol. 3. 351–399.

41. Gerd Kortuem, Arosha K. Bandara, Neil Smith, Mike Richards, and Marian Petre. 2013. Educating the internet-of-things generation. Computer 46, 2 (2013), 53–61. DOI:http://dx.doi.org/10.1109/MC.2012.390

42. Joseph Lawrance, Christopher Bogart, Margaret Burnett, Rachel Bellamy, Kyle Rector, and Scott D. Fleming. 2013. How programmers debug, revisited: An information foraging theory perspective. IEEE Transactions on Software Engineering 39, 2 (2013), 197–215. DOI:http://dx.doi.org/10.1109/TSE.2010.111

43. Hanna Mäenpää, Sasu Tarkoma, Samu Varjonen, and Arto Vihavainen. 2015. Blending Problem- and Project-Based Learning in Internet of Things Education: Case Greenhouse Maintenance. SIGCSE (2015). DOI:http://dx.doi.org/10.1145/2676723.2677262

44. Renée McCauley, Sue Fitzgerald, Gary Lewandowski, Laurie Murphy, Beth Simon, Lynda Thomas, and Carol Zander. 2008. Debugging: a review of the literature from an educational perspective. Computer Science Education 18, 2 (2008), 67–92. DOI:http://dx.doi.org/10.1080/08993400802114581

45. Laurie Murphy, Sue Fitzgerald, Brian Hanks, and Renee McCauley. 2010. Pair Debugging: A Transactive Discourse Analysis. ICER (2010), 51–58. DOI:http://dx.doi.org/10.1145/1839594.1839604

46. Laurie Murphy, Gary Lewandowski, Renée McCauley, Beth Simon, Lynda Thomas, and Carol Zander. 2008. Debugging: the good, the bad, and the quirky – a qualitative analysis of novices' strategies. Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education (SIGCSE '08) 40 (2008), 163. DOI:http://dx.doi.org/10.1145/1352135.1352191

47. J. Myers and Brita Munsinger. 1996. Cognitive style and achievement in imperative and functional programming language courses. In National Educational Computing Conference.

48. Isabel Myers-Briggs. 1962. Manual for The Myers-Briggs Type Indicator. Educational Testing Service.

49. R. Nisbett. 2010. The Geography of Thought: How Asians and Westerners Think Differently... and Why. https://books.google.com/books?hl=en

50. Richard E. Nisbett, Kaiping Peng, Incheol Choi, Dan Ames, Scott Atran, Patricia Cheng, Marion Davis, Eric Knowles, Trey Hedden, Lawrence Hirschfeld, Lijun Ji, Beom Jun Kim, Shinobu Kitayama, Ulrich Kuhnen, Jan Leu, David Liu, Geoffrey Lloyd, Hazel Markus, Ramaswami Mahalingam, Taka Masuda, Coline McConnell, Donald Munro, Denise Park, Winston Sieck, David Sherman, Michael Shin, Edward E. Smith, Stephen Stich, Sanjay Srivastava, Timothy Wilson, and Frank Yates. 2001. Culture and Systems of Thought: Holistic Versus Analytic Cognition. Psychological Review 108, 2 (2001), 291–310. DOI:http://dx.doi.org/10.1037//0033-295X.108.2.291

51. Evgeny Osipov and Laurynas Riliskis. 2013. Educating Innovators of Future Internet of Things. IEEE Frontiers in Education (2013).

52. Vivek S. Pai, Peter Druschel, and Willy Zwaenepoel. 1999. Flash: An Efficient and Portable Web Server. Proceedings of the 1999 USENIX Annual Technical Conference (1999), 14.

53. John F. Pane and Brad A. Myers. 1996. Usability Issues in the Design of Novice Programming Systems. Technical Report, Carnegie Mellon University (August 1996), 78. http://repository.cmu.edu/isr/820/

54. David J. Pauleen, Roberto Evaristo, Robert M. Davison, Soon Ang, Macedonio Alanis, and Stefan Klein. 2005. Cultural bias in information systems research and practice: are you coming from the same place I am?. In ICIS, Vol. 17. 2–37. http://aisel.aisnet.org/icis2005/75

55. D. N. Perkins and Fay Martin. 1986. Fragile knowledge and neglected strategies in novice programmers. In Empirical Studies of Programmers. 213–229. https://books.google.com/books?hl=en

56. Anthony Robins, Patricia Haden, and Sandy Garner. 2006. Problem distributions in a CS1 course. Conferences in Research and Practice in Information Technology Series 52 (2006), 165–173.

57. Pablo Romero, Benedict du Boulay, Richard Cox, Rudi Lutz, and Sallyann Bryant. 2007. Debugging strategies and tactics in a multi-representation software environment. International Journal of Human Computer Studies 65, 12 (2007), 992–1009. DOI:http://dx.doi.org/10.1016/j.ijhcs.2007.07.005


58. Daniel M. Russell, Mark J. Stefik, Peter Pirolli, and Stuart K. Card. 1993. The cost structure of sensemaking. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '93 (1993), 269–276. DOI:http://dx.doi.org/10.1145/169059.169209

59. Herbert Schildt. 2014. Java: the complete reference (9 ed.). McGraw-Hill Education.

60. Michael L. Scott. 2009. Programming Language Pragmatics (3 ed.). Elsevier.

61. James C. Spohrer and Elliot Soloway. 1986. Novice Mistakes: Are the Folk Wisdoms Correct? Commun. ACM 29, 7 (1986), 624–632.

62. Lynn Andrea Stein. 1998. What We Swept Under the Rug: Radically Rethinking CS1. Computer Science Education 8, 2 (1998), 118. DOI:http://dx.doi.org/10.1076/csed.8.2.118.3818

63. Richard E. Sweet. 1985. The Mesa programming environment. ACM SIGPLAN Notices 20, 7 (1985), 216–229. DOI:http://dx.doi.org/10.1145/17919.806843

64. Peter Van Roy and Seif Haridi. 2004. Concepts, Techniques, and Models of Computer Programming. MIT Press.

65. Iris Vessey. 1985. Expertise in debugging computer programs: A process analysis. International Journal of Man-Machine Studies 23, 5 (1985), 459–494. DOI:http://dx.doi.org/10.1016/S0020-7373(85)80054-7

66. Garry L. White and Marcos P. Sivitanides. 2002. A Theory of the Relationships between Cognitive Requirements of Computer Programming Languages and Programmers' Cognitive Characteristics. Journal of Information Systems Education 13, 1 (2002), 59–66.

67. Herman A. Witkin and Donald R. Goodenough. 1981. Cognitive Style: Essence and Origins. International Universities Press.

68. Amy B. Woszczynski, Tracy C. Guthrie, and Sherri Shade. 2005. Personality and programming. Journal of Information Systems Education 16, 3 (2005), 293–299.

69. John Zukowski. 1997. Java AWT reference. O’Reilly.