
Internet and Higher Education 14 (2011) 89–97


Classroom walls that talk: Using online course activity data of successful students to raise self-awareness of underperforming peers

John Fritz, Div. of Information Technology & the Department of Language, Literacy and Culture, University of Maryland, Baltimore County, USA

E-mail address: [email protected].

doi:10.1016/j.iheduc.2010.07.007

Keywords: Analytics; Course management systems; Retention; Student success; Self-efficacy; Self-regulated learning

Abstract

Similar to other institutions, the University of Maryland, Baltimore County (UMBC) has determined that a relationship may exist between student performance as defined by grades, and activity in the campus' online course management system (CMS). Specifically, since Fall 2007, UMBC's "Most Active Blackboard Courses" reports show students earning a D or F in a sample of 131 courses used the CMS 39% less than students earning a grade of C or higher. While the sample needs to be expanded and the demographic backgrounds of students need to be studied further, what if this usage pattern holds true throughout the semester? And how might students' awareness, motivation and performance change if they could know this information sooner? This article presents a new tool that UMBC students can (and do) use to check their activity and grades against an anonymous summary of their peers, which might make them more inclined to seek or accept academic support.


© 2010 Elsevier Inc. All rights reserved.

1. Introduction

With barely one in five Americans over 25 earning a bachelor's degree, retention of students who actually enter college is vitally important to our country's global competitiveness (Educational Attainment in the United States: 2007 Detailed Tables, 2008). Yet, nationally, the six-year graduation rate for all colleges and universities is 63% (Berkner, He, & Cataldi, 2003), and students in their second and third year of college can be among the least likely to persist (Lipka, 2006). Institutions need to do better, but retention experts agree that underperforming students also need to take some responsibility for their own learning (Choi, 2005; Hsieh, Sullivan, & Guerra, 2007; Tinto, 1993).

To help, many institutions are turning to information technology in a way known as "academic analytics." Typically associated with business and marketing—Amazon analyzes other people's past purchases to suggest books you might be interested in buying next—analytics is now being used in higher education to identify and even predict students who may be at risk by studying demographic and performance data of former students in the same course, major, and institution.

However, the problem this article attempts to define—and illustrate with preliminary results of a still evolving case study—is how to apply academic analytics into a scalable intervention that motivates underperforming students to seek or accept help, without raising concerns about their privacy or academic profiling. Even if our data models are highly predictive, how do we convey this insight in a way that underperforming students will not dismiss or misunderstand? Why shouldn't they think they're the exception to our rules?

1.1. Review of literature and practice

The earliest attempt to define academic analytics appears in Goldstein and Katz (2005) who called it "an imperfect equivalent term for business intelligence" (p. 2), which essentially describes the use of information technology to support operational and financial decision-making of corporations. Though still evolving, the crossover of analytics from business to higher education can be seen in Goldstein and Katz's survey of 380 higher education institutions and follow up interviews with 27 individuals who reported "exemplary success" with academic analytics. They report that few organizations "have achieved both broad and deep usage" and also provide a useful framework for categorizing key milestones in any institutional application of analytics:

• Stage 1—Extraction and reporting of transaction-level data
• Stage 2—Analysis and monitoring of operational performance
• Stage 3—What-if decision support (such as scenario building)
• Stage 4—Predictive modeling and simulation
• Stage 5—Automatic triggers of business processes (such as alerts)

For the most part, Goldstein and Katz's use of the word "academic" describes a setting where analytics takes place, not necessarily a goal for improvement. They do give a nod to the potential for "improving student learning outcomes," but the scope of their inquiry includes a wide range of administrative and operational concerns that include advancement/fundraising, business and finance, budget and planning, institutional research, human resources, research administration, and academic affairs.

Within these areas, Goldstein and Katz found the most advanced uses of analytics in student services (e.g., enrollment management and retention), which most often report to Academic Affairs. Specifically, "most respondents reported using academic analytics most frequently to identify students who are the strongest prospects for admission. Similarly, in the retention area, respondents use academic analytics most frequently to identify students who may be at risk academically" (p. 10).

Indeed, more than 62% of respondents who did more than simply analyze transaction reports said they would "significantly expand" their capabilities over the next two years, especially as external agencies and accrediting bodies sought more data on institutional effectiveness in achieving student learning outcomes and retention.

It is worth noting that to be successful in academic analytics, Goldstein and Katz found three significant factors that were present in exemplary institutions: 1) Effective institutional training; 2) Presence of staff who are skilled in understanding and applying academic analytics, and 3) Leadership that is committed to evidence-based decision-making. In short, the lack of a cohesive vision for academic analytics in higher education often means that individual campuses are building the plane while they fly it. Consequently, it also means academic analytics is more often defined by example than precept.

Two years after Goldstein and Katz coined the term, Campbell et al. (2007, August) refined academic analytics by specifically focusing on the need to improve student learning outcomes and retention through "actionable intelligence," especially if colleges and universities want to remain competitive in a global economy (p. 42). Campbell highlights examples from such schools as Baylor University, The University of Alabama, Sinclair Community College, Northern Arizona University, and Purdue University (where he works).

Purdue's analytics project is worth noting for its development of a sophisticated "risk algorithm" that correlates two important types of data that all students bring or generate: 1) Pre-college preparation in the form of high school GPA, standardized test scores, and socioeconomic status that can lead to predictive modeling (Goldstein and Katz' Stage Four); and 2) Post admission performance in the form of grades, advising visits and use of the campus' Course Management System (CMS).

While Campbell makes an excellent case for how analytics can be used to profile students who may be at risk, he stops short of defining the optimal way to intervene or present the lessons institutions learn from their data to current students who may need to know them. For all the potential benefits academic analytics can provide, Campbell even speculates what an institution's "ethical obligation of knowing" can or should be (p. 54). In fact, he describes concerns some students may have about privacy and Big Brother watching them. He also cites curious "lessons learned" from University of Northern Arizona researchers who "recognized an important aspect of 'intrusive advising'" that:

Despite the positive gains in performance and retention, the way in which students learn about institutional efforts on their behalf may affect their perceptions of privacy; consequently, the timing and the content of communications require careful planning. (p. 50)

Shortly after Campbell's article raised questions about an institution's ethical obligations for the student data it tracks and analyzes, Norris et al. (2008) called for more, not less, actionable intelligence, and continued the trend of defining academic analytics by example. They also established six primary actions needed to evolve from the current generation of academic analytics (tools, solutions, and services) into action analytics, including number 5, a push to

Develop new practices/solutions that ensure the alignment of institutional goals, strategies, initiatives, interventions, outcomes, and measures in a variety of ways, including alignment from institutional to college to department to program levels (p. 44).

A year after Campbell's article, The Chronicle of Higher Education featured his work at Purdue as part of an extensive roundup of even more examples of academic analytics in action at several institutions including Argosy University, Purdue University, Slippery Rock University of Pennsylvania, South Texas College, SUNY Buffalo, Tiffin University, University of Alabama, University of Central Florida, and the University System of Georgia (Rampell, 2008).

Interestingly, Purdue's academic analytics initiative had developed further since Campbell wrote about it in 2007, and was now employing alerts to "at-risk" students (Goldstein and Katz's Stage Five). Before issuing these system-generated alerts, Purdue administrators used to notify an instructor when an at-risk student appeared to be underperforming. Intervention depended on the instructor to deliver the bad news, which seems logical but could vary in effectiveness based on the instructor's willingness or ability to do this well or in a timely manner.

As a result, based on their academic profile as well as their activity within the Blackboard Course Management System, Purdue students began to see a red, yellow or green "traffic signal" to indicate their current and likely academic performance, and they might get an email encouraging them to "meet with an instructor or seek outside help" (p. 1).

According to The Chronicle, Purdue researchers "found that students in the moderate-risk (yellow light) group who received email messages did better in the course than did their counterparts in a control group. Most of the students identified as being at highest risk (red light) still did not rectify their situations or take advantage of campus resources, however" (p. 1). While Purdue's intervention with students had expanded, the attitudes of some students (like those Campbell reported at the University of Northern Arizona) had not. Describing student reactions to their various institutional analytics projects, Rampell reports students often are "unaware of the efforts," and "in some cases the universities try to keep the students from finding out" (p. 3).

Indeed, one student said: "I kind of feel like this is an intrusion of privacy" and "this sort of sounds like Big Brother's watching" (p. 3). She went on to suggest Purdue should get permission from students before mining their demographic and behavioral data. While some at Purdue contest this student's concern because she did not actually receive an intervention, Campbell's questions about an institution's "ethical obligation of knowing" are wise. In fact, it may be that his ethical pause explains why Purdue (and other schools) had not yet figured out the best way to intervene with underperforming students in 2007.

Since then, Purdue's Kimberly Arnold has reported initial gains in student success (2010), and her institution's early warning traffic light system (now known as Signals) was featured on NBC Evening News. In addition, Purdue is now partnering with SunGard Higher Education to develop Signals further as a commercial offering for higher education (Kvinge, 2009).

However, given Signals' complex Student Success Algorithm (SSA) and required integration of data repositories to which most instructional technology organizations likely don't have access or can't manage, as well as potential student privacy concerns, it may not be as easy or desirable to replicate Signals at other institutions. Also, Arnold says one of the most repeated concerns about Signals from Purdue students, faculty and administrators is a lack of effective practices or consistent use of it by instructors, who are solely responsible for running an intervention. She says Signals was developed on the historical observation that "a student's risk status remained in constant flux throughout his or her career at Purdue" (Arnold, 2010). Why then is change initiated by someone primarily focused on a course?

Perhaps a simpler intervention model can be derived from the University of Georgia System, which has shown that student activity in the campus course management system alone can be used to reliably predict student retention and persistence (Morris, Finnegan, & Wu, 2005). By studying how long and how often students view course content and participate in discussions, researchers found a positive correlation with student success in three online courses in English, Geology and History. It is not clear what interventions, if any, the UGA system then deployed to raise awareness of future students in these classes, but isolating the CMS alone—as a diagnostic tool to support retention—is a significant evolution in academic analytics infrastructure.

Indeed, Macfadyen and Dawson (2010) have recently shown that CMS usage data is highly predictive of student success, especially in courses that use interactive functions such as discussion boards and assessments. However, they challenge the institutional scope of Campbell's CMS prediction model at Purdue, and caution against a "one size fits all" approach to student monitoring that does not take into account differences in instructors' "pedagogical intent" to use a CMS or their actual adoption of it. "Our findings suggest that for the purposes of monitoring student activity and achievement, predictive models must be developed at the course level" (2010, p. 598).

Finally, it should be noted that a national study of undergraduates' use of technology has consistently shown students value the ability to check grades and have access to practice exams and assignments much more than any other CMS function (Caruso & Salaway, 2007; Salaway & Caruso, 2008). In other words, if feedback is available from within a CMS, students will use it—but at what cost to instructors to generate time-consuming, graded assignments? The challenge is helping institutions build or buy more sophisticated analytics tools, perhaps enabled by instructors in courses or operated independently by individual students for their own benefit. The infrastructure for this kind of flexible insight is simply missing in the current market of course management systems.

2. Methodology: UMBC's "Check My Activity" tool, a preliminary case study

Similar to other institutions, the University of Maryland, Baltimore County (UMBC), has determined that a relationship may exist between student success as defined by grades and activity in the campus' online course management system (CMS). Specifically, since Fall 2007, UMBC's Most Active Blackboard Courses reports have shown that students earning a final grade of D or F in a sample of 131 courses used the CMS an average of 39% less than students earning a grade of C or higher:

• Spring 2010 (21 courses) | 37% less
• Fall 2009 (29 courses) | 58% less
• Summer 2009 (9 courses) | 44% less
• Spring 2009 (11 courses) | 38% less
• Fall 2008 (13 courses) | 41% less
• Summer 2008 (7 courses) | 61% less
• Spring 2008 (26 courses) | 32% less
• Fall 2007 (15 courses) | 33% less
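The article does not spell out the arithmetic behind these per-semester figures, but the most natural reading is a relative gap in average hits between the two grade groups; a minimal formulation, under that assumption, is:

\[
\text{usage gap} = \frac{\bar{h}_{C+} - \bar{h}_{D/F}}{\bar{h}_{C+}} \times 100\%
\]

where \(\bar{h}_{C+}\) is the average number of CMS hits per student earning a C or higher in the sampled courses, and \(\bar{h}_{D/F}\) is the average for students earning a D or F.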

Since very few institutions have easy or ready access to both final grades and CMS usage data, these results were determined by asking faculty to voluntarily add a final "GRADE" column (all caps) in their online Bb grade book displayed to students. A custom computer script query of the Bb system then identified the average hits per final grade distribution in these courses. While Bb is not UMBC's official grade submission tool, this was the only way to combine final grades and CMS activity, which we defined as any access (or "click") inside the CMS during the semester.
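As an illustration only (the custom script itself is not published in this article), the aggregation it describes could be approximated as follows, assuming a per-course export with one row per student containing the instructor-entered GRADE value and that student's hit count; the file and column names are hypothetical:

```python
import csv
from collections import defaultdict

def average_hits_by_grade(rows):
    """Return {letter: average hits per student} from rows with 'GRADE' and 'hits'."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        letter = row["GRADE"].strip().upper()[:1]  # "B+" -> "B"
        totals[letter] += float(row["hits"])
        counts[letter] += 1
    return {g: totals[g] / counts[g] for g in totals}

def cms_usage_gap(rows):
    """Percent less CMS activity by D/F students vs. students earning C or higher."""
    high, low = [], []
    for row in rows:
        letter = row["GRADE"].strip().upper()[:1]
        if letter in ("A", "B", "C"):
            high.append(float(row["hits"]))
        elif letter in ("D", "F"):
            low.append(float(row["hits"]))
    high_avg = sum(high) / len(high)
    low_avg = sum(low) / len(low)
    return 100 * (high_avg - low_avg) / high_avg

if __name__ == "__main__":
    # Hypothetical export produced by a reporting query against the Bb database.
    with open("course_grade_hits.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    print(average_hits_by_grade(rows))
    print(f"D/F students used the CMS {cms_usage_gap(rows):.0f}% less")
```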

We know the sample of courses needs to be expanded and the demographic backgrounds of students need to be studied further, but the consistency of this CMS usage trend across multiple semesters raises interesting questions about possible interventions with underperforming students. For example, does the CMS usage discrepancy pattern between D and F students and their peers hold true throughout the semester? If so, how might students' self-awareness, motivation and performance change if they could know how their CMS usage activity compares to more successful peers, earlier in the semester? If not, how and when does the pattern break down? And is it significant enough to dilute student motivation or distort their self-awareness?

By comparing underperforming students' CMS usage trends with academically successful peers—and developing anonymous feedback tools they can use to see the difference—UMBC provides students with an on-demand, objective, and non-human assessment of their own performance that might make them more inclined to seek, accept, or sustain institutional help on their behalf.

Specifically, we have created a "Check My Activity" (CMA) tool that allows students to compare their own activity in Blackboard (higher education's most widely used CMS) against an anonymous summary of their course peers any time they want. Also, if faculty post grades for any student assignment in the course's Bb grade book, students can then see a Grade Distribution Report (GDR) showing how their own activity compares with peers who earned the same, higher or lower grade. While the CMA can only be accessed with a UMBC userid and password, a brief, online video demo of the CMA is available at http://screencast.com/t/ppXtylo9aj.
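To make the comparison concrete, the sketch below shows the kind of anonymous summary such a tool might return for one student: the student's own hit count, the course average, and (where an assignment grade exists) average hits by grade band. This is an illustrative approximation, not UMBC's published CMA code, and all identifiers are made up:

```python
from statistics import mean

def cma_summary(student_id, course_hits, assignment_grades=None):
    """course_hits: {student_id: hits}; assignment_grades: {student_id: letter grade}."""
    summary = {
        "my_hits": course_hits[student_id],
        "course_average": mean(course_hits.values()),
    }
    if assignment_grades:
        by_grade = {}
        for sid, grade in assignment_grades.items():
            by_grade.setdefault(grade, []).append(course_hits[sid])
        # Only aggregates are exposed, so peers stay anonymous.
        summary["my_grade"] = assignment_grades[student_id]
        summary["grade_distribution"] = {
            grade: {"average_hits": round(mean(hits), 1), "students": len(hits)}
            for grade, hits in sorted(by_grade.items())
        }
    return summary

# Example: a five-student course with one graded assignment.
hits = {"s1": 420, "s2": 310, "s3": 150, "s4": 95, "s5": 510}
grades = {"s1": "B", "s2": "B", "s3": "D", "s4": "F", "s5": "A"}
print(cma_summary("s3", hits, grades))
```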

The purpose of the CMA is to provide early and frequent system feedback directly to students, so they are the first to know if and how their CMS activity might be an indicator of their engagement in a course. At the same time, we hope to amplify the feedback effect of assessing student work that faculty are going to do anyway. While retention experts agree that underperforming students need frequent and early feedback to improve their own self-awareness, the burden faculty face in assigning more work that needs grading is a disincentive that may unintentionally keep underperforming students in the dark. There is a difference between feedback and assessment, but the CMA tool allows students to get both, without increasing the burden on faculty to assign and evaluate more student work.

During Fall 2008, we surveyed how students used the CMA tool in SCI100 "Water: An Interdisciplinary Study," a 200-student, required lab-science course for non-science majors that is equally enrolled across all class standings (e.g., freshmen to seniors). The survey had a voluntary participation rate of 20% (41 students). We used Blackboard's built-in "anonymous" survey function, which only tracks student participation, not their specific answers. This means demographic information such as race, gender, ethnicity or academic status such as Grade Point Average (GPA) could not be correlated to specific results of this survey, a limitation that may need to be addressed to learn more about variations in student CMS usage. We used the same survey instrument in a Spring 2009 version of SCI100, as well as HCST 100 "Human Context of Science and Technology." Though we had a smaller participation rate in each of these courses, we did find similar results.

According to the FA2008 survey results (see Appendix A), 28% of the SCI100 students who used the Check My Activity (CMA) tool were "surprised" by what their activity data showed them compared to the class average. While 12% were not surprised, another 42% said they would have to use the CMA more to determine its usefulness. In other words, one could say that 70% of students were at least intrigued by the CMA.

In addition, 54% said they would be "more inclined" to use the CMA before future assignments were due if they had access to a Grade Distribution Report (GDR) showing Bb activity for past assignments. At the time of the survey, only instructors could run a per-assignment GDR for students, who would then have to look up their own grade in the Bb grade book, and compare it with the instructor's anonymous, per-assignment GDR summary on the course web site. But in Spring 2009, after seeing the student responses to this question, and conferring with faculty on a way they could "opt out" from having their assignment grades included, we gave students the ability to run their own GDR on any assignment with a grade in the Bb course grade book. To our knowledge, no faculty member has actually exempted their assignment grades, and we have received no complaints from faculty about the use of their grade book entries to contextualize student CMS activity.

3. Results

While SCI100 student feedback to the CMA tool was encouraging, adoption by the larger student body has been slow since the tool was originally announced in Spring 2008, and featured in The Retriever Weekly student newspaper a year later (Wiggins, 2009). Following the SCI100 study, it seemed clear that students who are not proactively introduced to the CMA tool may not have understood how or why they would use it. Also, since the CMA is not part of the delivered Blackboard software, it is not a function students could expect to find, nor was it easy to do so—it is available only as a "self service" report on our main Blackboard Reports site at www.umbc.edu/blackboard/reports.

So, in academic year 2009–2010, we began a promotional campaign to make students and faculty more aware of the CMA tool. During the first week of October 2009 and March 2010, and each of the following three weekends, we displayed a Blackboard system-wide announcement in all courses announcing the availability of the CMA tool, created a direct link to it and included the brief "show and tell" video demo of how it works (cited previously).

According to Google Analytics tracking reports of the CMA site, user accesses of the CMA tool increased dramatically, from a daily average of 13 visitors in September (the month before our campaign) to a daily average of 388 visitors during the October campaign, followed by a daily average of 109 visitors during November (the month after our campaign). After barely recording any user accesses 18 months after the CMA tool was actually launched, once students had easy, direct access to it in the form of a system-wide announcement—and later in a permanent link on the main Blackboard entry page—we recorded nearly 15,000 visitors in a two-month window. A similar, but slightly smaller pattern emerged during the March 2010 campaign, which included UMBC's Spring Break during the third week.

During this time, we also made the CMA tool easier to find and use by developing and implementing a My Activity "Building Block" directly inside any student's Blackboard course. Since research shows that students value (and obsessively check) the grade book far more than any other CMS function (Caruso & Salaway, 2007), it made sense to locate the "My Activity" link directly beneath the "My Grades" tool inside each Blackboard course. Overall, during AY2009–10, the CMA tool generated more than 45,000 visits and nearly 150,000 page views. Note: UMBC has approximately 12,000 students (9500 undergraduate, 2500 graduate).

How users spent time on the CMA site has changed as well. Even after the initial campaign in October, the average time spent on the site increased from 21 seconds viewing fewer than 1.5 pages (which is likely accounted for as developer hits testing the system) to 1 min, 15 s viewing 3.4 pages. More importantly, returning visitors accounted for 81% of all visits during the October campaign, a rate of use that held true a month after the campaign, and again after the March 2010 promotion. Indeed, it held for the entire academic year (Fig. 1).

Fig. 1. 9/01/09 to 05/20/10 Google Analytics graph of CMA usage statistics.

UMBC's work on understanding student use of Blackboard is based on earlier efforts to understand how instructors use Blackboard. In December 2006, the Division of Information Technology (DoIT) began experimenting with an "average hits per user" approach to determine UMBC's Most Active Blackboard courses. For reporting purposes, a "hit" is recorded whenever a user clicks anything inside a Blackboard course or community (e.g., announcements, documents, discussion board, assignments, etc.). This way, "average hits per user" rankings don't favor large enrollment sites over smaller ones. To date, we have publicly available reports for every semester since 2007, and the project was featured in The Chronicle of Higher Education (Young, 2009).
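A minimal sketch of that normalization, assuming only a per-course export of total hits and enrolled users (the field names are illustrative, not DoIT's actual reporting schema):

```python
def rank_most_active(courses):
    """Rank courses by average hits per enrolled user, highest first."""
    def avg_hits(course):
        return course["total_hits"] / course["enrolled_users"]
    return [(c["course_id"], round(avg_hits(c), 1))
            for c in sorted(courses, key=avg_hits, reverse=True)]

print(rank_most_active([
    {"course_id": "SCI100",  "total_hits": 98_000, "enrolled_users": 200},
    {"course_id": "HIST201", "total_hits": 12_400, "enrolled_users": 25},
]))  # the 25-seat course ranks first (496.0 vs. 490.0 hits per user)
```

Because the ranking is normalized by enrollment, a small but busy course can outrank a much larger one.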

We recognize hits alone are no measure of quality teaching or learning. But as part of our annual Bb user surveys in 2007 and 2008, we asked students, "If you had to identify one instructor who uses Blackboard well, who would it be and why?" Each survey resulted in more than 300 student "nominations" of effective instructors (publicly available on the main Blackboard Reports web site mentioned previously). Interestingly, these student nominations showed a remarkable overlap with instructors whose courses rank highly on the "Most Active Blackboard Courses" reports.

Our hypothesis is that analyzing (and displaying) faculty and student use of a tool they use every day can give them more realistic, timely feedback about their own effectiveness or engagement in a class. Also, if students AND faculty can see that (over time) stronger students tend to be more active in Blackboard than weaker ones, then perhaps this will motivate students to look more critically at their study habits, how they seek help from an instructor or an advisor—and perhaps how they square their larger career goals with their own activity and academic performance. It may also influence an instructor's course design to leverage CMS tools that create more "data points" for student interaction and self-awareness.

3.1. Limitations

Given our current lack of easy access to and correlation of pre-college demographic data with Blackboard CMS usage, if we have not yet established our own students' CMS activity as a predictor of student success, a likely concern might be: how can we leapfrog to employing the CMS as an intervention? Here is where we must acknowledge the still preliminary nature of UMBC's experience. We do not yet have significant evidence that the CMA is in fact "intervening" with underperforming students, by raising awareness of their own performance compared to peers—enough so that it motivates them to change their behavior and improve their final grades.

However, apart from relying on several data mining studies of how course or learning management systems can be used to predict at-risk or underperforming students, there seems to be very little research in higher education generally on how best to actually intervene with such students in these systems they use every day. To date, the only exception seems to be Purdue University, which has been working on its Signals intervention for several years, and assumes a very instructor-centric approach that has yet to achieve consistency of faculty adoption or a critical mass of effective practice.

To be sure, understanding the nature of a retention problem is key to developing an efficient and effective technology solution. But rather than waiting for the perfect solution, we should design, develop, implement and evaluate a variety of reasonable ones that collectively move us forward. Also, unless or until we have more intervention case studies to draw upon, we should study non-technical interventions to determine if there are crossover "lessons" that can be applied to technical approaches.

3.2. Future plans

At this point, our most intriguing research question is understanding why (after they are sufficiently introduced to it) a high percentage of students are returning to the CMA tool, and what, if anything, their use can mean for changing student self-awareness, behavior and academic performance. Are weaker students who are typically not as active in the CMS actually among the cohort of active CMA users? Or is repeated CMA usage simply a case of "the rich getting richer," in that stronger students will generally take advantage of any tools to help them monitor their behavior and self regulate their learning? At this point, we simply don't know.

However, this logical research question also raises an ethical concern. In order to identify how students are using the CMA, we have to track who they are. We can do so, but should we? To paraphrase Campbell, what is UMBC's right—not just our obligation—to know information about students, even if we believe such information will help us help them? Two approaches will guide our inquiry.

First, after consulting with our Institutional Review Board (IRB), we plan to identify who is using the CMA now. But we will not interview, intervene with or otherwise alert them. We will simply identify the student userids, look up their grade point averages from the prior semester, if only to determine (with data) who is actually using the CMA tool. The main reason to do this is because few students have actually responded to our request for an interview to understand how they might use the CMA, let alone how they actually do. And yet, large numbers of students are in fact using it. Why?

Based on this initial data analysis of actual student CMA usage, we envision future qualitative interviews being guided by student responses to two "opt in" check boxes on a "help us improve this site" pop-up section of the CMA tool itself:

1) "It's okay to track my usage of this site."
2) "It's okay to follow up with me for an informational interview."

If any students respond, we will follow up with a qualitative interview to determine how and why they are using the CMA tool. If they do not, we will pursue a second approach.

Similar to our promotional campaign amongst all students in AY2009–10, we will target groups of underperforming students who have been serviced and extensively studied by UMBC in the past. Specifically, by midterm of every semester for the past 21 years, UMBC's Learning Resources Center (LRC) has asked faculty to identify first year students who are in jeopardy of receiving a final grade of D or F. More than 90% of faculty members now identify these students to the LRC, which sends the students a First Year Intervention (FYI) alert about their academic standing.

According to LRC records, 40% of students who only receive the FYI alert eventually raise their final course grade by at least one letter. What if they received similar feedback about their online CMS activity earlier in the semester? If this could help increase the eventual pass rate by even 2–3 percentage points, the gains in student persistence and lost tuition could be significant, since success during the first year (for freshmen and transfer students) is positively correlated to graduation and eventual success after college (Tinto, 1993).

Additionally, we plan to demo and promote the CMA to students in LRC101a, a required study skills and career planning course for students who are facing academic dismissal due to poor performance in three consecutive semesters. LRC101a does not yet have the longitudinal track record of the FYI alerts program, but initial results have been very positive, and by working with course instructors to demo and promote the CMA, we hope to see if these students will use it, if it will make a difference, and if they will tell us or show us why.


As we begin to work more closely with FYI and LRC101a students, and the LRC staff who support them, we envision additional CMA functionality that could include the following:

• Giving students the ability to view not only the average hits per all users compared to their own, but also frequency and duration of their own activity and their peers' activity;

• Giving students the ability to "opt in" to receive an alert when their activity falls below levels associated with students earning a grade they desired for a particular assignment (a minimal sketch of such a check follows this list);

• Giving students the ability to "share" monitoring rights with an advisor, mentor or LRC staff member who is working to support the student;

• Similarly, alerting LRC staff to whom FYI students have given "delegated" access by providing a summary report showing when those students fall below a mutually agreed upon activity level.
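The second bullet, in particular, implies a simple comparison a reporting job could run per student. The following is a hedged sketch of that logic under our own assumptions about the available data (hit counts and assignment grades per student); it is not the deployed CMA code, and the names are hypothetical:

```python
from statistics import mean

def should_alert(my_hits, desired_grade, assignment_grades, course_hits):
    """True if my activity is below the average of peers who earned my desired grade."""
    peer_hits = [course_hits[sid]
                 for sid, grade in assignment_grades.items()
                 if grade == desired_grade]
    if not peer_hits:
        return False  # no peers earned that grade yet, so nothing to compare against
    return my_hits < mean(peer_hits)

hits = {"s1": 420, "s2": 380, "s3": 150}
grades = {"s1": "A", "s2": "A", "s3": "D"}
# Student s3 wants an A but is well below the average activity of A students.
print(should_alert(150, "A", grades, hits))  # True
```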

Finally, to help broaden the pool of CMS-enabled retention interventions (and hopefully replicate our study), we are also exploring technical changes and partnerships that would make it easier for other institutions to do so. UMBC released its CMA code base at the 2009 Blackboard Users World Conference by posting it on our project web site. However, our approach relies on cloning the main production server (for reporting purposes), since early attempts at querying it directly crashed our system twice, a painful lesson we actively pass on to other institutions when we present our work. While we are stable now, our "cloning" method makes it more complicated for other institutions to replicate our approach.

Interestingly, Project ASTRO, a Blackboard-funded "Building Block" (or 3rd party plug-in), queries the production system without crashing it (Nucifora & Kunnen, 2009). ASTRO was released by developers at Seneca College in Toronto, and Grand Rapids Community College in Michigan, but it does not currently provide students with the CMA-like, "self service" tools that might complement human interventions on their behalf. Still, if more colleges and universities could improve their CMS reporting infrastructure by implementing a light-weight solution like this, using analytics to improve retention efforts would take an important, scalable first step.

Also, Drexel University's "Morningstar Reports" have expanded on UMBC's "average hits per user" approach by looking at frequency and duration of user activity, which are part of an interesting algorithm that also includes "organization and pedagogical complexity" (Scheuermann & Berman, 2009). Unlike UMBC, Drexel does not publish the list of its most active courses, nor do they yet provide self-assessment tools that students could use to gain insight into their own performance. But Drexel's approach—developed by former UMBC graduate assistant Jeffrey Berman, who is now a senior web application developer at Drexel—raises interesting possibilities for student insight we are studying closely.

4. Discussion

In trying to use CMS activity data of successful students to intervene with underperforming students, why is it important to do so non-intrusively? Apart from the privacy concerns raised by Campbell and others, the sheer size of the national retention problem we face makes it difficult and expensive to scale intrusive, one-on-one interventions between students and their instructors, advisors or academic support specialists. Even the "course-level" model of student monitoring espoused by Macfadyen and Dawson (2010) raises questions about effectiveness when it is applied as a model for intervention. Essentially, to engage underperforming students, an institution would need to engage (and persuade) faculty to change how they design and teach courses with a CMS. No, this is not impossible, but is it scalable? Purdue is in the process of finding out.

In addition, it is not clear if or how Macfadyen and Dawson's monitoring "proof of concept" actually leverages student responsibility for learning, which Tinto and other retention experts agree is an important component to student retention and long-term success. To be sure, conventional retention wisdom and practice suggests that early, frequent and (yes) intrusive advising by human agents working on behalf of an institution is effective. We do not think these initiatives should be discontinued. But can they be complemented through system-generated, and private, contextualized feedback about students' use of that system? Enough so that they will seek, accept and sustain help if their performance falls below their aspirations?

In short, we are trying to understand what role, if any, technology can play in complementing existing, face-to-face retention efforts, and applying the insights of academic analytics into interventions that raise self-awareness of underperforming students—to perhaps (as Tinto would say) hold up their end of the retention responsibility they share with their college or university.

While Tinto's Leaving College doesn't address technology per se—indeed his seminal work was published in 1988 and 1993, before the Web as we know it existed—his framework for effective retention interventions clearly puts some of the responsibility on students. Specifically, in his chapter, "The Dimensions of Institutional Action," he says institutions can't "absolve" students from "at least partial responsibility for their own education."

To do so denies both the right of the individual to refuse education and the right of the institution to be selective in its judgments as to who should be further educated. More importantly, it runs counter to the essential notion that effective education requires that individuals take responsibility for their own learning (p. 144).

Indeed, in his only technology reference to "computerized student tracking programs," he foreshadows the kind of student feedback infrastructure UMBC is trying to automate with the CMA tool:

"However constructed, the principle of early warning systems is the same, namely that treatment of student needs and problems should occur as early as possible in the student career, and should be approached in an integrated fashion" (p. 171).

Toward such an "integrated approach," it may be useful to consider some key concepts of educational psychology—self-efficacy, learning goal motivation, and self-regulated learning—as possible foundations for future iterations of any CMS-based intervention strategy. Typically, self-efficacy refers to the belief in one's ability or likelihood to complete a task (Bandura, 1997; Choi, 2005; Hsieh et al., 2007; Miller, 2002; Tinto, 1993). Learning goal motivation refers to how one initially perceives the act of learning: as an intrinsically interesting and worthwhile goal in its own right, or as an externally defined performance benchmark one either can or can't achieve (Hsieh et al., 2007; Tinto, 1993). If students view their own engagement in education as a performance that must be attained, then their perception of how wide the gap is between their current and desired goal could influence their sense of self-efficacy.

Encompassing both concepts is the theory of self-regulated learning, which leading proponent Barry Zimmerman defines as "an activity that students do for themselves in a proactive way, rather than as a covert event that happens to them reactively as a result of teaching experiences" (2001, p. 1). While there are several theories of self-regulated learning, ranging from behavioral to social cognitive to constructivist, Zimmerman says self-regulated students are "metacognitively, motivationally, and behaviorally active participants in their own learning process" (2001, p. 5). He admits students vary in their abilities to self-regulate, and that capacity doesn't grow automatically as we get older. But his own experience with developmental math instruction at the City University of New York (CUNY), and replicated studies at other schools, suggests they can learn to become self-regulated (Glenn, 2010).




It's tempting to view self-regulated learning as isolated or asocial, perhaps even wishful thinking on the part of administrators looking for a way to break through with underperforming students. But in reviewing the social cognitive theory of self-regulated learning, Zimmerman shows that social context is key to developing one's ability to self regulate, through the initial awareness and reflection we gain as we compare our own performance with a mentor or model who is more successful. This development typically follows four stages:

1. Observational: students learn to distinguish the major features of a model's skill or strategy.

2. Emulative: a learner's enactive performance approximates the general form of a model's skill or strategy.

3. Self-control: students can perform a skill or strategy based on mental representations of a model's performance.

4. Self-regulation: learners can adapt their skills and strategies systematically as personal and contextual conditions change.

"Thus, from a social cognitive perspective, a learner's acquisition and development of skill or strategy develops initially from social sources and shifts subsequently to self-sources in a series of levels" (2001, p. 22).

This state of discrepancy—between what we think, feel, believe and know about ourselves vs. what we observe or believe to be true of others we wish to emulate—is a key principle of Albert Bandura's social cognitive research in self-efficacy (1986, 1997). Initially, Bandura was focused on helping patients overcome debilitating fears (e.g., handling snakes) or quitting difficult habits (e.g., smoking). Through repeated observation of successful models engaging in behaviors that might initially seem unimaginable, patients would not just learn to ignore their fears or addictions, but would actually begin to develop and apply self-confidence they saw modeled by others—and eventually themselves. As one might imagine, Bandura's work has been very influential in many fields, including addictions clinical therapy (Miller, 2002).

However, Bandura's theories have also been useful in understanding how "discrepancy" might support students. Consider the following statements from his Social foundations of thought and action: A social cognitive theory (1986), and how they might be applied in the context of a CMS-based intervention that allows students to compare their own activity with peers:

• Students judge how well they might do in a chemistry course from knowing how peers, who performed comparably to them in physics, fared in chemistry (p. 404).

• People judge their capabilities partly by comparing their performances with those of others (p. 403).

• The performances of others are often selected as standards for self-improvement of abilities (p. 405).

• People judge their capabilities partly through social comparison with the performances of others (p. 421).

• The adequacy of performance attainments depends upon the personal standards against which they are judged (p. 447).

Consistent with Tinto's views on student responsibilities for their own learning, we believe that focusing on students' self-efficacy and self-regulated learning is key to an institutional use of technology that applies the insights of academic analytics into a non-intrusive intervention. Truly scalable interventions should be about facilitating and leveraging students' own motivations into an "evidence-based" intervention that raises their self-awareness to seek or accept help the institution may be ready, willing and able to provide—if only the students will "hear" them.

5. Conclusions

While institutions vary in the scope of their academic analytics initiatives, and the resources they can afford to commit to them, we believe consistency will emerge as more schools report effective interventions (not just predictions) that lead to changes in student self-awareness, motivation, and behavior.

However, legal, ethical and privacy issues mean there may be only so much that institutions can or should do to change student motivation, which is key to real learning. Can a computer-generated report of peer-based activity be impersonal enough, so that students have no choice but to consider only their behavior as a cause for poor performance?

Whether they do or do not act on this awareness is up to each individual, but in supporting at-risk or underperforming college students, we believe institutions should err on the side of preserving a student's right to decide. It is much easier to respond to a request for help, than it is to solicit one from a student who doesn't know or trust the person delivering what they perceive to be bad or inaccurate news. In other words, we believe how underperforming students learn about their deficiencies influences if and how they will act on this information.

Accordingly, an analytics-based intervention may be effective precisely because students see they are the first to know what their performance data may or may not mean to them. Just as UMBC faculty are learning who is effectively using the campus CMS through published usage reports and student survey results, we believe UMBC students comparing their own performance to an anonymous summary of their peers may be more inclined to follow up and seek help.

Despite our functional and technical lessons learned, we believe personalized CMS activity reports may be able to "say" to students what they would not (or could not) initially hear or accept from a professor or academic advisor through a personal intervention. Similarly, if faculty will simply use the delivered grade book in an online CMS to post assignment grades, then a "self help" tool like our Check My Activity (CMA) might be able to leverage students' obsessive status-checking tendencies, thus providing feedback instructors do not have to work very hard to provide. Taken together, we hope interventions based on academic analytics may help faculty and students inform each other's use of the course management system in a mutually beneficial relationship.

Appendix A

Table A1. SCI100 Bb CMA survey results (41 participants; responses expressed in percentages).

1. Did you know about the myUMBC "Check My Activity" (CMA) tool before it was presented to your SCI100 class, or you clicked on the CMA tool link in this Bb course?
• Yes AND I've used it. | 38.0385
• Yes BUT I've NOT used it. | 12.44
• No | 47.249
• I don't know | 2.2725
• Unanswered | 0
• Total | 100

2. How many times have you used the CMA tool since you learned about it?
• 1 to 5 times | 71.651
• 6 to 10 times | 4.5455
• 11 to 20 times | 2.6315
• More than 20 times | 0
• Unanswered | 2.2725
• Total | 100

3. If you have not used the CMA since you learned about it, please select the best answer that describes why?
• I could not find it. | 7.177
• I do not understand it or why I would use it. | 18.5405
• I don't believe it can be of help to me. | 10.5265
• Other (please explain in "Short Answer" question #4) | 0
• Not applicable (I've used it) | 61.1245
• Unanswered | 2.6315
• Total | 100

5. If you have used the CMA, please select the best answer that describes your view of what it showed about your Blackboard activity compared to peers.
• I was surprised how my activity compared with peers. | 27.8705
• It confirmed what I already knew about my own activity. | 11.7225
• I would need to use it more to determine its usefulness. | 41.8665
• Other (please explain in "Short Answer" question #6) | 0
• Not applicable (I have NOT used it) | 16.268
• Unanswered | 2.2725
• Total | 100

7. If your instructor(s) published a report showing the average Blackboard hits per user by all grade ranges for past assignments (e.g., As through Fs), would you be more OR less inclined to use the CMA before future assignments are due?
• More inclined | 53.9475
• Less inclined | 9.8085
• Not sure | 36.244
• Unanswered | 0
• Total | 100

8. Overall, do you think a tool like the CMA could improve your own (or other students') engagement in a course that is using Blackboard? If yes or no, please explain why.
• See separate "Comments" sheet

9. To help improve the CMA, which of the following would you be willing to do?
• Participate in a one-on-one interview about my CMA usage. No identifying information about you will be recorded. We're only interested in how you use the tool. | 4.904
• Participate in a small focus group about my CMA attitudes or opinions. No identifying information about you will be recorded. We're only interested in how you use the tool or what you think about it. | 4.904
• Complete a follow up anonymous survey like this one. | 52.3925
• Nothing | 37.799
• Unanswered | 0
• Total | 99.9995

10. If you are willing to participate in a follow up interview or focus group, please provide the following: 1) your first/last name, 2) your email address and a phone # where you can be reached, 3) your preference for one-on-one interview or focus group. Otherwise, type N/A to complete this question. (Responses are counts, not percentages.)
• Provided contact information | 2
• Typed N/A | 28
• Unanswered | 11
• Total | 41

SCI100 Survey Comments (samples—emphasis added to highlight range of student attitudes)

8. Overall, do you think a tool like the CMA could improve your own (or other students') engagement in a course that is using Blackboard? If yes or no, please explain why.

• I'm not sure how everything works with this CMA so I can't answer
• I do believe that a tool like CMA could improve peoples' ENGAGEMENT in a course squarely based on competition. But i do not believe that this tool will have too much of an impact on peoples overall grade in a course. A person who is willing to do the work is just more likely to be more active on blackboard (that's what I believe).
• Well, mostly I think if people are dedicated enough to the class, they'll post a lot. But I think telling people how much their posting in comparison to others kind of sets a standard as to how much they should be posting, so kids who aren't as dedicated will try to keep up with those who are by posting and clicking unnecessarily just to get the points. I feel like people are going to use this tool for the wrong reasons. But that's just me...
• Yes, it will overall improve the students engagement in a course using blackboard because those with the most hits are generally those getting higher scores.
• No, not really. I don't think increased dependency or involvement in blackboard is necessarily a beneficial step in education. Also, if students don't use blackboard anyways, they are not going to check the CMA feature anyways.
• Yes, because it shows how engaged I am in comparison to other students using BB.
• no, i don't think it is very helpful. I skim through it but never really take notice of it
• CMA could improve my own engagement in a course that is using Blackboard, because it can shows me my activity compared with my course peers, which is very beneficial for me. If I am not doing well in a class, I would know that I would have to use Blackboard more often if there is a correlation between good grades and Blackboard activity.
• Yes, because I would feel more pressure to be with the rest of the crowd and be just as participating as them so I don't look bad.
• Yes. As with me, it would help gauge whether I'm missing an aspect of the course that other students. For example, I use Bb for Sci 100 a lot, but not really for my other classes. With the CMA tool, I saw that classmates were using Bb and I wasn't, so I visited my other class sites to see what I was missing.
• no, blackboard is a silly waste of time. why can't we do blackboard activities in class?
• yes because we can actually compare how well we are doing in the class, or how we can work harder like "other students—who click into the site" to improve our studying skills.
• I don't think that the amount a student looks at blackboard courses affects ability.
• It doesn't matter to me.
• I think it could improve the engagement in a course that is using Blackboard because it shows just how much you participate outside of class, but it is also VERY VERY difficult to remember to post things on here because people are not constantly talking about it. I barely knew there was a survey we had to fill out.

References

Arnold, K. (2010). Signals: Applying academic analytics. EDUCAUSE Quarterly, 33(1). Retrieved from http://www.educause.edu/library/EQM10110

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1997). Self-efficacy: The exercise of control. Macmillan.

Berkner, L., He, S., & Cataldi, E. F. (2003). Descriptive summary of 1995–96 beginning postsecondary students: Six years later. Education Statistics Quarterly, 5(1), 62–67.

Campbell, J., DeBlois, P., & Oblinger, D. (2007, August). Academic analytics: A new tool for a new era. EDUCAUSE Review, 42(4), 40–57. Retrieved from http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume42/AcademicAnalyticsANewToolforaN/161749

Caruso, J., & Salaway, G. (2007). The ECAR study of undergraduate students and information technology, 2007 — Key findings (pp. 1–15). EDUCAUSE Center for Applied Research. Retrieved from http://www.educause.edu/ers0706

Choi, N. (2005). Self-efficacy and self-concept as predictors of college students' academic performance. Psychology in the Schools, 42(2), 197–205. doi:10.1002/pits.20048

Educational Attainment in the United States: 2007 Detailed Tables (Current Population Survey, 2007 Annual Social and Economic Supplement). (2008). U.S. Census Bureau. Retrieved from http://www.census.gov/population/www/socdemo/education/cps2007.html

Glenn, D. (2010, February 7). Struggling students can improve by studying themselves, research shows. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Struggling-Students-Can/64004/?key=SThyclQ8MyFFYyZhe3UWfCJUYHEuIBkqaXMWYSsabV1U

Goldstein, P., & Katz, R. (2005). Academic analytics: The uses of management information and technology in higher education — Key findings (pp. 1–12). EDUCAUSE Center for Applied Research. Retrieved from http://www.educause.edu/ECAR/AcademicAnalyticsTheUsesofMana/156526

Hsieh, P., Sullivan, J. R., & Guerra, N. S. (2007). A closer look at college students: Self-efficacy and goal orientation. Journal of Advanced Academics, 18(3), 454–476. Retrieved from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=EJ773185

Kvinge, L. (2009, October 29). Purdue University and SunGard Higher Education to collaborate on product to improve academic success. Press release. Retrieved from http://www.sungardhe.com/about/news/PressReleases/Article.aspx?id=8967&LangType=1033

Lipka, S. (2006, September 8). After the freshman bubble pops. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/After-the-Freshman-Bubble-Pops/4556

Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an "early warning system" for educators: A proof of concept. Computers & Education, 54(2), 588–599. doi:10.1016/j.compedu.2009.09.008

Miller, W. R. (2002). Motivational interviewing: Preparing people for change (2nd ed.). New York: Guilford Press.

Morris, L. V., Finnegan, C., & Wu, S. (2005). Tracking student behavior, persistence, and achievement in online courses. The Internet and Higher Education, 8(3), 221–231. doi:10.1016/j.iheduc.2005.06.009

Norris, D., Baer, L., Leonard, J., Pugliese, L., & Lefrere, P. (2008, February). Action analytics: Measuring and improving performance that matters in higher education. EDUCAUSE Review, 43(1), 42–67. Retrieved from http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume43/ActionAnalyticsMeasuringandImp/162422

Nucifora, S., & Kunnen, E. (2009). Project ASTRO. Project web site. Retrieved from http://projects.oscelot.org/gf/project/astro/

Rampell, C. (2008, May 30). Colleges mine data to predict dropouts. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Colleges-Mine-Data-to-Predict/22141/

Salaway, G., & Caruso, J. (2008). The ECAR study of undergraduate students and information technology, 2008 — Key findings. EDUCAUSE Center for Applied Research. Retrieved from http://www.educause.edu/ECAR/TheECARStudyofUndergraduateStu/163286

Scheuermann, M., & Berman, J. (2009). Drexel Bb Vista STAR Report. Retrieved from http://www.drexel.edu/irt/news/reports/BbVistaSTARReport/

Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago: University of Chicago Press.

Wiggins, R. (2009, April 7). Students who use Blackboard often are more likely to do well in their classes. The Retriever Weekly. Retrieved from http://www.retrieverweekly.com/?module=displaystory&story_id=4360&format=html

Young, J. (2009, January 8). A wired way to rate professors — and connect teachers. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/A-Wired-Way-to-Rate-Profess/1439/

Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis (Ch. 1). In B. J. Zimmerman & D. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives. Routledge.