

School of Management: Department of Information & Knowledge Management, Sagy Center for Internet Research; Faculty of Social Sciences: Department of Information Systems

Proceedings of the 6th Israel Association for Information Systems (ILAIS) Conference

July 2, 2012

University of Haifa, Haifa, Israel

Editors: Daphne Raban, David Bodoff, Irit Hadar

The Israel Association for Information Systems (ILAIS) was founded in 2005 as the Israeli chapter of the Association for Information Systems (AIS). The goal of ILAIS is to promote the exchange of ideas, experiences and knowledge among IS scholars and professionals engaged in IS development, management and use.


School of Management: Department of Information & Knowledge Management, Sagy Center for Internet Research; Faculty of Social Sciences: Department of Information Systems

Conference Program

09:00-09:30 Welcome and greetings:

Dov Te'eni, President of AIS and Phillip Ein-Dor, President of ILAIS

Room 570, Education Building (Brazil Building)

Co-Chairs: Daphne Raban, David Bodoff, Irit Hadar, University of Haifa

09:30-10:30 Keynote speaker: Professor Mark Silver, Fordham University

10:30-10:50 Coffee

10:50-12:20 Parallel sessions A-C

Session A: Room 570
Session Chair: Yesha Sivan
Rivka Markus (The Knesset): The use of information in the working of the Knesset plenum
Alon Peled (The Hebrew University of Jerusalem): The Grand Information Bazaar: Lessons from Iceland, the Netherlands, and Australia on improving information exchange within the public sector
Erran Carmel (American University, Washington D.C.): The Time-Zone Challenged Workplace: What we know and what we don't know
Ruti Gafni (Tel Aviv–Yaffo Academic College, Open University of Israel), Eran Aqua (Tel Aviv–Yaffo Academic College), Yesha Sivan (Tel Aviv–Yaffo Academic College): Teaching the Real Using the Virtual (TRUV): The case for a University Level Integrative Course
Gali Naveh (Ben-Gurion University of the Negev), Adir Even (Ben-Gurion University of the Negev), Lior Fink (Ben-Gurion University of the Negev), Sigal Berman (Ben-Gurion University of the Negev): Teaching Manufacturing-Oriented Information Technology for the Future Factory


Session B: Room 665

Session Chair: Nitza Geri
Yael Karlinsky-Shichor (Tel Aviv University), Moshe Zviran (Tel Aviv University): Factors influencing perceived benefits and user satisfaction in knowledge management systems
Ina Blau (The Open University of Israel), Tami Neuthal (The Open University of Israel): Marketers in Social Media: Disseminating Information through a Twitter Professional Community of Practice
Daphne Raban (University of Haifa), Hila Koren (University of Haifa), Ido Kaminer (Technion, Haifa): Social Dynamics in Critical Mass Formation in Information Sharing Online
Michal G. Yablowitz (University of Haifa), Daphne Raban (University of Haifa): The Effect of Media Types, Message Cues and Framing on Investment Decisions
Eilat Chen Levy (University of Haifa), Sheizaf Rafaeli (University of Haifa), Yaron Ariel (Yezreel Valley College): Interruptions in Online Environment

Session C: Room 408
Session Chair: Mor Peleg
Shiri Assis-Hassid (Ben-Gurion University of the Negev), Tsipi Heart (Ben-Gurion University of the Negev), Joseph Pliskin (Ben-Gurion University of the Negev), Iris Reychav (Ariel University Center): The effects of electronic medical records on physician-patient communication: a theoretical framework
Adi Fux (University of Haifa), Mor Peleg (University of Haifa), Pnina Soffer (University of Haifa): Personalizing Clinical Decision-support Systems: Eliciting Categories of Personal Context and Effects
Esther Brainin (Ruppin Academic Center), Efrat Neter (Ruppin Academic Center): Assessing the readability and cultural sensitivity of health information on the Web
Hadas Weinberger (Holon Institute of Technology), Boaz Tadmor (Beilinson Hospital), Pierre Singer (Beilinson Hospital): Healthcare Decision-making: the generative dance between chaos and control at the point of care for the complex-patient

12:20-13:45 Lunch


13:45-15:15 Parallel sessions D-F

Session D: Room 570
Session Chair: Pnina Soffer
Irena Milstein (Holon Institute of Technology), Roy Gelbard (Bar-Ilan University), Arie Ben-David (Holon Institute of Technology): Supplier ranking by multi-alternative proposal analysis for agile projects
Sofia Sherman (University of Haifa), Irit Hadar (University of Haifa): Software Architecture Process in Agile Development Methodologies
Ofir Nimitz (University of Haifa), Pnina Soffer (University of Haifa): Process Choreography Languages Evaluation via Ontology
Dizza Beimel (Ruppin Academic Center), Mor Peleg (University of Haifa): Comparing ontology languages for model representation of the SitBAC Ontology via a controlled experiment: research design

Session E: Room 665
Session Chair: Gilad Ravid
Amit Rechavi (University of Haifa): Knowledge and Social Networks in Yahoo! Answers
Amit Polak (Ben-Gurion University of the Negev), Adir Even (Ben-Gurion University of the Negev), Gilad Ravid (Ben-Gurion University of the Negev): Integrating Knowledge and Insights into Business Intelligence Systems
Adva Assido (Ben-Gurion University of the Negev), Adir Even (Ben-Gurion University of the Negev), Edna Schechtman (Ben-Gurion University of the Negev): Using Inequality and Overlap Indices to Improve Decision-Making Processes in Business Intelligence Environments
Orit Raphaeli (Ben-Gurion University of the Negev), Gali Naveh (Ben-Gurion University of the Negev), Michal Levi (Ben-Gurion University of the Negev), Sigal Berman (Ben-Gurion University of the Negev), Lior Fink (Ben-Gurion University of the Negev): Measuring the business value of e-business technologies


Session F: Room 408

Session Chair: Adir Even
Alisa Wechsler (Ben-Gurion University of the Negev), Adir Even (Ben-Gurion University of the Negev): Assessing and Correcting Data Accuracy Defects Using the Markov-Chain Model
Gabbay Nir (Tel Aviv University): Controlling critical spreadsheets using ESM (Enterprise Spreadsheet Management) system
Michel Benaroch (Syracuse University), Anna Chernobai (Syracuse University): IT Operational Risk Events as COBiT Control Failures: A Conceptualization and Empirical Examination
Mattias Weidlich (Technion – Israel Institute of Technology): The ICoP Framework for Business Process Model Matching

15:15-15:30 Coffee

15:30-16:15 Room 570
Peretz Shoval (Ben-Gurion University of the Negev): Report of the CHE Committee on Programs of Study in Information Systems in the Israeli Academy

16:15-17:00 ILAIS meeting
Chair: Phillip Ein-Dor

ILAIS Program Committee:
Michel Avital, Copenhagen Business School, Denmark
Mira Balaban, Ben-Gurion University of the Negev, Israel
Dizza Beimel, Ruppin Academic Center, Israel
Adir Even, Ben-Gurion University of the Negev, Israel
Lior Fink, Ben-Gurion University of the Negev, Israel
Nitza Geri, The Open University of Israel
Tsipi Heart, Ben-Gurion University of the Negev, Israel
Yoram Kalman, The Open University of Israel
Igor Kanovsky, Yezreel Valley College
Tsvi Kuflik, University of Haifa, Israel
Yair Levy, Nova Southeastern University, USA
Yossi Lichtenstein, The College of Management Academic Studies, Israel
Gal Oestreicher-Singer, Tel-Aviv University, Israel
Gilad Ravid, Ben-Gurion University of the Negev, Israel
Iris Reinhartz-Berger, University of Haifa, Israel
Aviv Shachak, University of Toronto
Pnina Soffer, University of Haifa, Israel
Arnon Sturm, Ben-Gurion University of the Negev, Israel


Table of Contents

8 How to Think about IT

Mark Silver

9-11 The use of information in the working of the Knesset plenum

Rivka Markus

12-15 The Grand Information Bazaar: Lessons from Iceland, the Netherlands, and Australia on improving information exchange within the public sector

Alon Peled

16-19 The Time-Zone Challenged Workplace: What we know and what we don’t know

Erran Carmel

20-27 Teaching the Real Using the Virtual (TRUV): The case for a University Level Integrative Course

Ruti Gafni, Eran Aqua and Yesha Sivan

28-30 Teaching Manufacturing-Oriented Information Technology for the Future Factory

Gali Naveh, Adir Even, Lior Fink and Sigal Berman

31-37 Factors influencing perceived benefits and user satisfaction in knowledge management systems

Yael Karlinsky-Shichor and Moshe Zviran

38-43 Marketers in Social Media: Disseminating Information through a Twitter Professional Community of Practice

Ina Blau and Tami Neuthal

44-48 Social Dynamics in Critical Mass Formation in Information Sharing Online

Daphne Raban, Hila Koren and Ido Kaminer

49-53 The Effect of Media Types, Message Cues and Framing on Investment Decisions

Michal G. Yablowitz and Daphne Raban

54-57 Interruptions in Online Environment

Eilat Chen Levy, Sheizaf Rafaeli and Yaron Ariel

58-63 The effects of electronic medical records on physician-patient communication: a theoretical framework

Shiri Assis-Hassid, Tsipi Heart, Joseph Pliskin and Iris Reychav

64-66 Personalizing Clinical Decision-support Systems: Eliciting Categories of Personal Context and Effects

Adi Fux, Mor Peleg and Pnina Soffer

67-69 Assessing the readability and cultural sensitivity of health information on the Web

Esther Brainin and Efrat Neter

70-74 Healthcare Decision-making: the generative dance between chaos and control at the point of care for the complex-patient

Hadas Weinberger, Boaz Tadmor and Pierre Singer


75-77 Supplier ranking by multi-alternative proposal analysis for agile projects

Irena Milstein, Roy Gelbard and Arie Ben-David

78-80 Software Architecture Process in Agile Development Methodologies

Sofia Sherman and Irit Hadar

81-85 Process Choreography Languages Evaluation via Ontology

Ofir Nimitz and Pnina Soffer

86-89 Comparing ontology languages for model representation of the SitBAC Ontology via a controlled experiment: research design

Dizza Beimel and Mor Peleg

90-93 Knowledge and Social Networks in Yahoo! Answers

Amit Rechavi and Sheizaf Rafaeli

94-96 Integrating Knowledge and Insights into Business Intelligence Systems

Amit Polak, Adir Even and Gilad Ravid

97-100 Using Inequality and Overlap Indices to Improve Decision-Making Processes in Business Intelligence Environments

Adva Assido, Adir Even and Edna Schechtman

101-104 Measuring the business value of e-business technologies

Orit Raphaeli, Gali Naveh, Michal Levi, Sigal Berman and Lior Fink

105-107 Assessing and Correcting Data Accuracy Defects Using the Markov-Chain Model

Alisa Wechsler and Adir Even

108-114 Controlling critical spreadsheets using ESM (Enterprise Spreadsheet Management) system

Gabbay Nir

115-117 IT Operational Risk Events as COBiT Control Failures: A Conceptualization and Empirical Examination

Michel Benaroch and Anna Chernobai

118-120 The ICoP Framework for Business Process Model Matching

Mattias Weidlich

121 Report of the CHE Committee on Programs of Study in Information Systems in the Israeli Academy

Peretz Shoval


Mark S. Silver Fordham University

Mark S. Silver is Associate Professor and Area Chair of Information Systems at Fordham University. Professor Silver received his Ph.D. from the Wharton School at the University of Pennsylvania and has been a faculty member at UCLA and NYU. He is the author of a book, Systems That Support Decision Makers: Description and Analysis (Wiley, 1991), and co-author of "The IT Interaction Model" (MIS Quarterly, 1995), among other journal articles and book chapters. Professor Silver introduced the concepts of "System Restrictiveness" and "Decisional Guidance" into the Information Systems literature. His current research interests focus on (1) the design features of IT artifacts and (2) their connection to IT effects. Professor Silver's article "A Foundation for the Study of IT Effects" (with M. Lynne Markus, JAIS, 2008) received two best paper awards.

How to Think about IT

Abstract

At a time when technology is everywhere and people are very facile at using the latest apps and devices, they often think they understand IT very well. But knowing how to operate a technological device, knowing which buttons to push, is not the only, or even the most important, knowledge we require in today's technology-laden society. In fact, learning the mechanics of operating a given technology can be the easy part. The more challenging and, typically, more neglected matter is understanding how our lives are affected, for better or for worse, by the way we, and those around us, use IT. Such comprehension, knowing how to think about IT, is essential for anyone who wants to flourish, or even survive, in the high-tech world of the 21st century. This presentation proposes a set of fundamental principles for thinking about IT and their action implications for choosing among and using IT artifacts.


THE USE OF INFORMATION IN THE WORKING OF THE KNESSET PLENUM

Rivka Markus
The Knesset Archive, Jerusalem, Israel

[email protected]

Keywords: bibliometry, sources of information, parliament, Member of Knesset.

INTRODUCTION

For the last fifty years, information and the flow of information have been considered fundamental to organizational activities. The process of decision making is primarily based on the information each individual in the "decision making forum" has prior to the "decision making process" (Simon, 1947; Easton, 1953; Deutsch, 1966; Galnoor, 1982). Neither the decision making process nor its implementation is possible without information. In a democratic regime the parliament is the highest authority whose primary purpose is making decisions on behalf of the state. Therefore, the sources of information that members of the legislature have access to, and how the members of parliament relate to these sources, are important to understanding political activities. The Knesset, the Israeli parliament, was established a few months after the state was founded. The role of the Knesset is primarily to pass laws and to supervise the government's actions. The Knesset members have a major part in the political activity of the State of Israel. Thus, investigating how they used information from 1950 to 2006 (almost sixty years) can help in understanding the decision making process in Israel. The major objective of this research was to investigate what types of information were used by the Members of the Knesset and to find out whether the information used was correlated with the members' characteristics.

METHODOLOGY

In order to carry out the research, a bibliometric analysis was used. A sample of quotations by participants in the debates in the Plenum during one working week of the 1st, 2nd, 8th, 9th, 13th, 14th and 16th Knessets was used. One working week of the Knesset consists of plenary sessions held on Monday, Tuesday, and Wednesday. The dates of these weeks were the second week of October in the years 1950, 1952, 1975, 1981, 1994, 1997, 2003, and 2006. The week chosen was always in the second session of each Knesset. In the Knessets tested, the information was categorized by the quotation used. In order to allow a statistical analysis, the information used by Knesset members was divided into three main categories:

1. Personal sources (e.g. professional experience, Jewish sources, literary sources).

2. Internal government sources including all types of governmental publications and governmental documents (e.g. official publications, Knesset debates, laws, bills).

3. External sources, with emphasis on print newspapers and online newspapers (e.g. media, communication resources, experts).


Each quotation was tagged with its date and the type of activity in which it was said, i.e. general debate, inquiries, or legislative debates. The Knesset members were characterized, for example, according to the following parameters: opposition/coalition, government role, religion, and gender. One of the assumptions on which this study is based is that if a Member of the Knesset quoted a source, he considered the source important. In the statistical analysis, the correlation between the dependent variables (the type of information) and the independent variables (member's characteristics, date and type of activity, sampled Knessets) was obtained. The Statistical Package for the Social Sciences (SPSS) was used to analyze the results of the study.
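To make the analysis concrete, here is a minimal sketch, in Python rather than SPSS, of the kind of contingency-table analysis described above. The data, variable names, and the choice of a chi-square test of independence are illustrative assumptions; the abstract does not specify the exact statistic used.

```python
# Illustrative sketch only: the study used SPSS; data and names here are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

# One row per sampled quotation: the source category and a member characteristic.
quotes = pd.DataFrame({
    "source_type": ["internal", "external", "internal", "personal",
                    "internal", "external", "internal", "external"],
    "seniority":   ["senior", "junior", "senior", "junior",
                    "senior", "junior", "senior", "junior"],
})

# Cross-tabulate source type against the characteristic and test independence.
table = pd.crosstab(quotes["source_type"], quotes["seniority"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```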

RESULTS

A total of 1043 speeches were examined. The speeches were distributed across the sampled Knessets. The increase in speeches over time reveals a significant increase in the number of topics debated over the years. The time factor influenced the nature of the work of the Plenum: as time passed, the activity in the Plenum increased. Sixty percent of the activities in the 1st and 2nd Knessets were devoted to legislative matters. Later Knessets emphasized debates, inquiries, and other activities that relate to supervision of the government's actions. However, inquiries were minimized through the adoption of rules. In this study, a week of parliamentary work in each Plenum researched was taken as a sample. The number of speeches made during this sample week increased from tens of speeches in the first Knessets to 247 speeches in the 16th Knesset. The major causes for this increase can be attributed to the debates in the Plenum being made public, increased public pressure, inter-party competition, increased pressure by interest groups, and lobbying. Although there was an increase in the number of speeches, and simultaneously an increase in the number of sources of information, there was no statistically significant difference in the proportion of sources quoted across the various Knessets. In more than half of the speeches only one quotation was cited. In approximately 40% of the speeches two or more sources were cited. In less than 10% of the speeches no source was cited. During the 1st Knesset half of the quotes were accurate; however, as time passed, the accuracy of the quotes declined. Our statistical analysis revealed that the number of quoted resources did not change significantly over the years. In comparing the types of sources, we found that internal government sources were the most frequently quoted. In the 1st Knesset, approximately three quarters of the sources cited by members of parliament were internal sources. From the 2nd Knesset onward, these sources of information constituted approximately 50% of sources quoted. The most significant finding regarding the sources used in the work of the Plenum pertains to the seniority (standing of the Knesset Member in his party) of the participants in the debates in the Plenum. The more senior the participant in the debate, the more inclined he is to use internal sources; a lower-ranking participant is more likely to use external sources. Furthermore, this finding held whether the participant held a position in the executive branch (ministers and deputy ministers) or a position in the Knesset. It was also valid when considering the party rank of the candidates running for a seat in the Knesset, and when considering social factors such as gender or nationality. Those Members of the Knesset who previously were high-ranking officers in the Israel Defense Forces were an exception to the rule: amongst these Knesset Members there was a higher percentage of quotes from internal sources. This exception, too, held whether the participant held a position in the executive branch or in the Knesset.

CONCLUSION

The types of information used by the Members of the Knesset significantly correlated with the Members' characteristics, e.g. seniority, personal background, and position in the Knesset. The position that the Member held in the government or in the Knesset was highly correlated with the sources of information (quotations); however, this correlation was not a function of time. These findings were constant in all the Knessets that were tested. In the 60 years reviewed in this study, Knesset Members did not, in percentage terms, change their attitudes toward information. Throughout the period under investigation, there were changes in the availability of informational resources, the types of informational resources, the intensity of activities, and the style of activities and rules. There is an interaction among three factors: the period, the type of activity, and the personal background. Personal background plays a role in the debates in the Plenum, especially when distinguishing among information from personal sources, information from sources inside the political system, and information from sources outside the political system. Based on our analysis we can conclude that the three factors that were examined, i.e. the time factor, the framework and types of activities, and the personal background characteristics, all interact. In addition, the study does not substantiate our assumption that if information becomes more accessible, the use of that information will increase.

REFERENCES

Deutsch, K. W. (1966). The Nerves of Government. New York: The Free Press.

Easton, D. (1953). The Political System. New York: Alfred A. Knopf.

Galnoor, I. (1982). Steering the Polity: Communication and Politics in Israel. Beverly Hills: Sage Publications.

Markus, R. (2008). The Members of the Knesset as Consumers of Information. Part of a doctoral thesis carried out under the supervision of Prof. Abraham Diskin, Hebrew University, December 2008.

Simon, H. A. (1947). Administrative Behavior. New York: The Free Press.


THE GRAND PUBLIC SECTOR INFORMATION BAZAAR: LESSONS FROM ICELAND, THE NETHERLANDS, AND AUSTRALIA

Alon Peled
The Hebrew University of Jerusalem

Alon [email protected]

Keywords: Public Sector, Information Sharing, E-Government, Electronic Data Exchange

ABSTRACT

Electronic information sharing failures haunt the public sectors of many countries. Over the past decade, several countries introduced interesting new marketplace-oriented programs to incentivize public sector organizations to improve information exchange. Critics level three challenges against these programs: (1) they undermine the ideal of active citizenship; (2) they invade citizens' privacy; and (3) they work against public organizations' ownership claims over specific information assets. The paper argues that these three challenges represent three trade-offs among political values that politicians, bureaucrats, and citizens must reconsider. The spirited public debate in Iceland over the Health Sector Database Act of December 1998 demonstrates ways to overcome the active citizenship challenge. The fascinating Australian CrimTrac case study illustrates means through which the trade-off between information sharing and privacy concerns could be addressed. The Dutch RINIS case study illustrates how we can overcome the tension regarding who owns public data.

INTRODUCTION: THE INFORMATION SHARING CRISIS THAT DOES NOT GO AWAY

The harshest depictions of, and rage over, IS failures appear in legislative minutes, scholarly articles, governmental investigative reports, and practitioner accounts regarding the public sector. For example, scholars described the lack of information sharing among federal agencies before the Hurricane Katrina disaster and in its aftermath as an example of "appalling lack of coordination and stunning lack of inter-organizational goodwill" (Kapucu & Van Wart, 2008, p. 733). In Britain, in one particularly gruesome case, a man who had been investigated in relation to allegations of eight separate sexual offences from 1995 to 1999 in one police district was vetted by another police district and hired as a school janitor in 2001. Shortly thereafter, in August 2002, he murdered two ten-year-old schoolgirls in his house. Likewise, in Australia, an investigation committee attributed the death of several Northern Territory infants from malnutrition to agencies' reluctance to seek or share relevant information with other agencies. By some scholarly accounts, 80-90% of all IT investments in information integration fail, for reasons that are rarely purely technical in origin (Kamal, 2006, p. 206).


In recent years, a small but growing group of scholars began noting how the treatment of information as corporate assets, and the provision of concrete financial incentives to support the exchange of these assets, improve information sharing in organizations (Akbulut, 2003; Clarkson, Jacobsen, & Batcheller, 2007; Dawes, 1996; Kim & Lee, 2006; Vann, 2005; Willem & Buelens, 2007). However, these studies ignore three critical challenges: citizenship, privacy, and ownership.

RESEARCH QUESTION, OBJECTIVES, AND METHODOLOGY

How can we resolve the citizenship, privacy, and ownership challenges that are raised against marketplace-oriented exchange programs to improve information sharing within the public sector? The history of the Icelandic public debate over the construction of a genetics database is utilized to address the active citizenship concerns. The case study of Australian state police organizations sharing criminal data via CrimTrac is employed to address privacy and big-brother fears. Finally, the Dutch RINIS law and the RINIS system are highlighted to demonstrate how the ownership dilemma can be overcome. Through these three case studies, this paper strives to demonstrate creative solutions to seemingly irreconcilable political and ethical challenges to improving public sector information sharing.

THE CITIZENSHIP CHALLENGE: LEARNING FROM THE ICELANDIC GENETICS DATABASE DEBATE

The democratic challenge claims that the realization of some political ideals, such as active citizenship, demands that they be produced, exchanged, and enjoyed outside of market relations (Anderson, 1993). Accordingly, one scholar proposed that governmental information sharing does not belong to the "regime of management" but rather to the "regime of justice and legitimacy" (Wenjing, 2011, p. 365). Critics might therefore argue that important civic debates, such as that on the proper uses of biometric data, would become obsolete as PSIE teaches citizens that all public data is for sale.

However, the civic debate surrounding the Icelandic Biogenetic Project (1998-2003) demonstrates that the introduction of an incentives-driven public-private information-sharing program can present an opportunity to engage citizens in an important civic debate. This case teaches us how government can commoditize certain public sector information assets whilst granting the public an opportunity to learn about, become engaged in, and impact the final commoditization decision (Pálsson & Harðardóttir, 2002, p. 285).

THE PRIVACY CHALLENGE: GLEANING LESSONS FROM THE AUSTRALIAN CRIMTRAC CASE STUDY

Ethicists may argue that a successful marketplace-oriented exchange program would empower public agencies to link more data together. Data that ought to remain within the confines of a single agency would transition all too easily into the computer systems of other agencies. These ethicists would therefore argue that a marketplace-oriented exchange program would worsen the state's digital command of citizens' lives and invasion of citizens' privacy (Braman, 2006, pp. 126-127).

Nonetheless, the history of the Australian CrimTrac agency (2000-Present) provides a powerful lesson regarding how to balance between the ideal of efficient government and privacy concerns. Working within the police information-sharing domain, CrimTrac established an impressive supply-chain program to improve information sharing in one domain (i.e., the NCHRC background checks program) but refrained from doing so in more privacy-sensitive domains (i.e., the NCIDD DNA-matching program and the ANCOR child protection program).

THE OWNERSHIP CHALLENGE: DISCOVERING INSIGHTS IN THE DUTCH RINIS PROJECT

Scholars explain why data ownership issues must be settled before launching an exchange program (Fedorowicz, Gogan, & Williams, 2006). So, on what legal basis might public sector agencies claim ownership over the information assets they labored so hard to create? Moreover, an attempt to address the data ownership challenge must first and foremost recognize the inherent tension between the claim regarding the public nature of governmental data and the claim that public agencies own this same data. The process of developing an effective data-ownership solution is therefore a balancing act between two competing claims.

The Dutch RINIS project provides a good example of how agencies could exchange information for incentives while protecting their data-ownership claims, within the context of an internal public information commons (Huijboom & Hoogwout, 2004; OECD, 2007; Snellen, 2002). RINIS demonstrates that it is possible to build an effective public sector information sharing system without forcing agencies to surrender data ownership and without "privatizing" data that, as a whole, belongs to the public.

CONCLUSION

Inertia and fear of the unknown characterize objections to constructing new incentives-driven programs to improve public sector information sharing. Frequently, such fear and inertia are expressed as concerns regarding how improved information sharing could negatively impact the values of active citizenship, privacy, and citizens' ownership of all public data. However, a closer empirical examination of three case studies in Iceland, Australia, and the Netherlands reveals that creative solutions to these concerns exist. Ultimately, politicians, bureaucrats, and citizens must debate and strike a new balance among the competing values of efficient government, privacy, information ownership, and active citizenship in order to improve information sharing in the public sector.


REFERENCES

Akbulut, A. Y. (2003). An Investigation of the Factors that Influence Electronic Information Sharing Between State and Local Agencies. Unpublished Dissertation, Louisiana State University, Louisiana.

Anderson, E. (1993). Value in ethics and economics. Cambridge: Harvard University Press.

Braman, S. (2006). Change of State - Information, Policy, and Power. Cambridge, MA: The MIT Press.

Clarkson, G., Jacobsen, T. E., & Batcheller, A. L. (2007). Information Asymmetry and Information Sharing. Government Information Quarterly, 24, 827-839.

Dawes, S. S. (1996). Interagency information sharing: Expected benefits, manageable risks. Journal of Policy Analysis and Management, 15(3), 377−394.

Fedorowicz, J., Gogan, J. L., & Williams, C. B. (2006). The E-Government Collaboration Challenge: Lessons from Five Case Studies. IBM Center for The Business of Government, Networks and Partnerships Series.

Huijboom, N., & Hoogwout, M. (2004). Trust in e-Government Cooperation. In R. Traunmüller (Ed.), Electronic Government (pp. 332-335). Heidelberg: Springer.

Kamal, M. M. (2006). IT innovation adoption in the government sector: identifying the critical success factors. Journal of Enterprise Information Management, 19(2), 192-222.

Kapucu, N., & Van Wart, M. (2008). Making Matters Worse: An Anatomy of Leadership Failures in Managing Catastrophic Events. Administration & Society, 40(7), 711-740.

Kim, S., & Lee, H. (2006). The impact of organizational context and information technology on employee knowledge-sharing capabilities. Public Administration Review, 66(3), 370−385.

OECD. (2007). e-Government Studies: Netherlands. Paris: OECD.

Pálsson, G., & Harðardóttir, K. E. (2002). For Whom the Cell Tolls: Debates about Biomedicine. Current Anthropology, 43(2), 271-301.

Snellen, I. (2002). Electronic Governance: Implications for Citizens, Politicians and Public Servants. International Review of Administrative Sciences, 68(2), 183-198.

Vann, I. B. (2005). Electronic Data Sharing in Public Sector Agencies. In G. D. Garson (Ed.), Handbook of public information systems (pp. 143-153). Boca Raton: Marcel Dekker.

Wenjing, L. (2011). Government information sharing: Principles, practice, and problems — An international perspective. Government Information Quarterly, 28(3), 363-373.

Willem, A., & Buelens, M. (2007). Knowledge Sharing in Public Sector Organizations: The Effect of Organizational Characteristics on Interdepartmental Knowledge Sharing. Journal of Public Administration Research and Theory, 17, 581-606.


THE TIME-ZONE CHALLENGED WORKPLACE: WHAT WE KNOW AND WHAT WE DON’T KNOW

Erran Carmel

Professor of Information Technology and International Business Research

Kogod School of Business, American University, Washington, D.C.

carmel@american.edu

Keywords: time zones, workplace, globalization, coordination.

Distance is dead. Time zones are not.

(Carmel & Espinosa 2011).

For two decades the business and scholarly communities have confounded the shrinking of distance with the eradication of time zone challenges. Essentially, in this narrative, geographic distance and temporal distance are both solved. Furthermore, the myth of "Follow the Sun" has been embraced even as every global knowledge worker lives with the daily challenges of time-zone-induced coordination hardships. We need to know more about the impact of time zones, mostly bad, some good. There is too much myth-making about time zones. I detail three critical time zone research issue areas that we have distilled from our research. We know some things about these issues, but need to know more. The findings detailed below come from my recent co-authored book, Carmel & Espinosa (2011); for brevity this source is not repeatedly cited.

TIMESHIFTING

Millions of knowledge workers are timeshifters now. In India, many software engineers regularly stay late at the office to overlap a bit with the U.S. workday. Even in Brazil, which overlaps significantly with most of its foreign partners to the north, we found pervasive ad hoc timeshifting among software professionals. Timeshifting is rampant because it is viewed as essential to coordination for teams that span time zones. We found this in nearly every organization and project we have studied over the years. Timeshifting occurs for specific events, like meetings, or to open up what we label a convergence window for ad hoc acts of coordination. Ad hoc coordination has been improved via awareness technologies: dashboards, calendars, situation awareness, live video feed. Three intertwined work norms have emerged; the first two are influenced by time zone separation:

- Timeshifting: global workers adjust their working hours, creating one or more overlap windows.
- Scattertime: the scattered, splintered workday, jumping from personal time to work and back. Academic workers, such as professors, have been practicing this for a long time, though not always for purposes of time zone accommodation.
- Nomadism: working anyplace. Nomads work in "third places," like cafes, not their home or office.

Our simple proposition is that timeshifting impacts performance positively. The literature has been silent about de facto time differences, generally examining the official time zone difference (e.g., New York to Israel is 7h) instead of the de facto time difference after timeshifting. Given the preponderance of timeshifting practiced in industry (as documented in our book), we argue that separated teams see improved coordination, leading to improved team performance.
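To make the official-versus-de-facto distinction concrete, here is a small illustrative helper. The function and the 9-to-5 schedules are our own assumptions; only the seven-hour New York-Israel gap comes from the text.

```python
# Illustrative sketch: official vs. de facto overlap between two sites.

def overlap_hours(a_start, a_end, b_start, b_end, b_ahead):
    """Daily overlap window (in hours) between sites A and B.

    a_*/b_* are local working hours; b_ahead is how many hours
    site B's clock runs ahead of site A's.
    """
    # Convert B's local working hours onto A's clock, then intersect.
    b_start_a, b_end_a = b_start - b_ahead, b_end - b_ahead
    return max(0, min(a_end, b_end_a) - max(a_start, b_start_a))

# Official 9-to-5 days in New York (A) and Israel (B, 7h ahead): 1h of overlap.
print(overlap_hours(9, 17, 9, 17, 7))   # -> 1

# If the Israeli site timeshifts to 9:00-19:00, the de facto overlap grows to 3h.
print(overlap_hours(9, 17, 9, 19, 7))   # -> 3
```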

Figure 1: Simple research model of timeshifting.

However, there are also other behavioral and societal implications of timeshifting, captured in Figure 2. The evidence is overwhelming that there are negative effects of night work (psychological and physical health problems, social adjustment problems). There is also strong research evidence that workers who have some control or choice over their schedule are more satisfied.

Figure 2: Broader research model (preliminary)

For the next two sections of this paper, the physical configuration choice is often a strategic consideration. Like the business adage "location matters," it is clear that "time zones matter": the coordination costs are significant on the one hand, and the location benefits are significant on the other. Time zone separation has been treated as a team-level construct affecting team performance, but rarely beyond. Yet time zone considerations are often strategic. Time zone is even creeping into national competitiveness, becoming fashionable as a positioning. For example, in 2009, New Zealand began a public discussion of positioning the country as the best place to do finance, since every financial day begins in NZ. Generally, though, decision makers don't elevate time zones strategically when designing their distributed organization; the locations of individuals, teams, or corporate divisions are determined based on cost, availability, and expertise.

STRATEGY: CREATE CONFIGURATIONS THAT ARE IN TIME-ZONE PROXIMITY

In order to mitigate time zone problems, the strategy in these cases is to arrange the work locations to be time-zone proximate.

Nearshoring is a common strategy in sourcing companies around the world. Nearshoring means sourcing IT service work to a foreign, lower-wage country that is relatively close in distance or time zone, where the customer also expects to benefit from cultural and linguistic proximity.

A second time-zone-related location decision we found is that some companies have instituted an informal rule, the Rule of Two: a maximum of two time zones may be represented in a distributed project team.

Third, we identified Realtime Simulated Co-location (RTSC). The essence of RTSC is to erase both distance and time zone differences as much as is managerially, behaviorally, and technologically possible. Time zone differences are erased by strict schedule alignment (timeshifting) between the distant sites; team members work during the same time window no matter where they are. Meanwhile, distance is erased by always-on audio/video and awareness technologies, such as a shared dashboard of key project data and status.

STRATEGY: TRY NOT TO BE IN TIME-ZONE PROXIMITY: FOLLOW-THE-SUN & ROUND-THE-CLOCK

A disambiguation is needed here between these two similar-sounding terms. Follow-the-Sun is defined as handing off unfinished work to the next site on a daily basis. Round-the-Clock work is about 24-hour coverage. For example, global help desks practice Round-the-Clock. This strategy is quite common and successful. In contrast, Follow-the-Sun is expected to reduce duration; it has not been successful and is quite rare. Round-the-Clock is a type of workflow strategy that leverages time zones to achieve 24-hour coverage. Similar to the local 24/7 supermarket, the goal is efficient staffing across shifts. Call centers spread over the world are staffed with the goal of leveraging time zone separation for continuous service. Round-the-Clock work includes the familiar helpdesk or Tier 1 support. A helpdesk, unlike a full project, handles a task quickly, often in just a few minutes, so there is little dependency between the distant sites. Helpdesks tend to do tasks that are said to be granular. Thus, there is little coordination challenge in handing off work between global sites.

Follow-the-Sun is understood intuitively: hand off work from one site to the next, in shifts. The company can theoretically reduce the project duration by 50% if two shifts are involved; with three sites working three sequential shifts, duration could theoretically be reduced by 67%. Follow-the-Sun has become the great seduction of global work: while you sleep, others are working and getting the job done. The coordination challenge in this approach is huge. Every day there are several transition points at which unfinished work is transferred across sites. Follow-the-Sun has developed into a global myth. There have been relatively few successful cases of Follow-the-Sun over a complete project life cycle. It is simply too hard to sustain for the entire duration of a project. Many have claimed successful Follow-the-Sun projects, but on closer inspection, while these projects were indeed globally dispersed and indeed operated twenty or twenty-four hours a day, project workers did not practice the daily handoff implied by Follow-the-Sun. So, while the projects may have been successful, they did not leverage time zones in any way. Further research can go in several directions. Engineering teams can continue experimenting with Follow-the-Sun. Separately, the daily handoffs need to be examined in global projects to understand where time zone advantages can be achieved.
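As a back-of-the-envelope check on the 50% and 67% figures above: with n sites working sequential shifts, the ideal duration reduction is 1 - 1/n. The sketch below is ours and deliberately ignores the daily handoff and coordination losses that, as argued above, dominate in practice.

```python
# Theoretical Follow-the-Sun speedup only; real projects rarely achieve it.

def ideal_duration_reduction(sites: int) -> float:
    """Ideal fractional duration reduction with `sites` sequential shifts."""
    return 1 - 1 / sites

for n in (2, 3):
    print(f"{n} sites: {ideal_duration_reduction(n):.0%} shorter")
# 2 sites: 50% shorter
# 3 sites: 67% shorter
```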

REFERENCES

Carmel, E., & Espinosa, J. A. (2011). I'm Working While They're Sleeping: Time Zone Separation Challenges and Solutions. Nedder Stream Press.


TEACHING THE REAL USING THE VIRTUAL (TRUV): THE CASE FOR A UNIVERSITY LEVEL INTEGRATIVE COURSE

Ruti Gafni
Tel Aviv–Yaffo Academic College
Open University of Israel
[email protected]

Eran Aqua
Tel Aviv–Yaffo Academic College
[email protected]

Yesha Sivan
Tel Aviv–Yaffo Academic College
Metaverse Labs. Ltd.
[email protected]

Keywords: Virtual World, integrative course, Second Life.

INTRODUCTION

Teaching the Real Using the Virtual (TRUV) is an integrative university-level course that tries to tackle one of the challenges of teaching and learning: building a semester-long experience that can simulate the real world for the students, allowing them to integrate previous knowledge. We use the term "real" (as in "real" world, "real" money, etc.) to denote the experience of the actual world, in contrast with the environment of a classroom or a computer-based simulation. We claim that a virtual world (as used in the proposed course) is an approximation of the real world, offering a unique simulated environment.

Universities are aware of the need to combine knowledge received in various courses using an integrative course that helps students connect ideas and insights. Shi (2006) reported an initiative to promote integrated learning, supporting collaboration via information technologies and interactive communities. Sanchez, Neriz and Ramis (2008) proposed a methodology for integrative courses, in which the students perform an active role that enhances learning and comprehension. This paper describes research in progress, in which we introduce TRUV, a course that combines Virtual Worlds (VW) and active learning. TRUV has been delivered in a few variations since 2006. The paper reviews VWs as suitable environments for integrative courses, describes the course design, and reports on students' responses to a reflective survey. Lastly, we draw some lessons from the initial results.

VWs are defined by a combination of four factors: 3D, Community, Creation, and Commerce (3D3C) (Sivan, 2008): a three-dimensional world in which Communities of real people interact, Creating content, products, and services, and generating real economic value through Commerce. These factors were the foundation of TRUV. A VW can simulate the real world, imitating the behavior of a physical or abstract system, such as an event, situation, or process. Affordable commodity-based products help make sophisticated simulation technologies accessible (Damassa & Sitko, 2010). Second Life (SL) is an online VW in which avatars, figures representing users, can move through the world and interact with each other and with objects (Baker, Wentz & Woods, 2009). SL can offer the user new ways to communicate and cooperate, and learners can be engaged in ways that are not possible outside the VW (Kelton, 2007). More than 100 universities use SL (Baker et al., 2009; Carpenter, 2009) to represent various learning environments and experiences, to allow students creative freedom, or to support research. Faculty use these spaces to hold lectures, display digital artwork, and build environments in varied fields: arts (Ayiter, 2008), medical education (Boulos, Hetherington & Wheeler, 2007), computer programming (Dann & Cooper, 2009), theater (Kuksa, 2009), and psychology (Stark-Wroblewski et al., 2008). VWs have a number of advantages (Foss, 2009), such as the possibility to simulate expensive real resources, to perform collaborative tasks without the need to meet physically, and to assess activities which would carry a degree of danger in the real world.

OBJECTIVES: THE DESIGN OF THE COURSE

This paper proposes to harness the VW to integrate the knowledge acquired in different courses by conducting a real project. Carrying out a project in the real world, from its inception through its planning and implementation to its marketing, is costly and difficult, takes longer, and cannot be performed within one semester. We claim that the 3D3C model makes this integrative course valuable and feasible. Specifically, the combination of the four factors creates a unique integrative learning experience: a full 3D interactive environment that operates 24 hours a day, globally accessible and diverse communities including both customers and suppliers, a wide range of real or imaginary creations of products and services, and real commerce that connects real money on the creation side (buying services) and when selling the outcome. The aim of TRUV is to execute, in SL, the entire life cycle of a project, embracing activities such as idea finding, solution proposing, planning, risk analysis, implementation, and selling the product. Some activities may be performed using subcontractors, who need to be found, contracted, managed, paid, and controlled. The students first need to learn the platform, including an introduction to VWs and SL, creating and customizing avatars (Figure 1), performing basic activities (moving through the VW and communicating with others), and a little programming in SL. Thereafter, the students are ready to start the real project, which is done in pairs, using a real laboratory and a parallel virtual one (Figure 2). The finished projects (Figure 3) are exhibited in the Virtual Museum (Figure 4).

Figure 1 – Avatars


Figure 2 – Possible Design of Real (top) vs. Virtual (bottom) Labs

Figure 3 – Examples of final projects: Kippas, Space Ferry's Wheel, Sparkling Ring, Collara


Figure 4 – Virtual Museum: outside (top), inside (bottom) and empty rack (right)

METHODOLOGY: GAUGING THE PERCEIVED VALUE BY STUDENTS

As part of developing TRUV we strive to gauge students' perceived value of the course and their suggestions for improvement. Thus, a preliminary open questionnaire was administered in five undergraduate courses (a total of 68 responses; about 90% of class participants answered the questionnaire in the last class, after their final presentation), in which the students had to reflect on two questions: (1) their perceived learning benefits, and (2) the improvements the course needs. This preliminary survey is the base for future research which we plan to conduct. We should note that perceived value is defined as the value students report they think they got from the course.

SURVEY RESULTS: WHAT DO STUDENTS THINK?

We assume that the answers to these two questions capture overall student perceptions of the course. Table 1 exhibits the most repeated feedback sentences, clustered (using our own potentially biased clustering) by their meanings. Figure 5 shows the number of students who wrote each clause.


Table 1 – Summary of students' feedback

Question 1: What were the learning benefits?

C1 Programming in SL: Knowledge of programming in SL, objects, scripts, etc.
C2 Abilities of self-learning: Coping with new materials by ourselves, research, comparison of alternatives.
C3 Teamwork: Teamwork (which is not common during studies) and communication between partners.
C4 Familiarization with VWs: Familiarization with VWs, pros and cons of VWs, economics in VWs.
C5 Performing full project: Performing a full project, from initiation to marketing.
C6 Schedule planning and fulfillment: Confrontation with schedule planning and fulfillment in limited time.
C7 Experiencing management skills: Experiencing management skills and capabilities (project planning, risk assessment, presentations, etc.).
C8 Subcontractors management: Recruiting, supervision and management of professionals; dialogue with specialists in fields not known by us.

Question 2: What should be improved?

C9 Technical problems: Technical problems (e.g. Internet and computer speed, technical failures).
C10 Need more technical support: Need more technical support and more script sources.
C11 Want more deepening in VWs: Familiarization with different communities in VWs, interaction with other "citizens".
C12 The course is too short: Must be two semesters.
C13 Too many assignments: Too many assignments.
C14 Need more programming classes: Want more programming classes, in order to make more sophisticated projects.
C15 Programmers availability is poor: Programmers (subcontractors) availability is poor.
C16 The real labs must be open for more hours: Adjusted to students' schedules.


Figure 5 – Number of students for each clause
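The paper does not describe the tallying mechanics behind Figure 5; purely as an illustration, once each free-text response has been coded with the clause labels of Table 1, the per-clause counts could be produced as follows (the response data here is invented).

```python
# Hypothetical reconstruction of the Figure 5 tally; data is invented.
from collections import Counter

# Each coded response lists the clauses (C1-C16) it was judged to mention.
coded_responses = [
    ["C1", "C2", "C5"],
    ["C2", "C3", "C9"],
    ["C1", "C9", "C12"],
]

# Count how many students mentioned each clause.
counts = Counter(clause for response in coded_responses for clause in response)
for clause, n in sorted(counts.items()):
    print(clause, n)
```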

DISCUSSION

The following insights, presented in Table 2, should be taken into consideration in the design and implementation of such a course. The expectations of both students and teachers should be synchronized with realistic timing and resources. For the students, the course seems relatively difficult at first. The teacher must ensure that the planned projects are feasible within the semester (C12), and chaperone the students as they go through the various project phases.

Table 2 – Insights from results

Perceived value: The students were overall able to articulate the outcomes of the course, realizing the integrative character of the course that interlaces previous knowledge, like principles of management, project management, marketing, programming, and more (C5, C6 and C7).

Leadership role: For many students, this course was the first time they could experience the challenges a manager comes up against, like set schedules (C6), the need to recruit and manage subcontractors (C8), and handling risk assessment (C7). The nature of the course facilitated the accomplishment of all the activities needed, without expensive resources, extending the connectivity to almost infinite possibilities, within the time allocated.

The role of programming: The students understood the power of programming (C1, C11, C14 and C15), either by wanting to program by themselves or by contracting programmers. This is a skill needed in this era, where most professions are accompanied by information systems.

Self-learning: Most of the students emphasized the contribution of the course to their self-learning ability (C2). The point is not to learn a subject, but to learn how to learn. The self-learning ability is important for workers in this age of rapid changes.

Collaboration: Teamwork skills (C3) include both the collaboration within the team and the cooperation with subcontractors and suppliers.

Limitations of Virtual Worlds: Use of VWs also has limitations and drawbacks. There is some learning time involved (C1, C4 and C10), especially the first time students are introduced to VWs and have to perform technical assignments to become familiar with the infrastructure (C13).

Infrastructure: Conducting such a course needs an established and solid infrastructure, because unstable servers and networks cause slow work and dissatisfaction among the students (C9). The system needs well-based processing of backups and upgrading of software versions. The real laboratory must be available to the students throughout the course and, if possible, around the clock (C16).

TRUV provides the possibility to integrate knowledge from various courses in order to complete an entire project during one semester. The integrative course is performed according to the 3D3C model. This comprehends implementing the project in a three-dimensional, creative way, allowing relatively easy construction of virtual artifacts, facilitating communication with suitable subcontractors, suppliers, and customers, and enabling real commerce. This is a pilot study that strives to improve the design of a course. In future work we plan to consider further alternatives to the course components and a better definition of the value to the students, perhaps in the direction of a business experience. We believe that teaching the real using the virtual in general, and this course in particular, holds great promise as an answer to the integrative challenge of modern academia.


REFERENCES

Ayiter, E. (2008). Integrative art education in a metaverse: ground<c>. Technoetic Arts: A Journal of Speculative Research, 6(1), 41-53.

Baker, S. C., Wentz, R. K., & Woods, M. M. (2009). Using Virtual Worlds in Education: Second Life® as an Educational Tool. Teaching of Psychology, 36(1), 59-64.

Boulos, M., Hetherington, L., & Wheeler, S. (2007). Second Life: an overview of the potential of 3-D virtual worlds in medical and health education. Health Information & Libraries Journal, 24(4), 233-245.

Carpenter, B.S. (2009). Virtual Worlds as educational experience: Living and learning in interesting times. Journal of Virtual Worlds Research, 2 (1).

Damassa, D.A. & Sitko, T.D (2010). Simulation Technologies in Higher Education: Uses, Trends, and Implications. EDUCAUSE Center for Applied Research, Research Bulletin, 3. Retrieved on 20 September 2011 from: http://net.educause.edu/ir/library/pdf/ERB1003.pdf

Dann, W., & Cooper, S. (2009). Alice 3: Concrete to Abstract. Communications of the ACM, 52(8), 27-29.

Foss, J. (2009). Lessons from learning in virtual environments. British Journal of Educational Technology, 40(3), 556-560.

Kelton, A. J. (2007). Second Life: Reaching into the virtual world for real-world learning. EDUCAUSE Center for Applied Research, Research Bulletin, 17. Retrieved on 20 September 2011 from: http://net.educause.edu/ir/library/pdf/ERB0717.pdf

Kuksa, I. (2009). Virtual reality in theatre education and design practice – new developments and applications. Art, Design & Communication in Higher Education, 7(2), 73-89.

Sanchez, I., Neriz, L., & Ramis, F. (2008). Design and application of learning environments based on integrative problems. European Journal of Engineering Education, 33(4), 445-452.

Shi, D. E. (2006). Technology and Integrative Learning: Enabling Serendipitous Connectivity across Courses. Association of American Colleges and Universities, Peer Review, 8(4), 4-7.

Sivan, Y. (2008). 3D3C Real Virtual Worlds Defined: The immense potential of merging 3D, community, creation, and commerce. Journal of Virtual Worlds Research (special issue: Virtual worlds research: Past, present & future), 1(1).

Stark-Wroblewski, K., Kreiner, D. S., Boeding, C. M., Lopata, A. N., Ryan, J. J., & Church, T. M. (2008). Use of Virtual Reality Technology to Enhance Undergraduate Learning in Abnormal Psychology. Teaching of Psychology, 35(4), 343-348.


TEACHING MANUFACTURING-ORIENTED INFORMATION TECHNOLOGY FOR THE FUTURE FACTORY

Gali Naveh
Ben-Gurion University of the Negev
[email protected]

Adir Even
Ben-Gurion University of the Negev
[email protected]

Lior Fink
Ben-Gurion University of the Negev
[email protected]

Sigal Berman
Ben-Gurion University of the Negev
[email protected]

Keywords: IS education, digital factory, laboratory.

EXTENDED ABSTRACT

Equipping students with the concepts, knowledge, skills, and perceptions needed to cope with the requirements of the position he or she will hold upon graduation is always challenging, and the field of Information Technology (IT) is no exception. The future factory is an important subject matter presenting such a challenge. Boyer, Ward and Leong (1996), citing others, define the factory of the future as based on advanced manufacturing technologies (AMT) and the economies of scope that they engender: paperless, almost workerless, and able to produce a large variety of products cost-effectively in lot sizes as small as one. Almost a decade later, Bracht and Masurat (2005) define the digital factory in similar terms: fully computer-aided planning, production and operation of the factory, networked through a central database. Sackett and McCluney (1992) state that in order to succeed in such a dynamic environment, an engineering graduate must develop a holistic view of the business process. Zuehlke (2010) states that, apart from the vision of a deserted factory, the changes we face today are similar to those we faced two decades ago.

Hands-on teaching is one of the best ways to convey this complex industrial environment (Dessouky, Verma, Bailey, & Rickel, 2001), but constructing and maintaining a suitable physical system that facilitates such experience is not simple. The major difficulties stem from the high costs and complexity of hardware and software on the one hand, and the need to augment production concepts with concepts from different disciplines (e.g., information systems and human factors) on the other. Information systems have evolved considerably in the past two decades, particularly in directions that require cross-organizational standardization and integration, which further increases the complexity of implementing them in production-oriented teaching settings. In light of these challenging needs and constraints, it is somewhat surprising that relatively little attention is devoted to IT education in supply management settings. Medina-Lopez, Alfalla-Luque and Marin-Garcia (2011), who analyzed research on operations management teaching, state that there is a dearth of articles on teaching in the main operations management journals.

This submission aims to fill some of this gap by presenting the contribution of an educational future-factory facility to IT education. Building on fruitful previous experience with the Computer Integrated Manufacturing (CIM) laboratory established in 1993 (Berman, Edan, & Jamshidi, 2002, 2003; Berman, Edan, & Rabinowitz, 2003), which faced similar challenges, the Department of Industrial Engineering and Management (IEM) at Ben-Gurion University of the Negev began a project of building the Integrated Manufacturing Technology (IMT) laboratory, a future-factory facility (Berman & Fink, 2011). Concepts underlying the IMT laboratory come from the fields of production planning, intelligent automation, information systems, and human factors. Its main goal is to facilitate teaching and research on the integration of future-factory concepts, cutting-edge shop-floor automation technologies, and organizational information systems.

The design of the IMT laboratory aims to facilitate demonstrations of the links between intangible information systems and the physical, real-life organizational environment, providing students of information systems courses with an in-depth understanding of these links. In this laboratory, abstract notions taught in class (databases, extract-transform-load (ETL), data warehouses, business intelligence (BI), data mining, etc.) will be turned into visible and tangible concepts for the students. In the future, business information systems such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Supply Chain Management (SCM) will be implemented and will further demonstrate the cross-organizational nature of information systems.

The first IT course to enter the IMT laboratory was Enterprise Information Architecture (BI), an elective 4th-year course. The course focuses on information architecture, specifically the infrastructure and applications that generate information resources according to the business needs of the organization. The integration of a business intelligence course in the factory setting was not trivial, and the linkage was carefully planned. Forty-four students, who took the course in the winter semester of the 2012 academic year, entered the laboratory toward the end of the semester for a short presentation of the future factory. Since the laboratory is still under construction, a large portion of the demonstration was 'virtual' (i.e., an explanation of what will be built and how it will work). At the end of the visit, the lecturer and his assistants presented the students with an assignment based on the laboratory activity. They opened with a presentation of simulated incoming calls from customers with feedback regarding the product they received (positive and negative), together with manufacturing and business data from a simulation model of the factory (employee working hours and machine operation, salesman activity, material quality, etc.) programmed using the Arena™ (Rockwell) simulation software. The students were asked, as a graded course assignment, to analyze the data and identify the source of the customers' complaints.

In the following weeks, the students were asked to fill out a questionnaire regarding their satisfaction with the activity. Twenty-seven students (61%) responded, with generally positive feedback and a strong indication that they would like the visit to be more hands-on, i.e., to see the full production process, walk around the laboratory, touch things, etc. This feedback is in line with the literature on IT education as well as with the IMT laboratory objectives. In the coming semesters the IMT laboratory will be integrated into the curriculum of additional IT courses, and as the laboratory construction progresses, more hands-on activity will take place, aiming to realize the full potential of the IMT laboratory in demonstrating abstract concepts of information technology in a manufacturing environment.
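As a rough illustration of the kind of analysis the assignment calls for, the sketch below joins simulated customer-feedback records with simulated production data and ranks machines by complaint rate. The file and column names are hypothetical stand-ins; the actual format of the Arena simulation output is not specified in this abstract.

import pandas as pd

# Hypothetical exports from the factory simulation model
feedback = pd.read_csv("customer_feedback.csv")   # columns: order_id, sentiment
production = pd.read_csv("production_log.csv")    # columns: order_id, machine, shift, material_batch

merged = feedback.merge(production, on="order_id")

# Complaint rate per machine: an outlier machine would point to a likely
# source of the negative customer feedback.
complaint_rate = (
    merged.assign(complaint=merged["sentiment"].eq("negative"))
          .groupby("machine")["complaint"]
          .mean()
          .sort_values(ascending=False)
)
print(complaint_rate)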

ACKNOWLEDGMENT

The IMT laboratory is supported by the Paul Ivanier Center for Robotics Research and Production Management, by Intel Israel (Intel mentor: Dr. Adar Kalir), and by the European Community's Seventh Framework Program under grant agreement no. FP7-284928 (ComVantage). The authors wish to thank Gil Baron, Ido Zitelny and Idan Cohen for their assistance in preparing the BI course material.


REFERENCES

Berman, S., Edan, Y., & Jamshidi, M. (2002). Decentralized Autonomous AGV system for Material Handling. International Journal of Production Research, 40(15), 3995-4006.

Berman, S., Edan, Y., & Jamshidi, M. (2003). Decentralized Autonomous Automatic Guided Vehicles in Material Handling. IEEE Transactions on Robotics and Automation, 19(4), 743-749.

Berman, S., Edan, Y., & Rabinowitz, G. (2003). CIM-NEGEV a computer integrated manufacturing teaching environment. Paper presented at the WFEO/ASEE e-Conf. of the American Society for Engineering Education.

Berman, S., & Fink, L. (2011). Integrated manufacturing technology laboratory. Paper presented at the International Conference on Production Research (ICPR), Stuttgart, Germany.

Boyer, K. K., Ward, P. T., & Leong, G. K. (1996). Approaches to the factory of the future. An empirical taxonomy. Journal of Operations Management, 14(4), 297-313. doi: 10.1016/s0272-6963(96)00093-9

Bracht, U., & Masurat, T. (2005). The Digital Factory between vision and reality. Computers in Industry, 56(4), 325-333. doi: 10.1016/j.compind.2005.01.008

Dessouky, M. M., Verma, S., Bailey, D. E., & Rickel, J. (2001). A methodology for developing a web-based factory simulator for manufacturing education. IIE Transactions, 33(3), 167-180. doi: 10.1080/07408170108936820

Medina-Lopez, C., Alfalla-Luque, R., & Marin-Garcia, J. A. (2011). Research in operations management teaching: trends and challenges. Intangible Capital, 7(2).

Sackett, P., & McCluney, D. (1992). Inter enterprise CIM—a mechanism for graduate education. Robotics and Computer-Integrated Manufacturing, 9(1), 9-13. doi: 10.1016/0736-5845(92)90014-w

Zuehlke, D. (2010). SmartFactory—Towards a factory-of-things. Annual Reviews in Control, 34(1), 129-138. doi: 10.1016/j.arcontrol.2010.02.008


FACTORS INFLUENCING PERCEIVED BENEFITS AND USER SATISFACTION IN KNOWLEDGE MANAGEMENT SYSTEMS

Yael Karlinsky-Shichor
Tel Aviv University
[email protected]

Moshe Zviran
Tel Aviv University
[email protected]

Keywords: knowledge management systems; perceived benefits; user satisfaction; knowledge quality; information quality

INTRODUCTION

Information systems (IS) serve as important enablers of business processes in organizations. Still, there is no obvious positive relation between IT investments and business performance: on the one hand, top-performing companies in terms of revenue, return on assets, and cash-flow growth spend less on IT on average than other companies; on the other hand, the highest IT spenders typically under-perform compared with their best-in-class peers (Malhotra 2005). Investing in cutting-edge technologies is not enough to gain business advantage. Moreover, socio-psychological factors such as users' motivation and commitment play an important role in determining IT performance (Strassmann 1997, Malhotra and Galletta 2003). The IS literature long ago recognized users' perceptions and attitudes as factors of system effectiveness (DeLone and McLean 1992). In particular, a comprehensive work by DeLone and McLean (2003) identified three quality dimensions affecting use behavior and user satisfaction in IS: system quality, information quality and service quality. This research focuses on knowledge management systems (KMS), special types of information systems that support and enhance knowledge processes, and examines user satisfaction and perceived benefits from using these systems.

Knowledge management (KM) is the practice of selectively applying knowledge from previous experiences of decision making to current and future decision making activities with the express purpose of improving the organization’s effectiveness. It is a practice that creates a synergy of the technical information processing capabilities with the innovative and creative capabilities of human and social elements in the organization (Malhotra, 2000). KMS are defined as the systems that are created to facilitate the capture, storage, retrieval and reuse of knowledge (Jennex 2005).

A study by the Economist Intelligence Unit (Ernest-Jones, 2005) found that KM tools are the most important technology for achieving strategic goals, improving decision-making processes and competing for customers. While IS handle the operation and management of the organization, KMS often add to them by integrating seemingly unconnected or hidden pieces of information to create insights that are relevant to the user in a specific context. They do so by using sophisticated technical methods (e.g., text analysis), or by facilitating state-of-the-art Web 2.0 tools for sharing knowledge and collaborating. Another important difference between IS and KMS is that the latter are essentially social systems. Their functionality and structure often depend on human participation and social interaction (wikis, to name one example).


Defining and measuring KM success is important to provide a basis for company valuation, to stimulate management performance, and to justify investments in KM activities (Jennex et al. 2007). It is difficult to directly tie knowledge management practices to organizational competitiveness due to the many intervening variables, as well as the intangible and long-term nature of KM benefits (Bots and de Bruijn 2002, Lee et al. 2005, Kulkarni et al. 2006). While much research has been done on IS users, less is available on KMS users, particularly studies that incorporate actual knowledge functions in their measurement tools.

A model of the factors influencing perceived user benefits from, and user satisfaction with, KMS is presented in this work (see Figure 1). The model originates in the D&M IS success model (DeLone and McLean 2003), elaborating it with conceptions of knowledge and knowledge management, as well as concepts drawn from technology acceptance models. A new multi-item measurement scale is proposed for the KM system quality factor. The model was validated by surveying employees working in the knowledge-intensive software industry.

Figure 1: Factors influencing perceived benefits and user satisfaction in KMS


METHODOLOGY

Respondents were requested to answer online questionnaires: one for employees using KMS and another for the organization's KM manager (if such a function existed) or the IT manager. The user questionnaire is based on existing measures from the IS/KMS literature. The IT/KM manager questionnaire was created for the purpose of this study and was validated, prior to use, by an expert panel of nine academic advisors and practitioners in the IS field.

The study population is composed of employees using KMS in an organizational context and the IT/KM managers of the respective organizations. Ten Israeli hi-tech companies or Israeli branches of multinational companies participated in the study. The usable sample of 100 consisted of 34 females and 66 males, most of whom are research and development employees. The organizations' KM profile is shown in Table 1.

Table 1: Knowledge management profile of organizations

KM characteristic                                            Frequency/total
Organizational function in charge
    KM manager                                               3/10
    IT manager                                               6/10
    IT/KM manager (unified role)                             1/10
Portal implementation                                        9/10
    Internal communication                                   8/9
    External communication                                   3/9
    Single access point to org. applications                 5/9
    Document management                                      5/9
    Collaboration space                                      7/9
    Experts / best practices directory                       4/9
Wiki implementation                                          6/10
    Internal communication                                   2/6
    External communication                                   0/6
    Document management                                      1/6
    Teams collaboration                                      2/6
    Experts / best practices directory                       3/6
Social networks or instant messaging tool implementation     8/10
Other knowledge management tools1                            7/10

1 Mostly collaboration and sharing tools (MS SharePoint, forums, blogs) and internal and external technical knowledge bases.


RESULTS

Pearson correlations were calculated to measure the dependencies between the variables. To assess the structural model, the dependent variables were regressed on those independent variables that showed significant correlations with them. Path coefficients and R2 values are shown in Figure 2. Non-significant correlations are shown as dashed lines.

KM level had a significantly positive effect on user satisfaction, but technical resources and systems linkages had no such effect. None of the sub-constructs of system quality were found to affect perceived benefits from the system.

Increased knowledge quality of the system was found to be associated with increased perceived benefits to the user as well as increased user satisfaction. Seventy-three percent of the variance in user satisfaction was explained by KM level and knowledge quality. The hypotheses regarding the effect of user competence and organizational attitude were not supported.

** significant at the 0.01 level; * significant at the 0.05 level

Figure 2: Hypotheses testing results
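A minimal sketch of the analysis reported above – Pearson correlations followed by regressing a dependent variable on its significantly correlated predictors – might look as follows in Python. The data file and construct names are hypothetical stand-ins for the survey measures.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("kms_survey.csv")  # one row per respondent, construct scores as columns

# Pearson correlations between the model variables
print(df[["km_level", "knowledge_quality", "user_satisfaction"]].corr())

# Regress user satisfaction on the predictors that correlated significantly with it
X = sm.add_constant(df[["km_level", "knowledge_quality"]])
model = sm.OLS(df["user_satisfaction"], X).fit()
print(model.params)    # path coefficients
print(model.rsquared)  # share of explained variance (0.73 reported above)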


ANALYSIS

An IDC study2 found that knowledge workers spend 15-30% of their time seeking specific information and that these searches are successful less than 50% of the time. The purpose of KM tools is not only to enhance the sharing and circulation of knowledge among employees, but to do so efficiently and effectively (e.g., minimizing search and query times, keeping knowledge updated and reliable). The research model includes factors designed to support these needs, the most important of which are the addition of KM level and systems linkages as key characteristics of system quality, and the adaptation of the information quality construct to specific knowledge characteristics, mainly context and linkages.

In adapting the knowledge quality construct to the KMS model, the classic information/knowledge dimensions are measured (e.g., consistency, importance), and characteristics of context and linkages to experts or information sources are added. The context of the knowledge provided and the quality of the linkages are important because (1) the value of information/knowledge may change depending on the circumstances, and (2) employees face a deluge of information and spend considerable time finding information or the right person to ask. The results show that knowledge quality has a significant positive influence on perceived benefits as well as on user satisfaction, providing further empirical evidence of the importance of adopting knowledge characteristics when dealing with KMS.

KM level was added to the research model as a sub-construct of system quality, with a new measure developed based on the knowledge functions: acquisition, retention, maintenance, and search and retrieval. Judicious implementation of these functions is a prerequisite for knowledge richness (through acquisition), correctness and reliability (through retention and maintenance), and ease of access and system usefulness (through search and retrieval mechanisms and display forms). The high reliability of the KM level construct (Cronbach's α = 0.9) allowed its utilization in the measurement model. It was found to have a significantly positive effect on satisfaction with the KMS, an important result indicating that unique knowledge functions should be measured when predicting users' attitudes toward KMS. No correlation was found between KM level and perceived benefits, perhaps because the measure deals mainly with technical specifications that relate more to general functionality and user experience – typically satisfaction dimensions – than with direct means of enhancing job performance – typically perceived-benefit dimensions.
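Cronbach's alpha, used above to establish the reliability of the new KM-level scale, can be computed directly from an item-score matrix. The sketch below is a generic implementation with simulated data, not the study's own code.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated example: six items driven by one latent factor yield a high alpha,
# comparable to the 0.9 reported for the KM-level construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
items = latent + rng.normal(scale=0.5, size=(100, 6))
print(round(cronbach_alpha(items), 2))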

The research model hypothesized that in order to broaden the span of available knowledge and produce value to users, the infrastructure should also be well linked, i.e., systems in the organization should intercommunicate based on common interfaces and shared object representations. The hypothesis was not supported, probably due to issues with the scale, whose reliability was barely acceptable (α = 0.65). Nevertheless, systems linkages seem to be an important measure of the overall KM infrastructure of the organization and should be included in the model using a better scale.

Viewing the knowledge management profile of the organizations, it is immediately apparent that most organizations do not have a formally defined KM manager. Organizations often disband the chief knowledge officer (CKO) role when knowledge management is subsumed under information technology (Desouza and Raider 2006). The existence of a CKO or KM manager is an indication of the importance of KM in the overall strategy of an organization. It may indicate, among other things, the organizational attitude to KM, as one of the CKO's roles is to encourage the creation of a knowledge-intensive environment and the utilization of knowledge and knowledge tools (Earl and Scott 1999). The absence of a CKO in most organizations may explain the fact that the organizational attitude to KM was not significantly correlated with either of the dependent variables.

2 Cited by http://freshconsulting.com/blog/enterprise-2-0-technology-delivers-more-efficiency-4-of-10

CONCLUSION

The need for adjustments to the IS success model when applying it to KMS was confirmed by the empirical results. Though knowledge and information are sometimes interchangeable, knowledge has certain unique characteristics that define its quality, mainly relating to linkages and context. Knowledge is of value only if it is produced in the right context. Moreover, much of the value of KM tools lies in the mapping of best practices and experts in the organization. At the measurement level, a new measure for KM level was introduced, based on Stein and Zwass's (1995) mnemonic functions. The reliability of the new measure is high, and it showed a significant effect on user satisfaction. It can serve as a basis for the development and validation of a KMS quality tool that will include the measurement of all aspects of system quality from the knowledge perspective.


REFERENCES

Bots, P.W.G. and de Bruijn, H. (2002). Effective knowledge management in professional organizations: Going by the rules. Proceedings of the 35th Hawaii International Conference on System Sciences, IEEE Computer Society Press.

DeLone, W.H. and McLean, E.R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), pp. 60-95.

DeLone, W. H. and McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), pp. 9-30.

Desouza, K.C. and Raider, J.J. (2006). Cutting corners: CKOs and knowledge management. Business Process Management Journal, 1(2), pp. 129-134.

Doll, W.J. and Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(2), pp. 259-274.

Earl, M.J. and Scott, I.A. (1999). What is a chief knowledge officer? Sloan Management Review, 40(2), pp. 29-38.

Ernest-Jones, T. (2005). Managing Knowledge for Competitive Advantage. The Economist Intelligence Unit.

Jennex, M. E. (2005). What is knowledge management? International Journal of Knowledge Management, 1(4), pp. 1-4.

Jennex, M.E. and Olfman, L. (2006). A model of knowledge management success. International Journal of Knowledge Management, 2(3), pp. 51-68.

Jennex, M. E., Smolnik, S. and Croasdell, D. (2007). Towards defining knowledge management success. Proceedings of the 40th Annual Hawaii International Conference on System Science.

Kulkarni U.R., Ravindran S. and Freeze R. (2006). A knowledge management success model: Theoretical development and empirical validation. Journal of Management Information Systems, 23(3), pp. 309-347.

Lee, K. C., Lee, S. and Kang, I. W. (2005). KPMI: Measuring knowledge management performance. Information & Management, 42, pp. 469-482.

Malhotra, Y. (2000). From information management to knowledge management: beyond the 'Hi-tech hidebound' systems. In K.Srikantaiah & M.E.D Koenig (Eds.), Knowledge Management for the Information Professional. Medford: N.J.: Information Today Inc., pp. 37-61.

Malhotra, Y. (2005). Integrating knowledge management technologies in organizational business processes: Getting real time enterprises to deliver real business performance. Journal of Knowledge Management, 9(1), pp. 7-28.

Malhotra, Y. and Galletta, D.F. (2003). Role of commitment and motivation in knowledge management systems implementation: Theory, conceptualization, and measurement of antecedents of success. Proceedings of the Hawaii International Conference on Systems Sciences (HICSS 36), available at: www.brint.org/KMSuccess.pdf

Munro, M.C., Huff, S.L., Marcolin B.L. and Compeau D.R. (1997). Understanding and measuring user competence. Information & Management, 33, pp. 45-57.

Stein, E. W. and Zwass, V. (1995). Actualizing organizational memory with information systems. Information Systems Research, 6(2), pp. 85-117.

Strassmann, P. (1997). The Squandered Computer: Evaluating the Business Alignment of Information Technologies. Information Economics Press, New Canaan, CT.

Thompson, R.L., Higgins, C.A. and Howell, J.M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), pp. 124-143.

Wu, J.H. and Wang, Y.M. (2006). Measuring KMS success: A re-specification of the DeLone and McLean model. Information & Management, 43(6), pp. 728–739.


MARKETERS IN SOCIAL MEDIA: DISSEMINATING INFORMATION THROUGH A TWITTER PROFESSIONAL COMMUNITY OF PRACTICE

Ina Blau Department of Education & Psychology,

The Open University of Israel [email protected]

Tami Neuthal Shoham Center,

The Open University of Israel [email protected]

INTRODUCTION

Online professional communities of practice have recently moved to social media platforms, such as Twitter. This paper investigates an Israeli community of people working in the fields of PR and marketing and using Twitter to enhance their professional activities.

The process of adopting new technologies can be described, in terms of the Diffusion of Innovation Theory (Rogers, 2003), by individual differences among the participants. This approach suggests that adoption of an innovation over time is normally distributed: from innovators (2.5%) and early adopters (13.5%), through early (34%) and late majority (34%), to laggards (16%). The theory offers insights for modeling the entire life cycle of innovation adoption (Chang, 2010).
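As an illustration of Rogers' adopter categories, the sketch below classifies members by how many standard deviations their adoption time lies from the mean adoption time – the standard construction behind the 2.5/13.5/34/34/16 percentages. The data are simulated.

import numpy as np

def rogers_categories(adoption_times: np.ndarray) -> dict:
    mu, sd = adoption_times.mean(), adoption_times.std(ddof=1)
    cutoffs = [mu - 2 * sd, mu - sd, mu, mu + sd]
    labels = ["innovators", "early adopters", "early majority",
              "late majority", "laggards"]
    idx = np.digitize(adoption_times, cutoffs)
    return {label: int((idx == i).sum()) for i, label in enumerate(labels)}

# Normally distributed adoption times approach 2.5% / 13.5% / 34% / 34% / 16%.
times = np.random.default_rng(1).normal(size=10_000)
print(rogers_categories(times))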

Looking at the level of participation or content contribution at a single point in time, research shows that only a small fraction of Twitter community members are active content contributors, while the majority are mostly consumers (Forkosh-Baruch & Hershkovitz, 2012). Applying the Pareto Law to the behavior of online users (Persky, 1992), the distribution ranges near the 20:80 rule: 20% of the participants produce approximately 80% of the content. This "long tail" distribution of online participation is generic and restricted neither to the platform nor to the participants' age (Blau, 2011).
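Checking how close a community comes to the 20:80 rule is straightforward once per-member activity counts are available; in the sketch below the counts are simulated from a heavy-tailed distribution (with 84 members, the top 20% is 17 members, as in the community analyzed later in this paper).

import numpy as np

def top_share(counts: np.ndarray, top_frac: float = 0.2) -> float:
    counts = np.sort(counts)[::-1]          # most active members first
    k = max(1, int(round(top_frac * len(counts))))
    return counts[:k].sum() / counts.sum()

tweet_counts = np.random.default_rng(2).pareto(a=1.2, size=84) + 1
print(f"Top 20% of members produce {top_share(tweet_counts):.0%} of the content")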

Since structural connections between Twitter users are directed, users can follow other participants without being followed back (Boyd et al., 2010). This platform therefore allows exploring the behavior of Twitter users in terms of the Uses and Gratifications approach (Rubin & Bantz, 1987), as a relationship between the investment in the community through participation and gratification mechanisms – different forms of influencing the audience (Blau & Neuthal, 2012).

Study Goals and Hypotheses

This study investigates an Israeli community on Twitter that connects professionals in the fields of PR, social media and digital marketing. We explore (1) the evolution of the community in terms of the Diffusion of Innovations Theory (Rogers, 2003), (2) the rate of active participation in terms of the Pareto Law, and (3) the relationships between participation and gratification mechanisms of influencing the audience in terms of the Uses and Gratifications approach (Rubin & Bantz, 1987).

We hypothesized that:

(1) The evolution of the community would be consistent with the Diffusion of Innovations Model (Rogers, 2003)


(2) Consistent with the Pareto Law, the level of active participation by tweeting would be approximately according to the 20:80 rule

(3) Consistent with the Uses and Gratifications framework (Rubin & Bantz, 1987) and with empirical findings regarding participation in other social network platforms (Blau et al., 2009; Zuckerman et al., 2009) and in other communities of practice on Twitter (Blau & Neuthal, 2012), the level of participant investment in the community (i.e., participation by tweeting and following others) would correlate with gratification mechanisms (i.e., influence on the audience).

METHOD

The participants were 84 public accounts of Israeli Twitter users working in the fields of PR and social media marketing, with 1437 structural connections (edges) among them. The time frame was 5 years: March 2007 to March 2012. Participation was measured by tweeting and following others. Influence on the audience was measured by the degree of centrality in the community network – PageRank (Khrabrov & Cybenko, 2010; Subbian & Melville, 2011; Weng, Lim, Jiang, & He, 2010) – by the number of followers, an indicator of a user's popularity (Cha et al., 2010), and by the number and percentage of tweets marked as favorites by others (i.e., the number of favorite tweets divided by the total number of tweets), which can indicate the quality of content contribution. The data was extracted and visualized with the NodeXL application and analyzed using SPSS 19.
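The network measures named above can be reproduced from an exported edge list with standard tools; the sketch below uses networkx rather than NodeXL, and the toy edge list stands in for the harvested follower data.

import networkx as nx

# Directed edges: follower -> followed (toy stand-in for the exported data)
edges = [("user_a", "user_b"), ("user_c", "user_b"), ("user_b", "user_a")]
G = nx.DiGraph(edges)

pagerank = nx.pagerank(G)        # centrality in the community network
followers = dict(G.in_degree())  # popularity: how many follow each account
followed = dict(G.out_degree())  # participation: how many each account follows
print(pagerank, followers, followed)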

RESULTS AND DISCUSSION

Figures 1-4 present the evolution of the community over five years, from its beginning in March 2007 until March 2012.

Figure 1: The community in the 1st year (March 2008)


Figure 2: The community in the 2nd year (March 2009)

Figure 3: The community in the 3rd year (March 2010)


Figure 4: The community in the 5th year (March 2012)

As can be seen from the figures, members of all three sub-groups of this community (represented by different geometric shapes) joined the community from the beginning of its existence. In terms of Rogers' (2003) theory, innovators and early adopters joined the community during the first year of the investigation (Figure 1), the early majority during the second year (Figure 2), and the late majority during the third year (Figure 3). Only six new members joined the community in the last two years of the investigation, and few new connections between existing users were established (Figure 4). This rate of adopting Twitter is significantly faster than that of the Israeli professional community practicing Learning/Information Technologies (Blau & Neuthal, 2012), which started adopting Twitter in the same month: about 3 years compared to almost 5 years. Although some of the professional needs of the two communities are similar, they appear to dictate quite different social norms of adopting innovations.

Table 1 shows descriptive statistics for tweeting and following others.

Table 1: Participation - descriptive statistics

                 Tweets       Followed
Median           1703.50      392.50
Mean             5103.65      1288.93
Std. deviation   12031.89     3815.02
Skewness         5.76         7.80
Minimum          10.00        8.00
Maximum          95195.00     33924.00

As can be seen from the data, the community members were active, both in terms of content creation by tweeting and content consumption by following others. However, further analysis shows that participation was asymmetrical: similar to the 20:80 rule, 17 members (20.2%) disseminated 73.4% of the community tweets.

Table 2 presents the Spearman correlations between the participation and influence on the audience (n=84).

Table 2: Correlations between the participation and influence on the audience

                     PageRank    Number of    Number of    % of
                                 followers    favorites    favorites
Number of tweets     .57***      .74***       .73***       .11
Number of followed   .72***      .79***       .32**        -.04

As the data show, participation in the professional community of practice was highly gratified by influence on the audience, whether the influence was visible to others, as with the number of followers, or invisible, as with the degree of centrality in the community network measured by PageRank. These results differ from those of the Learning Technologies community of practice, in which only medium correlations were found between participation and influence on the audience that is invisible to other Twitter users. It seems that for this community, influencing the audience is an essential goal, not merely a matter of social approval. Participation and content quality were related when quality was measured as the number of tweets marked by others as favorites. However, no significant correlations were found when tweet quality was measured as the percentage of favorites, suggesting that the relation with the number of favorites is a byproduct of a user's activity level.
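For completeness, the rank correlations in Table 2 are straightforward to compute with scipy; in the sketch below the two per-member measures are simulated so as to be correlated by construction.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
tweets = rng.pareto(a=1.5, size=84)
followers = tweets * rng.lognormal(sigma=0.5, size=84)  # correlated with activity

rho, p = spearmanr(tweets, followers)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")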

CONCLUSIONS

This study explored a Twitter professional community practicing PR and social media marketing. The evolution of the community corresponded to the bell curve predicted by the Diffusion of Innovation Theory, but with low dispersion, i.e., it was very fast – about 3 years. Despite the members' very high activity level, active participation is still asymmetrical and similar to the 20:80 rule: 20% of the participants contribute about 73% of the community tweets. Users' investment through participation is highly gratified by both visible and invisible forms of influencing the audience – the number of followers and the degree of centrality in the network. However, no relationships were found between participation and the quality of tweets as rated by other members.

REFERENCES

Blau, I. (2011). E-collaboration within, between, and without institutions: Towards better functioning of online groups through networks. International Journal of e-Collaboration, 7, 22-36.

Blau, I., & Neuthal, T. (2012, February). Social networking behavior in Israeli community practicing Learning and Information Technologies on Twitter. Paper presented at the International Symposium on Cyber Behavior – CB2012. Taipei, Taiwan.

Boyd, D., Golder, S., & Lotan, G. (2010). Tweet, tweet, retweet: Conversational aspects of retweeting on Twitter. Paper presented at HICSS-43. IEEE Computer Society. Kauai, HI. Retrieved March 1, 2012 from http://www.computer.org/portal/web/csdl/doi/10.1109/HICSS.2010.412

Cha, M., Haddadi, H., Benevenuto, F., & Gummadi, K. P. (2010). Measuring user influence in Twitter: The million follower fallacy. Paper presented at the 4th International Conference on Weblogs and Social Media (ICWSM). Retrieved March 1, 2012 from http://www.aaai.org/ocs/index.php/ICWSM/ICWSM10/paper/download/1538/1826

Chang, H.-C. (2010). A new perspective on Twitter hashtag use: Diffusion of innovation theory. In Proceedings of the American Society for Information Science and Technology - ASIST2010 (Vol. 47, pp. 1–4). Pittsburgh, PA: Wiley Online Library.

Forkosh-Baruch, A., & Hershkovitz, A. (2012). A case study of Israeli higher-education institutes sharing scholarly information with the community via social networks. The Internet and Higher Education, 15(1), 58-68.

Khrabrov, A., & Cybenko, G. (2010). Discovering influence in communication networks using dynamic graph analysis. In Proceedings of the 2010 IEEE Second International Conference on Social Computing (pp. 288-294).

Leavitt, A., Burchard, E., Fisher, D., & Gilbert, S. (2009). The influentials: New approaches for analyzing influence on Twitter. A publication of the Web Ecology Project. Retrieved October 17, 2011 from http://tinyurl.com/lzjlzq

Persky, J. (1992). Retrospectives: Pareto's law. Journal of Economic Perspectives, 6, 181-192.

Rogers, E.M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

Rubin, A. M., & Bantz, C. R. (1987). Utility of videocassette recorders. American Behavioral Scientist, 30(5), 471-485

Subbian, K., & Melville, P. (2011). Supervised rank aggregation for predicting influence in Twitter. In Proceedings of the 2011 IEEE 3rd International Conference on Social Computing - Privacy, Security, Risk and Trust - PASSAT (pp. 661-665). IBM T.J. Watson Research Center, Yorktown Heights, NY, USA.

Weng, J., Lim, E.-P., Jiang, J., & He, Q. (2010). TwitterRank: Finding topic-sensitive influential Twitterers. Paper presented at the WSDM'10. NY, USA: ACM. Retrieved October 17, 2011 from http://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=1503&context=sis_research

Zuckerman, O., Blau, I., & Monroy-Hernández, A. (2009). Children's participation patterns in online communities: An analysis of Israeli learners in the Scratch online community. Interdisciplinary Journal of E-Learning and Learning Objects, 5, 263-274.


SOCIAL DYNAMICS IN CRITICAL MASS FORMATION IN INFORMATION SHARING ONLINE

Hila Koren
University of Haifa
School of Management
[email protected]

Ido Kaminer
Technion, Haifa
Physics Department
[email protected]

Daphne Raban
University of Haifa
School of Management
[email protected]

INTRODUCTION

According to the Diffusion of Innovations theory, diffusion within a social system follows an S-shaped curve in which smooth continuous progression takes place from one adopter segment to another. Critical mass, according to Rogers (1995), is the point in time at which the rate of adoption is fastest, i.e., the number of new adopters is increasing most rapidly. This point occurs when about 16% of the individuals in the social system have adopted an innovation. Critical Mass Theory (Oliver et al., 1985), on the other hand, suggests that reaching the critical mass point does not necessarily imply vast diffusion. The dynamics leading toward this point have implications for the outcome that follows the critical mass point and thus affect overall diffusion. In order to better understand the factors that contribute to critical mass formation and its sustainability, we employ the process approach offered by Critical Mass Theory. Observing the participatory culture characterizing the web, we introduce the factor of reinvention from the Diffusion of Innovations theory (Rogers, 1962) as a contributor to the diffusion of information online.
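Operationally, the critical mass point described above can be located as the steepest point of the cumulative adoption curve, i.e., where the number of new adopters per step peaks. The sketch below uses a symmetric logistic curve, for which this peak falls at 50% adoption; where it falls for empirical, typically asymmetric curves depends on the curve's shape.

import numpy as np

t = np.linspace(-6, 6, 200)
cumulative = 1 / (1 + np.exp(-t))    # logistic S-shaped adoption curve
new_adopters = np.diff(cumulative)   # adoption rate per step
peak = int(np.argmax(new_adopters))  # step with the fastest adoption
print(f"Fastest adoption at step {peak}, "
      f"with {cumulative[peak]:.0%} of the system having adopted")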

Critical Mass Theory and Social Dynamics

Critical Mass Theory (CMT) aims to predict the probability, extent and effectiveness of group actions in pursuit of a collective good (Oliver et al., 1985). Communication scholars have adopted and applied CMT, recognizing information as a type of collective good. Various information sharing structures have properties that affect people's willingness to participate in information sharing, thus affecting its diffusion (Oliver & Marwell, 2001).

Viewing information as a collective good allows us to apply CMT to the diffusion process of new information. According to CMT, in a given group, individuals exhibit different levels of interest on a given topic and have diverse levels of resources to contribute to its realization. Individuals in the group are mobilized towards a collective action through sequential interdependence, namely, they make independent decisions one at a time, but past decisions of others are known to them and thus the total amount of prior contributions provided by the group influences subsequent contributions.

Sequential interdependence is visualized in a graph plotting the collective output of a group [p(r)] against the resources contributed by the group (r). Two main shapes are obtained. The first describes situations in which the earliest contributors have the greatest effect on achieving the public good, and subsequent contributors have progressively less effect. This situation is referred to as a decelerating production function. In the second situation, initial contributors have only negligible effects on achieving the collective good, and subsequent contributors yield the greatest effect. This situation is referred to as an accelerating production function. A decelerating production function leads to negative interdependence, where each contribution makes others' subsequent contributions less worthwhile, and thus less likely – for example, adding a book review to an existing review repository. Accelerating production functions, on the other hand, are characterized by positive interdependence, in which each contribution makes the next one more worthwhile and thus more likely – for example, joining a petition.
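A toy parameterization makes the two shapes concrete: with collective output p(r) = r^g, an exponent g < 1 yields a decelerating production function (early contributions add the most) and g > 1 an accelerating one (later contributions add the most). The sketch below, an illustration rather than the paper's own formulation, prints the marginal value of successive equal blocks of resources.

import numpy as np

r = np.linspace(0, 1, 6)[1:]  # five equal blocks of contributed resources
for g, label in [(0.5, "decelerating"), (2.0, "accelerating")]:
    p = r ** g                                      # collective output p(r)
    marginal = np.diff(np.concatenate(([0.0], p)))  # value added by each block
    print(label, np.round(marginal, 2))
# Shrinking marginal gains -> negative interdependence;
# growing marginal gains -> positive interdependence.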

Diffusion of Information Online

Widely used information diffusion models, such as the Independent Cascade model (Goldenberg et al., 2001; Kempe et al., 2003), the Continuous-Time Independent Cascade model (Gruhl et al., 2004), the Susceptible-Infected-Recovered model (Kermack & McKendrick, 1927), the Susceptible-Infected-Susceptible model (Pastor-Satorras & Vespignani, 2001) and the Rumor Spreading model (Zhao et al., 2011), are based on the assumption that exposure to information, like exposure to a virus, is enough for it to spread. The diffusion of digitized information, however, is more complex and involves more than just exposure. Wu et al. (2004) point out the differences between online information flows and the spread of viruses: while viruses tend to be indiscriminate, infecting any susceptible individual, information is selective and passed by its host only to individuals the host thinks would be interested in it. Furthermore, receivers of new information face the decision whether to consume the information, and upon consumption they choose whether to pass it on to others. Upon passing the information to others, transmitters have the freedom to modify it, i.e., reinvent it. Explicit retransmissions of information are key for information to spread and reach a critical mass of exposure, as in the case of word of mouth (WOM).
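For reference, the Independent Cascade model mentioned above can be simulated in a few lines: each newly informed node gets a single chance to pass the information to each neighbor with a fixed probability. The graph, seed set and probability below are illustrative, not the paper's settings.

import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=None):
    rng = rng or random.Random(0)
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        new_nodes = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in informed and rng.random() < p:
                    informed.add(v)      # v hears the information...
                    new_nodes.append(v)  # ...and gets one chance to pass it on
        frontier = new_nodes
    return informed

G = nx.barabasi_albert_graph(500, 3, seed=0)
print(len(independent_cascade(G, seeds=[0])))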

Factors Affecting Retransmission of Information

Some researchers emphasize attributes of content (Berger & Milkman, 2009; Berger & Heath, 2005) as affecting the retransmission of information, some point to usefulness (Wojnicki & Godes, 2008), and yet others focus on structural attributes of the network such as centrality (Bolland, 1988; Freeman, 1979; Canright & Engø-Monsen, 2006) and density (Gould, 1993; Watts & Dodds, 2007). Another line of research emphasizes the strength of ties (Granovetter, 1973; Goldenberg et al., 2001) and the role of the source of information – whether it is an influential individual (Goldenberg et al., 2009; Kempe et al., 2003) or an active individual (Stephen et al., 2010). We study the role of a new factor, reinvention of information, in adopters' inclination to retransmit information.

Figure 1. Decelerating production function; Figure 2. Accelerating production function (illustrating the production functions described in the previous section).

Reinvention is defined as the degree to which an innovation is changed or modified by a user in the process of adoption (Rogers, 1995). When applied to information, reinvention implies the modification of the information upon retransmission. In the context of CMT, it is of interest to examine whether reinvention activity manifests an accelerating or a decelerating production function, i.e., whether reinvention activity manifests positive or negative interdependence, and how these production functions affect the diffusion of information.

RESEARCH QUESTIONS

RQ1: Will information that has been reinvented reach an inflection point faster than information that has not?

RQ2: Will information that has been reinvented reach a larger final audience?

RQ3: Will the change in the value of information (i.e., relative vs. absolute RI) affect the dynamics of its spread?

METHODOLOGY

We use an agent-based mathematical model, implemented in MATLAB version 7.12 with discrete-step iterations, to demonstrate the influence of reinvention on the process of critical mass formation. The model is run with samples of data from actual networks harvested from the web, and it incorporates theoretical constructs, with model variables given values or probabilities based on prior literature (Oliver et al., 1985; Moldovan & Goldenberg, 2004).

The model first runs the flow of information without reinvention and then with reinvention, in order to isolate its unique effect. In addition, reinvention is operationalized so as to produce two types of production functions: accelerating and decelerating. For the accelerating production function, the value of information increases by a constant percentage (10% new information for each reinvention); this setting is referred to as "Relative RI". The decelerating production function is based on a constant value added to the value of information, leading to an ever-decreasing fraction of value added; this setting is referred to as "Absolute RI". In order to get representative results, we simulate the full process of information flow and repeat the full experiment many times; a typical number of repetitions is 2000.
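The two operationalizations can be illustrated without the full MATLAB model. The sketch below tracks the value of a piece of information over ten retransmissions: relative RI multiplies the value by 1.1 each time (a constant fraction of new content, the accelerating case), while absolute RI adds a fixed increment, so the fraction of new content shrinks (the decelerating case). The update rules mirror the description above; everything else is illustrative.

values_rel, values_abs = [1.0], [1.0]
for _ in range(10):                           # ten successive retransmissions
    values_rel.append(values_rel[-1] * 1.10)  # relative RI: +10% new information
    values_abs.append(values_abs[-1] + 0.10)  # absolute RI: constant increment

frac_new_rel = [(b - a) / b for a, b in zip(values_rel, values_rel[1:])]
frac_new_abs = [(b - a) / b for a, b in zip(values_abs, values_abs[1:])]
print([round(x, 3) for x in frac_new_rel])  # constant fraction of new value
print([round(x, 3) for x in frac_new_abs])  # ever-decreasing fraction of new value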

INITIAL RESULTS

Figure 3 shows the results of an exploratory simulation of the spread of information with and without reinvention. The graphs suggest that RI hinders the spread of information: more nodes are required to reach the inflection point, and the information propagates in the network for a shorter period of time.


Figure 3. Results of the exploratory simulation of the diffusion of information without reinvention (A) and with reinvention (B). X = number of nodes, Y = number of experiments.

In order to determine whether the change in value of reinvented information affects its spread, the two types of RI were incorporated in the simulation and compared to the spread of information without RI. In this simulation the process started with 4 isolated nodes.

Figure 4. Absolute RI Figure 5. Relative RI Figure 6. No RI

In all three cases, information spreads quickly to a small number of nodes, after which the spread slows and then peaks again. In both RI conditions, information reaches a larger final audience and requires a smaller number of nodes to reach the inflection point.

Further work involves running the simulation with different numbers of starting nodes, either peripheral nodes or hubs, and with different network topologies.

REFERENCES

Berger, J. A., & Heath, C. (2005). Idea habitats: How the prevalence of environmental cues influences the success of ideas. Cognitive Science, 29(2), 195–221.

Berger, J. A., & Milkman, K. L. (2009). Social transmission, emotion, and the virality of online content. SSRN eLibrary. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1528077

Bolland, J. M. (1988). Sorting out centrality: An analysis of the performance of four centrality models in real and simulated networks. Social Networks, 10(3), 233-253. doi:10.1016/0378-8733(88)90014-7

Canright, G. S., & Engø-Monsen, K. (2006). Spreading on networks: A topographic view. Complexus, 3(1), 131–146.


Freeman, L. C. (1979). Centrality in social networks conceptual clarification. Social networks, 1(3), 215–239.

Goldenberg, J., & Efroni, S. (2001). Using cellular automata modeling of the emergence of innovations. Technological Forecasting and Social Change, 68(3), 293–308.

Goldenberg, J., Han, S., Lehmann, D. R., & Hong, J. W. (2009). The role of hubs in the adoption process. Journal of Marketing, 73(2), 1–13.

Gould, R. V. (1993). Collective action and network structure. American Sociological Review, 58(2), 182–196.

Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.

Gruhl, D., Guha, R., Liben-Nowell, D., & Tomkins, A. (2004). Information diffusion through blogspace. Proceedings of the 13th international conference on World Wide Web (pp. 491–501).

Kempe, D., Kleinberg, J., & Tardos, É. (2003). Maximizing the spread of influence through a social network. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 137-146). Washington, D.C.: ACM. Retrieved from http://dx.doi.org/10.1145/956750.956769

Kermack, W. O., & McKendrick, A. G. (1927). Contributions to the mathematical theory of epidemics. Part I. Proc. R. Soc. A (Vol. 115, pp. 700–721).

Moldovan, S., & Goldenberg, J. (2004). Cellular automata modeling of resistance to innovations: Effects and solutions. Technological Forecasting and Social Change, 71(5), 425–442.

Oliver, P. E., & Marwell, G. (2001). Whatever happened to critical mass theory? A retrospective and assessment. Sociological Theory, 19(3), 292–311.

Oliver, P., Marwell, G., & Teixeira, R. (1985). A Theory of the Critical Mass. I. Interdependence, Group Heterogeneity, and the Production of Collective Action. The American Journal of Sociology, 91(3), 522-556.

Pastor-Satorras, R., & Vespignani, A. (2001). Epidemic spreading in scale-free networks. Physical review letters, 86(14), 3200–3203.

Rogers, E. M. (1995). Diffusion of innovations. Simon and Schuster.

Stephen, A. T., Dover, Y., & Goldenberg, J. (2010). A comparison of the effects of transmitter activity and connectivity on the diffusion of information over online social networks. SSRN eLibrary. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1609611

Watts, D. J., & Dodds, P. (2007). The accidental influentials. Harvard Business Review, 85(2), 22–23.

Wojnicki, A., & Godes, D. (2008). Word-of-mouth as self-enhancement. HBS Marketing Research Paper No. 06-01.

Wu, F., Huberman, B. A., Adamic, L. A., & Tyler, J. R. (2004). Information flow in social groups. Physica A: Statistical Mechanics and its Applications, 337(1), 327–335.

Zhao, L., Wang, J., Chen, Y., Wang, Q., Cheng, J., & Cui, H. (2011). SIHR rumor spreading model in social networks. Physica A: Statistical Mechanics and its Applications.


THE EFFECT OF MEDIA TYPES, MESSAGE CUES AND FRAMING ON INVESTMENT DECISIONS

A Research Proposal

Michal G. Yablowitz
University of Haifa
[email protected]

Daphne R. Raban
University of Haifa
[email protected]

INTRODUCTION

The main purpose of the following proposal is to examine the effects of tech blogs on VCs' investment decisions. Tech blogs specialize in profiling startups and breaking tech news for a readership of early adopters, entrepreneurs, venture capitalists and PR professionals. The combination of rapidly published online news items on business and technology issues for these stakeholders produces a significant blog industry, which may have far-reaching consequences in the highly uncertain startup world.

LITERATURE REVIEW

Venture capital investors (also known as VCs) are considered experts in identifying high-potential new ventures in highly uncertain environments. Thus, the process of decision making under such complex conditions has received considerable attention in the entrepreneurship literature. One of the areas of inquiry within this literature deals with the influence of the overconfidence bias on VCs' processing of information about new ventures. VCs who are subject to the overconfidence bias may ignore important new data while preferring information that confirms their existing views (MacMillan, Zeman, and Subba Narasimha, 1987; Robinson, 1987).

Social Judgment Theory and Familiar Framed Information

Zacharakis and Shepherd (2001), who focused on the influence of business information (such as business plans or executive summaries) on VCs' accurate predictions of new ventures, proposed that VCs are more confident about decisions based on familiarly framed information than about the same information framed in an unfamiliar way. Their proposition is supported by Social Judgment Theory (SJT), which claims that different types of information impact the decision process. SJT suggests that people make decisions using two different types of information factors: cognitive cues and task cues (Brunswik, 1956). Cognitive cues are information factors that expert VCs deem best at discriminating between success and failure, such as leadership ability or market size. However, these cues may not be optimal in distinguishing between outcomes (Zacharakis and Shepherd, ibid). Task cues, on the other hand, are the information factors that best distinguish between possible outcomes, like completeness of team or time to product development. They are proxies for information which VCs deem important but that is not in a familiar form. Since the VC's decision-making process is often based on intuition, cognitive cues that reinforce a VC's self-confidence are the preferred information factors in the VC's decision-making process (Oaksford, Morris and Williams, 1996).


Availability Cascade Influence on Crowd Beliefs

Kuran and Sunstein (1999) suggest another kind of cognitive information characteristic that has a positive effect on VCs' confidence due to its macro-level approach: the availability cascade. According to this concept, inaccurate information may dominate collective beliefs and actions due to its high availability. As a result, this information becomes widely used and persistently available at the collective level, crowding out alternatives and becoming dominant in the decision-making process (Sherman, Cialdini, Schwartzman, & Reynolds, 1985). As the availability cascade concept suggests, VCs' collective beliefs are influenced by information that is available, but not necessarily accurate. This raises questions about the selection and interpretation of this information by VCs, and how these are influenced by the media.

Framing Theory

Mass communication research has documented the impact of media reports on public perceptions of political candidates and their framing as positive or negative (Golan & Wanta, 2001; McCombs, 1981; McCombs et al., 1997). Based on the assumption that media coverage also frames social issues, Framing Theory claims that in conditions of uncertainty, media coverage contains a high degree of information generated by opinion leaders, such as journalists and financial analysts, who express their judgment and in this way enable individuals to imitate their opinion. Since positive or negative media coverage represents a public evaluation, it serves as a source of "social proof" that increases the legitimacy of a new venture (Pollock and Rindova, 2003).

Framed Media Coverage Influences on Investor Behavior

Pollock and Rindova argue that media-provided information affects investors' impressions of newly public firms. They discovered that beyond a certain level, positive media tenor increases perceptions of value, thereby increasing the demand for a firm's stock and driving its price up. In summary, the relations between the central players who affect VC behavior can be described as a funnel which drains the macro-level influence of the media, through crowd beliefs, into the work patterns of an individual VC's venture assessment (Figure 1).

A Proposed Theoretical Model

The figure below refers only to familiar cognitive cues because of their proven superiority in VCs' information processing of new ventures.


Figure 1. Media influence upon a VC behavior – a proposed funnel model

As stated, newspaper and magazine reports impact the ways stakeholders interpret information about firms by framing them in positive or negative terms. However, blogs function differently from any traditional news medium.

Distinctions between traditional media coverage and blog coverage

Woodly (2007) suggests that bloggers' independence from corporate ownership makes this form of communication unique. One of the most important norms for establishing credibility in the blogosphere relies on independence from the influences of traditional media. Bloggers therefore seek sources beyond officials' statements and contribute analyses that officials have missed or ignored. It is thus necessary to further examine the impact of financial tech-blogs on their readership of analysts and VCs.

Blogs' influence on decision making

According to McKenzie and Özler (2011), there is some indication of the impact of economics blogs on public policy. For example, a senior blogger at the Center for Global Development blogged about Kiva, a popular micro-lending website, and pointed out problems in Kiva's loan method. This post led to modifications of Kiva's website. Thus, policy changes due to a post in an economics blog are possible; but can this happen in VCs' decision making?

HYPOTHESES

As stated by Zacharakis and Shepherd (2001), familiar cognitive cues function as the superior cues in VCs' information processing of new ventures. As shown by Pollock and Rindova (2003), positive media tenor increases the demand for a firm's stock. Therefore I hypothesize that: 1. When positive framing is applied, cognitive cues will generate more investment decisions than task cues, whether delivered in a traditional newspaper or in a blog. As discovered by Woodly (2007) and Hugo & Muslu (2010), tech-bloggers, as opposed to traditional media, tend to rely on unofficial sources, whereas VCs prefer conservative information sources. Therefore I hypothesize that: 2. When either cognitive or task cues are used, positive framing will generate more investment decisions than negative framing, whether delivered in a traditional newspaper or in a blog. 3. Cues provided with positive framing in a traditional newspaper article will result in more investment decisions than the same cues delivered in a blog. The independent variables matrix with the derived research groups is described in the following table:

Table 1. Independent variables and research groups' matrix

                      Framing
Cue type              Positive (A)    Negative (B)
Cognitive (C)              1               2
Task (D)                   3               4

(Each of the four groups is run in each medium – blog and traditional newspaper – yielding eight groups in total.)

Interactions among the three independent variables will be analyzed as well.
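A minimal sketch (not part of the proposal itself) of how such a three-way factorial analysis could be run, assuming a hypothetical data frame in which each row holds one participant's invested amount together with the three manipulated factors:

```python
# Hypothetical illustration: a full-factorial ANOVA over the three
# independent variables (framing, cue type, medium); data are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "framing": ["positive", "negative"] * 60,
    "cue":     (["cognitive"] * 2 + ["task"] * 2) * 30,
    "medium":  (["newspaper"] * 4 + ["blog"] * 4) * 15,
    "amount":  [x % 50 for x in range(120)],  # placeholder outcome values
})

# Main effects plus all two- and three-way interactions.
model = ols("amount ~ C(framing) * C(cue) * C(medium)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```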


METHODOLOGY

Hypotheses will be examined in an experiment with 120 participants, divided between two main groups: A. Blog group – participants who will be exposed to a tech-blog post; B. Newspaper group – participants who will be exposed to a financial newspaper article. Each main group will be divided into four sub-groups (eight groups altogether). Each sub-group will be exposed to a different version of the text. The versions differ, in accordance with the research hypotheses, in two parameters: I. framing; II. cue type. The blog's name, the newspaper's name and the covered company's name will be replaced to control for confounding variables. For the sake of uniformity, all the newspaper and blog articles will be about the same start-up company.

Procedure

In a web-based game, participants will receive an initial sum of play money for investment and will be exposed to one of the eight versions of information described above. Within a limited time frame, each participant will make an investment decision regarding the start-up company s/he read about. After reading, each participant will choose one of three options: investing (and the sum to invest), not investing, or a costly request for more information to improve the investment decision. The participants' goal is to finish with the highest possible sum of money relative to the initial sum received. After the decision is made, each participant is shown the loss or profit resulting from that decision. Participants' decisions are saved in a database for further statistical analysis.
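A minimal sketch (an assumption-laden illustration, not the authors' implementation) of the game flow: random assignment to one of the eight versions, and recording of each decision for later analysis. All names (table, columns, choice labels) are hypothetical:

```python
import random
import sqlite3

# 2 (medium) x 2 (framing) x 2 (cue type) = the 8 versions of the text
CONDITIONS = [(m, f, c)
              for m in ("newspaper", "blog")
              for f in ("positive", "negative")
              for c in ("cognitive", "task")]

db = sqlite3.connect("decisions.db")  # hypothetical storage
db.execute("""CREATE TABLE IF NOT EXISTS decisions
              (participant TEXT, medium TEXT, framing TEXT, cue TEXT,
               choice TEXT, amount REAL)""")

def assign() -> tuple:
    """Randomly assign a participant to one of the eight conditions."""
    return random.choice(CONDITIONS)

def record(participant: str, condition: tuple, choice: str, amount: float = 0.0):
    """Store a decision: 'invest' (with a sum), 'no_invest' or 'buy_info'."""
    db.execute("INSERT INTO decisions VALUES (?, ?, ?, ?, ?, ?)",
               (participant, *condition, choice, amount))
    db.commit()
```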

IN CONCLUSION

The proposed research offers a novel theoretical model and an innovative research tool for examining the effect of media types, message cues and framing on VCs' investment decisions. Confirming the research hypotheses using the suggested theoretical model, along with examining the interactions among the independent variables, constitutes the envisioned challenge of this research proposal.

REFERENCES

Brunswik, E. 1956. Perception and the Representative Design of Experiments. Berkeley: University of California Press.

Golan, G., & Wanta, W. 2001. Second-level agenda setting in the New Hampshire primary: A comparison of coverage in three newspapers and public perceptions of candidates. Journalism and Mass Communication Quarterly, 78: 247–259.

Kuran, T., & Sunstein, C.R. 1999. Availability cascades and risk regulation. Stanford Law Review, 51: 683–768.

McCombs, M. E. 1981. The agenda setting approach. In D. Nimmo & K. R. Sanders (Eds.), Handbook of political communication: 121–140. Newbury Park, CA: Sage.

MacMillan, I. C., Zemann, L., & SubbaNarasimha, P. N. 1987. Criteria distinguishing successful from unsuccessful ventures in the venture screening process. Journal of Business Venturing, 2: 123–137.


McCombs, M., Llamas, J. P., Lopez-Escobar, E., & Rey, F.1997. Candidate images in special elections: Second-level agenda setting effects. Journalism and Mass Communication Quarterly, 74: 703–717.

McKenzie, D., & Özler, B. 2011. The impact of economics blogs. CEPR Discussion Paper No. DP8558.

Oaksford, M., Morris, F., Grainger, B., & Williams, J. M. G. 1996. Mood, reasoning, and central executive processes. Journal of Experimental Psychology: Learning, Memory and Cognition. 22: 477-493.

Pollock, T. G., & Rindova, V. P. 2003. Media legitimation effects in the market for initial public offerings. Academy of Management Journal, 46: 631–642.

Robinson, R.B. 1987. Emerging strategies in the venture capital industry. Journal of Business Venturing, 2:53–77.

Sherman, S. J., Cialdini, R. B., Schwartzman, D. F., & Reynolds, K. D. 1985. Imagining can heighten or lower the perceived likelihood of contracting a disease: The mediating effect of ease of imagery. Personality and Social Psychology Bulletin, 11: 118–127.

Woodly, D., 2007. New competencies in democratic communication? Blogs, agenda setting and political participation. Public Choice, 134: 109–123.

Zacharakis, A. L., & Shepherd, D. A. 2001. The nature of information and overconfidence on venture capitalists' decision making. Journal of Business Venturing, 16: 311–332.


INTERRUPTIONS IN ONLINE ENVIRONMENT

Eilat Chen Levy*
* Graduate School of Management & Sagy Center for Internet Research, University of Haifa
[email protected]

Sheizaf Rafaeli*
* Graduate School of Management & Sagy Center for Internet Research, University of Haifa
[email protected]

Yaron Ariel**
** Department of Communication, Yezreel Valley College
[email protected]

Keywords: Interruption, Media/Information Richness Theory, Online-Computerized Business Game.

INTRODUCTION

This paper examines the effect of interruptions in online environments on cognitive performance. The aim of this research is twofold: first, to explore whether external online interruptions have different effects on cognitive performance depending on the 'richness' of the interruption; and second, to assess time as an intermediate variable in the process.

In a simulated online computerized business game, 114 participants were exposed to external online interruptions manipulated through 'push' technology. Four experimental groups and one control group were compared after exposure to four different formats of the same ads: mobile phone (SMS/MMS) and WWW (text/banner).

THEORETICAL BACKGROUND

Defining interruption

Following Coraggio (1990), interruption is defined here as "an externally-generated, randomly occurring, discrete event that breaks continuity of cognitive focus on a primary task" (p. 19). McFarlane (1998) suggested another definition: "Human interruption is the process of coordinating abrupt change in people's activities" (p. 119). Mark et al. (2008) found that frequent interruptions made people complete tasks in less time with no difference in quality, compensating for the interruptions by working faster. External online interruptions can materialize in many shapes and forms. This research posits online advertisements as the external interruption of cognitive activity. The main research question is whether external online interruptions have different effects on cognitive performance depending on the 'richness' of the interruption.

The Richness of Information

In the present research, online ads were built in accordance with the assumptions of Media Richness Theory (Daft & Lengel, 1984; Daft, Lengel, & Trevino, 1987).


Media/Information Richness Theory (Daft & Lengel, 1984, 1986; Daft, Lengel, & Trevino, 1987; Trevino, Lengel, & Daft, 1987) is one of the more controversial theories in computer-mediated communication research (Walther & Burgoon, 1992; King & Xia, 1997; Robert & Dennis, 2005). The theory assumes that media differ in their ability to carry varied types of information and 'social richness' (Short, Williams, & Christie, 1976; Daft & Lengel, 1984; Rice, 1993). Criticism of the theory focuses on the classification of media as 'rich' or 'poor' and on the lack of distinction between the technological capabilities of a medium and the message transmitted through it.

A medium's richness is based on four criteria (Daft et al., 1987): (1) the number of cues, (2) the immediacy of feedback, (3) natural language, and (4) message personalization. The theory ranks media hierarchically, in order of decreasing richness: face-to-face communication as the richest, followed by telephone, letter, note and memo, with flyer and newsletter as the leanest (Figure 1).

Figure 1: Hierarchy of Media Richness (Daft, Lengel & Trevino, 1987).
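As an illustration only, the four criteria can be encoded as rough per-medium scores and the media ranked by their totals; the numbers below are our own loose assumptions, not values prescribed by the theory:

```python
# Toy richness scores (1 = low, 4 = high) on the four Daft et al. criteria:
# cues, feedback immediacy, natural language, personalization.
MEDIA = {
    "face to face":     (4, 4, 4, 4),
    "telephone":        (3, 4, 4, 3),
    "letter":           (2, 2, 3, 3),
    "memo":             (2, 1, 3, 2),
    "flyer/newsletter": (1, 1, 2, 1),
}

ranked = sorted(MEDIA, key=lambda m: sum(MEDIA[m]), reverse=True)
print(" > ".join(ranked))  # richest to leanest, matching the hierarchy above
```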

This paper adopts Walther's (2002) criticism of theories of computer-mediated communication. Walther proposes that the lack of visual and face-to-face cues inherent in CMC is a disadvantage to be overcome over time. Thus, scholars should consider the influence of time length on the performance quality of computer-mediated interactions. Experimental designs that compare various channels through which information is transferred (e.g., computer-mediated interactions and face-to-face interaction) tend to ignore the importance of time as an intermediate variable in the process.

RESEARCH HYPOTHESES

The manipulation was achieved by exposing participants to lean or rich ads (see Table 1).

(H1) A significant difference in time performance will be found between participants interrupted via 'poor' ads and those interrupted via 'rich' ads.


(H2) A significant difference in the quality performance of cognitive activity will be found between participants interrupted via 'poor' ads and those interrupted via 'rich' ads.

RESEARCH DESIGN

An experimental design based on a computerized simulation game (named 'Sea Trader') measured participants' game scores as an indicator of their cognitive performance. Four experimental groups and one control group (n=114) were manipulated by external interruptions to their activities in the simulation game, through exposure to four different formats of the same ads (see Figure 2).

Figure 2: Ads

Online ads were built in accordance with the assumptions of Media Richness Theory. Using 'push' technology, ad interruptions were delivered via mobile phone (SMS or MMS) and WWW (text or banner). Table 1 illustrates the four experimental groups.

Table 1: Experiment Groups

                                Medium Type
Message Richness    Mobile Phone                    WWW (Internet application)
Poor Message        Text message (SMS)              Text banner
Rich Message        Picture message + text (MMS)    Picture banner

RESULTS AND DISCUSSION


Results indicate a significant difference between the groups' average time for completing the cognitive task [F(4)=3.79, p<.01]. The richest messages required the greatest length of time. Therefore, research hypothesis H1 was confirmed.

One-way ANOVA was conducted to test whether the experimental groups differ in their adjusted cognitive performance score. The adjusted score considered the ratio between the performance score and the time required to complete the task. Results indicate significant differences between groups in cognitive performance (F(4)=2.27, p<.05), in descending order: non-interrupted group; WWW text; SMS; WWW banner; MMS. Contrary to the assumption of Media Richness Theory, interruption by 'poor' ads (SMS/text) led to better performance. Therefore, H2 was also confirmed. (Cognitive performance, without taking time into account, was found to be higher in groups interrupted via rich messages, followed by the uninterrupted group.)
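A minimal sketch (with simulated, hypothetical scores) of the kind of one-way ANOVA reported above, comparing the control group with the four interruption formats:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-group score distributions; ~23 participants per group.
groups = {name: rng.normal(loc, 5, size=23)
          for name, loc in [("control", 60), ("www_text", 57), ("sms", 55),
                            ("www_banner", 52), ("mms", 50)]}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```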

This paper implies that, contrary to the essential assumption of Media Richness Theory, exposure to rich messages leads to better performance only under a time-limit constraint. Contradictory results based on message richness might therefore occur in any study that does not consider time as an intermediate variable.

REFERENCES

Coraggio, L. (1990). Deleterious effects of intermittent interruptions on the task performance of knowledge workers: A laboratory investigation. Unpublished doctoral dissertation, University of Arizona.

Daft, R. L., & Lengel, R. H. (1984). Information richness: A new approach to managerial information processing and organizational design. In L. L. Cummings & B. M. Staw (Eds.), Research in organizational behavior (pp. 191-234). Greenwich, CT: JAI.

Daft, R., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5), 554-571.

Daft, R. L., Lengel, R. H., & Trevino, L. K. (1987). Message equivocality, media selection, and manager performance: Implications for information systems. MIS Quarterly, 11(3), 355-366.

King, R. C., & Xia, W. D. (1997). Media appropriateness: Effects of experience on communication media choice. Decision Sciences, 28(4), 877-910.

Mark, G., Gudith, D., & Klocke, U. (2008). The cost of interrupted work: More speed and stress. Paper presented at the Conference on Human Factors in Computing Systems, Florence, Italy.

McFarlane, D. C. (1998). Interruption of people in human-computer interaction. Doctoral dissertation, George Washington University.

Robert, L. P., & Dennis, A. R. (2005). Paradox of richness: A cognitive model of media choice. IEEE Transactions on Professional Communication, 48(1), 10-21.

Rice, R. E. (1993). Using social presence theory to compare traditional and new organizational media. Human Communication Research, 19(4), 451-484.

Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: John Wiley & Sons.

Trevino, L., Lengel, R., & Daft, R. (1987). Media symbolism, media richness, and media choice in organizations: A symbolic interactionist perspective. Communication Research, 14(5), 553-574.

Walther, J. B., & Burgoon, J. K. (1992). Relational communication in computer-mediated interaction. Human Communication Research, 19(1), 50-88.


THE EFFECTS OF ELECTRONIC MEDICAL RECORDS ON PHYSICIAN-PATIENT COMMUNICATION – A THEORETICAL FRAMEWORK

Assis-Hassid Shiri
Department of IE&M, Ben-Gurion University of the Negev
[email protected]

Heart Tsipi
Department of IE&M, Ben-Gurion University of the Negev
[email protected]

Reychav Iris
Department of IE&M, Ariel University Center, Ariel, Israel
[email protected]

Pliskin Joseph
Department of IE&M, Ben-Gurion University of the Negev
[email protected]

Keywords: Electronic Medical Record, primary care, medical encounter, task-technology fit, physician-patient communication

INTRODUCTION

Thousands of medical interactions have been studied to elucidate the key 'ingredients' of a good consultation in primary medical care, and effective physician-patient communication has emerged as an important means to this end (Mead, Bower, & Hann, 2002). Consequently, researchers have adopted the concept of 'patient-centered care' as an indicator of good-quality consulting.

In the last decade, most healthcare policy makers have come to believe that physicians should use Electronic Medical Record (EMR) systems instead of paper-based records, yet adoption rates have not met expectations. In fact, recent studies show that fewer than 30% of physicians in the U.S. use EMRs to some extent, and even fewer use comprehensive systems (Bates et al., 2003; DesRoches et al., 2008). Moreover, research shows that computer use in the clinic affects physician-patient communication, which theories from the medical discipline hypothesize to affect medical outcomes. Indeed, among the barriers to adoption is the notion that the EMR system may not fit the primary care medical encounter, for example the narrative nature of the medical interview (Patel, Arocha, & Kushniruk, 2002). Thus, in this study we address the issue of Task-EMR Fit by integrating theories from the Medicine and IS disciplines into a model describing factors affecting EMR use, focusing on the fit between the EMR system and the medical encounter in primary care.

THE ELECTRONIC MEDICAL RECORD (EMR) SYSTEM

The most apparent manifestation of IT in healthcare is the continually growing use of Electronic Medical Records (EMRs) in the exam room. The EMR is defined as a computer system for storing, organizing and retrieving information about patients (McGrath, Arar, & Pugh, 2007). EMRs typically include a problem list, medication list, allergy list, notes, health maintenance information, and results retrieval (laboratory, radiology and other test results), as well as prescribing tools (Bates et al., 2003). Table 1 shows EMR adoption rates by physicians in various countries, revealing particularly low usage rates in North America.

Table 1. Comparison of rates of Electronic Medical Record use (Jha et al., 2008)

Country        EMR adoption (%)
Australia      79–90
Canada         20–23
Germany        42–90
Netherlands    95–98
New Zealand    92–98
UK             89–99
US             24–28
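A minimal sketch of the record components listed in the preceding paragraph as a simple data structure; the field names are illustrative only and do not reflect any real EMR schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EMRecord:
    """Toy container mirroring the typical EMR contents named in the text."""
    patient_id: str
    problems: List[str] = field(default_factory=list)       # problem list
    medications: List[str] = field(default_factory=list)    # medication list
    allergies: List[str] = field(default_factory=list)      # allergy list
    notes: List[str] = field(default_factory=list)          # encounter notes
    results: List[str] = field(default_factory=list)        # lab/radiology results
    prescriptions: List[str] = field(default_factory=list)  # prescribing tools
```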


The motivation to use computerized information systems in healthcare is driven by the expectation that such systems will improve quality of care and patient safety and lower medical costs (Bates & Gawande, 2003). Several potential negative effects of computer use in primary care may have slowed the EMR dissemination process. Concerns have been raised about increased costs, lengthened visit time, additional training needs and treatment inflexibility (Greatbatch et al., 1995; Miller & Sim, 2004), as well as potential privacy and confidentiality breaches that may lower rapport between physician and patient (Patel et al., 2002; Warshawsky et al., 1994). Bates et al. (2003) argue that physician resistance to using EMRs as part of their routine may be an additional barrier. This resistance may derive from physicians' perception that EMR use will negatively affect their workflow.

PHYSICIAN-PATIENT COMMUNICATION

The concept of patient-centeredness has evolved from the bio-psychosocial model (Engel, 1977; Engel, 1980). In practice, physicians need to be attentive to the stories patients tell, to nuances of the patient's body language, to facial expressions, etc. (Epstein et al., 1993; Engel, 1977).

Well before the introduction of the EMR and computerization into the medical environment, researchers addressed the importance of non-verbal communication in the medical encounter. For example, it has been suggested that physicians' non-verbal skills are associated with outcome variables such as patient satisfaction, patient recall of medical information, compliance with keeping appointments, and compliance with medical regimens (DiMatteo et al., 1980; DiMatteo, Hays, & Prince, 1986; Hall, Roter, & Rand, 1981; Hall, Harrigan, & Rosenthal, 1996; Larsen & Smith, 1981; Ong, De Haes, Hoos, & Lammes, 1995). Physicians' indirect or broken eye contact and indirect facial orientation have been associated with less patient disclosure (Duggan & Parrott, 2001). Physicians have been classified according to their communication skills as either ‘instrumental’ or ‘socio-emotional’, where the former is task oriented, whereas the latter also pays attention to verbal as well as non-verbal interpersonal aspects of the interaction (Hall & Dornan, 1988).

DYADIC RELATIONSHIPS TRANSFORM INTO TRIADIC

Most of the research and teaching on patient-centered communication has assumed that the relationship between physician and patient is purely dyadic. However, the computer may be playing a more significant and visible role, turning this relationship into a triadic one (Pearce et al., 2008). A computerized exam room has the potential to shift the physician's attention and involvement away from the patient to the keyboard and monitor. Since attention to the patient may be associated with positive outcomes of care (Pearce et al., 2008), it is important to examine how computer use enhances and/or interferes with the physician's attention to the patient, and the effects on physician-patient communication.

The Effects of EMR use on the Physician-Patient Relationship

An early study by Greatbatch et al. (1995) that videotaped physicians' EMR use found that physicians were 'pre-occupied', with attention largely focused on the computer monitor and only occasionally on the patient, and that visits were characterized by long periods of silence and minimal verbal engagement with patients. According to Shachak and Reis (2009), the EMR had a positive impact on information-related tasks and information exchange. However, the EMR was found to have a negative impact on establishing rapport with patients, and physicians rarely used the computer for patient education and behavioral management (Shachak & Reis, 2009).

Makoul et al. (2001) found that EMR use strengthened the physician's ability to complete information tasks but reduced the physician's attention to patient-centered aspects of communication. Patient disclosure of biomedical information to the physician was positively related to the level of physician keyboarding. Interestingly, emotional exchange (empathy, concern, reassurance) appeared to lessen with screen gaze (Margalit et al., 2006). These studies thus imply that current EMRs may not appropriately fit the medical task in primary care.

TASK-TECHNOLOGY FIT AND RESEARCH MODEL

According to TTF theory, for an information technology to have a positive effect on individual performance, the technology must fit the tasks it supports. TTF focuses on situations in which system use is mandatory and asserts that when a technology "fits" the requirements of the tasks, it is more likely to be used effectively, and positive performance impacts will follow.

The TTF model shows how technology, user tasks and user/individual characteristics (training, experience, motivation) relate to changes in performance – also referred to as the technology-to-performance chain (TPC). The antecedents of TTF, as specified by Goodhue and Thompson (1995), are the interactions between task, technology and individual. In the case of this study, this means that if the EMR effectively supports the physician's tasks during the medical interview, it will be used effectively and yield positive results in all aspects of the medical encounter, such as better allocation of time per patient, increased data reliability and better communication with patients. Individual characteristics of the physician, such as motivation, training and computer skills, interact with the task and technology at hand to affect the physician's performance.
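As an illustration of the fit idea (our own simplification, not Goodhue and Thompson's instrument), a toy "fit" score can be computed as the degree to which the technology covers what the task demands:

```python
# Hypothetical requirement/capability profiles for the medical interview
# and an EMR; the attribute names and weights are illustrative assumptions.
TASK_NEEDS = {"free_narrative": 3, "structured_entry": 2, "recall": 3}
EMR_OFFERS = {"free_narrative": 1, "structured_entry": 3, "recall": 3}

def fit_score(needs: dict, offers: dict) -> float:
    """Higher when the technology covers more of what the task demands."""
    covered = sum(min(needs[k], offers.get(k, 0)) for k in needs)
    return covered / sum(needs.values())

print(f"Task-EMR fit: {fit_score(TASK_NEEDS, EMR_OFFERS):.2f}")  # 0.75
```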

Building on the above rationale, we propose a model that explains the factors affecting a physician's communication behavior while using the EMR, termed 'Observed Fit' (Figure 1). It is hypothesized that Observed Fit is positively affected by the physician's perceived fit of the EMR and by his/her initial communication style. These two factors form the subjective construct termed 'Task-EMR Fit', which in turn is affected by external variables.

Figure 1. The Task-EMR Fit Model


Assessing the model can yield several contributions to research and practice. First, as far as we know, EMR fit to the primary care medical task has not been measured before, leaving an important adoption factor unexamined. Second, we introduce a new construct – perceived fit – which describes the physician's prior conception of EMR use as part of the medical task.

CONCLUSIONS

It has been shown that good physician-patient communication constitutes the heart of primary care; it is one of the most important tools available to a physician and holds the potential to improve health outcomes.

It seems that in order to realize the potential benefits of incorporating the EMR into healthcare, models of successful IS implementation and long-term use cannot be ignored; they must be integrated into medical education models, aimed at affecting task and physician characteristics, and into EMR development processes in order to adjust the system's characteristics.

Our literature review shows that little research has explored the effects of EMR implementation on the physician-patient relationship, yet what has been done implies possible adverse implications. Furthermore, interdisciplinary research that synthesizes knowledge from the IS and Medicine disciplines in order to improve ICT acceptance by medical staff is even rarer. As EMR use around the world continuously expands, further research in this area must be conducted.

To our knowledge, most IS theories, such as TTF, have not yet been applied or researched thoroughly in healthcare. We find it particularly important to study and apply these theoretical constructs in the context of healthcare in order to maximize the potential benefits offered by the implementation of EMRs in primary care.


REFERENCES

Bates, D. W., Ebell, M., Gotlieb, E., Zapp, J., & Mullins, H. (2003). A proposal for electronic medical records in US primary care. Journal of the American Medical Informatics Association, 10(1), 1-10.

Bates, D. W., & Gawande, A. A. (2003). Improving safety with information technology. New England Journal of Medicine, 348(25), 2526-2534.

DesRoches, C. M., Campbell, E. G., Rao, S. R., Donelan, K., Ferris, T. G., Jha, A., et al. (2008). Electronic health records in ambulatory care—a national survey of physicians. New England Journal of Medicine, 359(1), 50-60.

DiMatteo, M. R., Hays, R. D., & Prince, L. M. (1986). Relationship of physicians' nonverbal communication skill to patient satisfaction, appointment noncompliance, and physician workload. Health Psychology, 5(6), 581-594.

DiMatteo, M. R., Taranta, A., Friedman, H. S., & Prince, L. M. (1980). Predicting patient satisfaction from physicians' nonverbal communication skills. Medical Care, 18(4), 376-387.

Duggan, P., & Parrott, L. (2001). Physicians' nonverbal rapport building and patients' talk about the subjective component of illness. Human Communication Research, 27(2), 299-311.

Engel, G. L. (1977). The need for a new medical model: A challenge for biomedicine. Science, 196(4286), 129-136.

Engel, G. L. (1980). The clinical application of the biopsychosocial model. The American Journal of Psychiatry, 137(5), 535-544.

Epstein, R. M., Campbell, T. L., Cohen-Cole, S. A., McWhinney, I. R., & Smilkstein, G. (1993). Perspectives on patient-doctor communication. The Journal of Family Practice, 37(4), 377-388.

Goodhue, D. L. (1998). Development and measurement validity of a task-technology fit instrument for user evaluations of information systems. Decision Sciences, 29, 105-138.

Greatbatch, D., Heath, C., Campion, P., & Luff, P. (1995). How do desk-top computers affect the doctor-patient interaction. Family Practice, 12(1), 32-36.

Hall, J. A., Harrigan, J. A., & Rosenthal, R. (1996). Nonverbal behavior in clinician--patient interaction. Applied and Preventive Psychology, 4(1), 21-37.

Hall, J. A., Roter, D. L., & Rand, C. S. (1981). Communication of affect between patient and physician. Journal of Health and Social Behavior, 22(1), 18-30.


Hall, J. A., & Dornan, M. C. (1988). What patients like about their medical care and how often they are asked: A meta-analysis of the satisfaction literature. Social Science & Medicine, 27(9), 935-939.

Jha, A. K., Doolan, D., Grandt, D., Scott, T., & Bates, D. W. (2008). The use of health information technology in seven nations. International Journal of Medical Informatics, 77(12), 848-854.

Larsen, K. M., & Smith, C. K. (1981). Assessment of nonverbal communication in the patient-physician interview. The Journal of Family Practice, 12(3), 481-488.

Makoul, G. (2001). The SEGUE framework for teaching and assessing communication skills. Patient Education and Counseling, 45(1), 23-34.

Margalit, R. S., Roter, D., Dunevant, M. A., Larson, S., & Reis, S. (2006). Electronic medical record use and physician-patient communication: An observational study of Israeli primary care encounters. Patient Education and Counseling, 61(1), 134-141.

McGrath, J. M., Arar, N. H., & Pugh, J. A. (2007). The influence of electronic medical record usage on nonverbal communication in the medical interview. Health Informatics Journal, 13(2), 105-118.

Mead, N., Bower, P., & Hann, M. (2002). The impact of general practitioners' patient-centredness on patients' post-consultation satisfaction and enablement. Social Science & Medicine, 55(2), 283-299.

Miller, R. H., & Sim, I. (2004). Physicians’ use of electronic medical records: Barriers and solutions. Health Affairs, 23(2), 116-126.

Ong, L. M. L., De Haes, J. C. J. M., Hoos, A. M., & Lammes, F. B. (1995). Doctor-patient communication: A review of the literature. Social Science & Medicine, 40(7), 903-918.

Patel, V. L., Arocha, J. F., & Kushniruk, A. W. (2002). Patients' and physicians' understanding of health and biomedical concepts: Relationship to the design of EMR systems. Journal of Biomedical Informatics, 35(1), 8-16.

Pearce, C., Trumble, S., Arnold, M., Dwan, K., & Phillips, C. (2008). Computers in the new consultation: Within the first minute. Family Practice.

Shachak, A., & Reis, S. (2009). The impact of electronic medical records on patient–doctor communication during consultation: A narrative literature review. Journal of Evaluation in Clinical Practice, 15(4), 641-649.

Warshawsky, S. S., Pliskin, J. S., Urkin, J., Cohen, N., Sharon, A., Binztok, M., et al. (1994). Physician use of a computerized medical record system during the patient encounter: A descriptive study. Computer Methods and Programs in Biomedicine, 43(3-4), 269-273.


PERSONALIZING CLINICAL DECISION-SUPPORT SYSTEMS: ELICITING CATEGORIES OF PERSONAL CONTEXT AND EFFECTS

Adi Fux, Mor Peleg, Pnina Soffer
Department of Information Systems, University of Haifa
[email protected], [email protected], [email protected]

ABSTRACT. Personalization of information systems in general, and of decision-support systems in particular, is a topic that has recently been receiving much attention. For the advice recommended by a decision-support system to be suitable for its users, the system needs to take account of personal information and preferences. We are interested in researching the personalization of decision-support systems in the medical domain. Patients' adherence to treatment depends on its suitability to the patient's personal needs and social and demographic situation. Formalizing this knowledge of socio-demographic patient context and its effects on clinical decision-making would allow it to be used to develop decision-support systems that deliver personalized decision support to the patient and his or her care providers. Furthermore, formalizing the types of socio-demographic context can help structure their inclusion in personal health records (PHRs), to support the acquisition of personal data and to enable automatic personalized decision support. In order to develop a personalized clinical decision-support system, we interviewed physicians, nurses, other care providers and patients to elicit the main personal categories and their effects on treatment recommendations.

Background. "Personalized health care is information-based health care. It is health care that works better for each patient, based partly on scientific information that is new and partly on technology to make complex information useful…personalized health care is about a transformed role for information in health care." (Former Secretary of the United States Department of Health and Human Services Michael O. Leavitt, 2007). This quotation captures the essence of successful delivery of personalized medicine (Garets & Davis, 2006); others present the potential of personalized treatment recommendations through electronic personal health records that allow patients to enter their needs or preferences (Pagliari, Detmer, & Singleton, 2007). Our goal is to develop a personalized, clinical-guideline-based decision-support system (DSS) that would ensure that chronic patients, after stabilization, receive optimal treatment based on their clinical state as well as their preferences and personal context. Clinical guidelines provide recommendations that are specific to the patient's clinical data, which are often codified in the patient's electronic health record (Gamberger et al., 2006). However, guidelines do not thoroughly address the ways in which personal considerations should influence medical recommendations. Another hurdle for personalizing DSSs is that such personal information is recorded in electronic health records as unstructured textual remarks (Chang, 2009) that are difficult to analyze automatically.

In order to systematically personalize DSSs, we first need to understand which categories of information should be considered as personal context and patient preferences (e.g., the availability of family members to assist the patient, whether the patient is currently travelling, the patient's risk-aversion) and the types of effects that personal context can have on recommended clinical actions (e.g., deferring a clinical action, or changing a dosage to avoid a side effect that the patient cannot cope with). The emerging categories will help us develop an ontology to be integrated into computer-interpretable guideline (CIG) formalisms. The updated CIG, with the context and effect ontology, could provide personalized recommendations.
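A minimal sketch (our own illustration, not the authors' ontology or CIG formalism) of how a context-effect rule might adjust a guideline recommendation, echoing the interview quote in Table 1 below about a patient who lives alone and is starting insulin:

```python
# Hypothetical rule: personal context modifies the recommended clinical
# action (here, the treatment schedule and follow-up), not the clinical goal.
def personalize(recommendation: dict, context: dict) -> dict:
    """Apply context -> effect rules to a base guideline recommendation."""
    rec = dict(recommendation)
    if rec.get("drug") == "insulin" and context.get("family_status") == "lives alone":
        rec["schedule"] = "once daily, long-acting"
        rec["follow_up"] = "nurse home visit within one week"
    return rec

base = {"drug": "insulin", "schedule": "twice daily"}
print(personalize(base, {"family_status": "lives alone"}))
```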

Research Question. What are the main classes of personal information (non-clinical context) that affect decision making, and what are the different effects of these context classes on clinical decision-making?

Methods. To identify emerging personal-context and effect categories, we conducted preliminary qualitative studies consisting of interviews and questionnaires with different stakeholders: 8 patients, 21 physicians, and 9 other care providers (nurses, social workers and nutritionists). The interviews were based on open questions regarding scenarios of relevant patients.

Results. Table 1 below presents the main personal categories that emerged from the analysis of the questionnaires, exemplified by relevant quotes from the interviews. These personal categories apply to steady states and to intervals, and affect treatment goals, types and schedule; drug types, variety and doses; daily routine (work, hobbies, etc.); physical activities and exercise; measurement-data schedule and types; appointment schedule and referral to other specialists; and diet, during follow-up of chronic patients.

Discussion. Our initial analysis of the interviews indicates that the main impact of personal considerations is on therapeutic decisions and follow-up; the DSS should recommend treatment options and provide reminders, tips, schedules and needed evaluations. The proposed categories are important, but they may not fully capture the complexity and diversity of the personal information relevant to decision making. In the next phases we will develop an ontology of context and effect based on the elicited categories, and a methodology for personalizing clinical guidelines based on the ontology. We will also integrate the ontology of context and effect into Computer-Interpretable Guidelines (CIGs) in order to yield a personalized DSS.

Table 1. Main personal categories influencing treatment recommendations

Category                           Example quote
Age (most influences goals)        "You optimize based on life expectancy. For old man or for young man."
Habits (smoking, alcohol, etc.)    "in case of a heavy smoker I can't prescribe some medicine…"
Living area (place, time)          "does the patient have an elevator, or does he need to climb stairs?"
Functional condition               "if a patient comes with a cane or a wheelchair, we must consider…"
Family status                      "…if someone lives alone and has CVA starting with insulin…"
Occupation                         "…office work requires repeated daily activities…"
Financial status                   "A 70-year old patient takes a lot of medications. Can he afford it?"
Hobbies                            "she is a marathon runner, can we reduce the medication?"


References

Chang, I. (2009). Stakeholder Perspectives on Electronic Health Record Adoption in Taiwan. APMR, 15, 133-145.

Gamberger, D., Prcela, M., Jović, A., Šmuc, T., Parati, G., Valentini, M., Kawecka-Jaszcz, K., et al. (2006). Medical knowledge representation within the HEARTFAID platform. Studies in Health Technology and Informatics.

Garets, D., & Davis, M. (2006). Electronic medical records vs. electronic health records: Yes, there is a difference. A HIMSS Analytics White Paper.

Pagliari, C., Detmer, D., & Singleton, P. (2007, August 18). Potential of electronic personal health records. BMJ (Clinical research ed.). doi:10.1136/bmj.39279.482963.AD


ASSESSING THE READABILITY AND CULTURAL SENSITIVITY OF HEALTH INFORMATION ON THE WEB

Esther Brainin, PhD
Ruppin Academic Center
[email protected]

Efrat Neter, PhD
Ruppin Academic Center
[email protected]

Keywords: eHealth literacy, information search, cultural sensitivity, Web readability, regulation, health.

INTRODUCTION

Healthcare providers such as governmental hospitals and Health Maintenance Organizations (HMOs) are gradually moving their information services onto the Web, on the assumption that people can benefit from using the Web for health purposes. However, not all Internet health communication is adequately adapted to people's literacy level, language and cultural context. There is little understanding of how non-mainstream populations use or will use the Web. The disjunction between the readability level of health information materials and populations' eHealth literacy levels and different cultural backgrounds creates barriers to information use, even when the information can be accessed. A vast proportion of articles in the professional literature provide guidelines for assessing the readability and quality of health-related content (e.g., Eysenbach, Powell, Kuss, & Sa, 2002; Minervation, 2008; O'Grady, 2006; Shedlosky-Shoemaker, Sturm, Saleem, & Kelly, 2009; Stvilia, Mon, & Yi, 2009). However, very few site developers and content managers assess information sensitivity according to differences in eHealth literacy and cultural variation (Friedman & Hoffman-Goetz, 2007; Zahedi & Bansal, 2011). Merely translating content into different languages is not enough to tailor the information to the audience's needs.

OBJECTIVES

(1) Identify eHealth literacy and cultural sensitivity factors relevant to groups of different cultural and eHealth literacy backgrounds; (2) create usability assessment criteria for those factors; and (3) assess these criteria in the Hebrew-language portions of five websites targeting health consumers.

RESEARCH DESIGN

This research is exploratory in nature. A comprehensive literature review of policy papers and published articles containing relevant search terms was conducted. Analysis of the results identified seven theme-based categories to serve as components of a rating instrument: content, authorship, advertising, currency, scope, contact information, and legal issues. Based on the literature review and on 23 open-ended interviews conducted with health webmasters and content managers, a 15-item usability assessment tool was constructed. Each item in the constructed scale relates to one (or more) of the theme-based categories identified as pertinent to eHealth literacy and cultural sensitivity. In general, the items address the cultural sensitivity of content and pictorial materials, as well as the site's adjustment to various languages and eHealth literacy levels. Thus, to determine whether a site complies with "cultural sensitivity," an evaluator has to address the following questions: (1) "Does the site include content that might be hurtful (especially for religious groups)?" and (2) "Does the site display pictures that might upset certain ethnic groups?" After pretesting, two judges used the constructed tool to assess the content usability of five health websites representing a governmental health agency, a hospital, a large HMO, a large non-profit health association, and a private, commercial health website.
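A minimal sketch (with hypothetical ratings) of how agreement between the two judges across the 15 items could be checked for one site, using simple percent agreement and a hand-computed Cohen's kappa:

```python
import numpy as np

# Hypothetical binary ratings (1 = criterion met) over the 15 items.
judge_a = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0])
judge_b = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0])

agreement = np.mean(judge_a == judge_b)                 # observed agreement
p_e = (judge_a.mean() * judge_b.mean()
       + (1 - judge_a.mean()) * (1 - judge_b.mean()))   # chance agreement
kappa = (agreement - p_e) / (1 - p_e)                   # Cohen's kappa
print(f"agreement = {agreement:.2f}, kappa = {kappa:.2f}")
```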

FINDINGS

The sites were found to be insensitive to issues of eHealth literacy: concepts were only partially defined; only two sites contained a dictionary of terms; and four of the five sites used medical terms in their headings/labels. The latter practice could deter users at the very first stage of information search, owing to absent or partial knowledge of medical terms. Sites sometimes contained information in languages other than Hebrew: four contained information in English and three contained information in Arabic and Russian; however, the information in these languages was cursory compared to the Hebrew sections. Despite adequate representation of various sub-populations and no stereotypical depiction of any group on these sites, hurtful content was found in four of the five sites. There was no culturally adjusted content except on the governmental site. This lack of cultural sensitivity was compounded by the sites' lack of accessibility for people with visual disabilities and by the fact that none of the sites allowed adjusting the Web page to personal needs.

CONCLUSIONS

The sites reviewed did not meet most of the recommended usability criteria, thus excluding potential users. Increasing awareness of eHealth literacy and the cultural suitability of health information among webmasters and content managers may foster the adoption of new guidelines in the development and maintenance of health information sites, which in turn could reduce disparities in the use of the Internet for health purposes. Public-sector websites intended for public benefit especially need to be continuously redesigned in order to remain mindful of social inclusion.

RESEARCH LIMITATIONS AND FURTHER RESEARCH

The study is based on the analysis of five large health websites, all written in Hebrew. The findings may therefore be unique to Hebrew-language sites or to Israeli sites. More studies are needed to validate the instrument through which the evaluation of cultural sensitivity was carried out. The results of this work have immediate application in the design of websites for multicultural audiences.


REFERENCES

Eysenbach, G., Powell, J., Kuss, O., & Sa, E. R. (2002). Empirical studies assessing the quality of health information for consumers on the World Wide Web: A systematic review. Journal of the American Medical Association, 287(20), 2691-2700. Retrieved April 26, 2008, from http://jama.ama-assn.org/cgi/reprint/287/20/2691

Friedman, D.B. and Hoffman-Goetz, L. (2007). Assessing cultural sensitivity of breast cancer information for older Aboriginal women, Journal of Cancer Education, 22(2), 112-118.

Minervation (2008). LIDA: Minervation validation instrument for health care web sites. Retrieved October 19, 2008, from http://www.minervation.com/mod_product/LIDA/minervalidation.pdf

O'Grady, L. (2006). Future directions for depicting credibility in health care web sites. International Journal of Medical Informatics, 75, 58-65.

Shedlosky-Shoemaker, R., Sturm, A. C., Saleem, M., & Kelly, K. M. (2009). Tools for assessing readability and quality of health-related Web sites. Journal of Genetic Counseling, 18(1), 49-59.

Stvilia, B., Mon, L., & Yi, Y. J. (2009). A model for online consumer health information quality. Journal of the American Society for Information Science and Technology, 60(9), 1781-1791. doi:10.1002/asi.v60:9

Zahedi, F. and Bansal, G. (2011). Cultural Signifiers of Web Site Images, Journal of Management Information Systems 28(1), 147-200. DOI: 10.2753/MIS0742-1222280106


HEALTHCARE DECISION-MAKING: BETWEEN CHAOS AND CONTROL AT THE POINT OF CARE FOR THE COMPLEX-PATIENT

Weinberger Hadas
Holon Institute of Technology, Department of Instructional Systems, Holon, Israel
[email protected]

Tadmor Boaz
Beilinson Hospital, Rabin Medical Centre, Petah Tikva, Israel
[email protected]

Singer Pierre
Department of Intensive Care, Institute for Nutrition Research, Rabin Medical Centre, Beilinson Hospital, Petah Tikva, Israel
[email protected]

Keywords: healthcare, complex-patient, decision-making, medical reasoning

INTRODUCTION

Hospital physicians practicing in Intensive-Care Units (ICUs), in emergency care and in the internal wards face daily the demanding task of decision-making for the complex-patient. The current hospital information system architecture, however, does not support this (Kohli & Piontek, 2010). Classical decision-support tools and evidence-based medicine (EBM) lack support and knowledge for these, usually unprecedented, situations. Consequently, balancing the trade-offs among patient safety, quality of care, value-based healthcare and considerations such as ethical consistency is a major challenge encountered daily by physicians. For example, medical decisions are sometimes based on on-site physician judgment, where the absence of EBM makes it difficult to explain, or else to set aside, other physicians' advice, domain experts' suggestions or family requests (Weinberger, Tadmor & Singer, 2012). Such situations can undermine decision-making consistency, and occasionally the delivery of value-based healthcare could be questioned (Weinberger & Tadmor, 2011). This paper discusses the introduction of an Interprofessional Decision-Making Model, IDAM, into hospital practice, based on preliminary findings from our experience in the ICU at the Beilinson tertiary hospital, a university-affiliated hospital in Israel. The ultimate goal of this effort is to advance value-based healthcare (Porter and Teisberg, 2006) by supporting leadership at the point of care (Lee, 2010). Decision-making for the complex-patient is an intensive task; a generative dance between chaos and control is required for the patient's well-being. The challenge is two-faceted: on the one hand stands the clinical question, and on the other the interprofessional (e.g., medical, social, legal, ethical) debate. This further challenges the efficiency and effectiveness of decision-making processes. An example in this context is the uncertainty, ambiguity and elusiveness of complex-patient cases and the resulting efforts towards remedy, on the one hand, and the need to avoid futile treatment, on the other. The relationship between individual occurrences of the complex-patient and multimorbidity has not yet been investigated in a way that provides statistical data on the distribution of these occurrences among distinct groups; for this reason we do not restrict the discussion to a specific population. The conceptualization of IDAM follows theories and models from the literature on issues such as leadership in healthcare (Lee, 2010), medical decision-making (Peleg & Tu, 2006; Harper, 2005), complexity and group collaboration (Chiasson et al., 2007), interprofessional decision-making (Legare et al., 2010), design research and modelling (Österle et al., 2010; Soffer & Hadar, 2007), context-aware systems (Chen & Chen, 2010) and case-based reasoning (Bichindaritz & Marling, 2006). Striving to balance rigor and relevance, theoretical principles from the literature were embedded in the design in order to meet physicians' practice and thereby encourage usefulness. Table 1 summarizes the manner in which these issues are addressed in this research.

Table 1. IDAM's areas of innovation

Theme                                Contribution
Leadership in healthcare             Structured collaboration; encouraging investigation
Decision-making                      Case-based reasoning
Uncertainty                          A model and a methodology
Interprofessional decision-making    Process-centered approach
Balancing rigor & relevance          Multi-perspective modelling
Context-awareness                    Ontology architecture

This model could prove useful for hospital physicians seeking support in untangling complex-patient dilemmas. For hospital management and IT specialists, the model can serve the development of evidence-based knowledge as well as the enhancement of prevailing IT, such as the EPR. Rules practiced as part of this model could be considered for implementation in innovative reasoning architectures, such as IBM's Watson.

OBJECTIVES

This research is motivated by several objectives concerning three development levels: 1) conceptual, 2) design, and 3) application and dissemination.

• Conceptual: this research aims to apply innovative thinking to practice, while maintaining the balance between rigor and relevance in the design of a decision-support framework intended for the complex patient.

• Design: this research focuses on the identification of reasoning strategies for decision-making for the complex-patient, based on a triangular Event-Task-Resources ontology architecture suited to, e.g., a CBR application.

• Application and dissemination: following the model's application, we aim at: a) extending its adoption, and b) encouraging the construction of an evidence-based knowledge base to support the retrieval and negotiation of similar problems, knowledge sharing and reuse. This could also guide an innovative medical-education paradigm.

METHOD

This research utilizes a combined action research and design research methodology as part of a qualitative study. Since the design research aims to result in an IT artifact, it is of utmost importance to involve users already at the design stage (Österle et al., 2010). The investigation of the complex-patient phenomenon is rooted in our daily experience. Complementing a theoretical investigation, a randomized study provided insights and examples of complex-patients. Action research methods were incorporated in the analysis stage, aiming at the identification of decision entities – e.g., partners and decision stages – and the relationships among them. For the conceptualization of the model, a series of interviews and brainstorming sessions were held with management, Intensive-Care Unit (ICU) and internal-medicine doctors. A randomized study was run as part of decision-making processes for 21 complex-patients in the ICU, over a period of 4 weeks, for the purpose of: a) assessing the model's feasibility, b) validating the model's ontological completeness, c) evaluating the coherence of the suggested decision rules, and d) assessing its usefulness. Consequently, a refined framework was defined to be used in the following stages of this research.

INTERPROFESSIONAL DECISION MAKING

In this section we briefly describe the interprofessional decision-making model, IDAM, referencing its architecture and its application. The model is based on our understanding of the bi-dimensional view of interprofessional decision-making: acknowledging decision-domain perspectives (e.g., medical, ethical, legal and social) on the one hand, and decision elements and rules on the other (Weinberger & Tadmor, 2011). Three intertwined ontological perspectives guide this model, referencing a) agents (e.g., decision partners – human and IS), b) tasks, and c) resources (Weinberger, Tadmor & Singer, 2012). Currently, adaptation is practiced in the internal wards. Concurrently, measures are being taken towards the development of two IT products: a) scaled integration of the decision-making model's attributes into hospital information systems – e.g., the Electronic Patient Record (EPR) – and b) the design of an AI-based IT artefact using a CBR architecture. Figure 1 illustrates the conceived model: an inference engine utilizing case-based reasoning and association rules in the context of the ontology architecture and evidence-based knowledge.
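A minimal sketch (our own illustration, not IDAM's inference engine) of case-based retrieval over past complex-patient cases described along the three ontological perspectives; all attribute values are hypothetical:

```python
def similarity(query: dict, case: dict) -> float:
    """Fraction of matching attribute values across the shared keys."""
    keys = query.keys() & case.keys()
    return sum(query[k] == case[k] for k in keys) / max(len(keys), 1)

# Toy case base: past decisions indexed by agents, task and resources.
case_base = [
    {"agents": "ICU team + family", "task": "withdraw/continue treatment",
     "resources": "no EBM available", "decision": "convene ethics consult"},
    {"agents": "ICU team", "task": "nutrition route",
     "resources": "guideline available", "decision": "follow guideline"},
]

query = {"agents": "ICU team + family", "task": "withdraw/continue treatment",
         "resources": "no EBM available"}
best = max(case_base, key=lambda c: similarity(query, c))
print(best["decision"])  # the most similar past case suggests an action
```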

Figure 1. IDAM case-based reasoning architecture

RESULTS

In this section, preliminary results obtained through discussions with participating physicians are briefly discussed in light of the research questions.

• Model feasibility – while the model can serve as a roadmap guiding decision-making, it should be considered in the context of the decision setting. For example, its elements could be incorporated into the EPR even before a dedicated information system is designed.

• Ontological completeness – the model addresses a spectrum of interprofessional interactions alongside clinical-medical decision aspects. However, for the model to reach beyond the checklist level, important as that is, further effort is required to identify and engineer decision rules; these could enhance the model's predictive capabilities.

• Coherence of the suggested decision rules – adhering to the relationships between decision elements brings forward the network of considerations involved in decision-making for the complex-patient, as well as the multiple combinations of these elements and their attributes.

All in all, the usefulness of this model might differ between senior and novice physicians. For novices, the model could be used as part of training, on the one hand, and to help resolve complexity at the point of care, on the other. Moreover, the model can be adapted to indicate decision paths for pre-defined problems in this context. For senior physicians, the model can serve as a valuation tool, supporting the decision-analysis process towards, e.g., resource identification and partner consultation.

CONCLUSION

The research endeavour described here indicates the promise and the ability to deliver solutions to fields in transition, such as the healthcare domain. From the methodological perspective, the method practiced here advocates participatory research, particularly when striving to balance rigor and relevance. From the technological perspective, this research advocates the identification and provision of advanced contextual solutions. Any IS solution is a communication means, just as any healthcare problem, and complex-patients in particular, presents a communication need. The decision-making challenge discussed here – transforming decision-making into an interprofessional process while facing a scarcity of evidence – can also trigger organizational transformation. It is in this context that the competitive advantage of information systems for organizational performance is relevant and, in this particular case, welcome. The importance of cooperation from the organizational managerial decision-making perspective cannot be overstated. As an IT product, the proposed model can pave the way for hospitals to acquire competitive advantage. For individuals, the application of this model might encourage the delivery of value-based healthcare.

REFERENCES

Bichindaritz, I., & Marling, C. (2006). Case-based reasoning in the health sciences: What's next? Artificial Intelligence in Medicine, 36, 127-135.

Chen, N., & Chen, A. (2010). Integrating context-aware computing in decision support systems. Proceedings of the International MultiConference of Engineers and Computer Scientists, 1, IMECS, Hong Kong.

Chiasson, M., Reddy, M., Kaplan, B., & Davidson, E. (2007). Expanding multi-disciplinary approaches to healthcare information technologies: What does information systems offer medical informatics? International Journal of Medical Informatics, 71, S89-S97.

Harper, P. (2005). A review and comparison of classification algorithms for medical decision-making. Health Policy, 71, 315-331.

Kohli, R., & Piontek, F. (2010). DSS in healthcare: Advancements and opportunities. In F. Burstein & C. W. Holsapple (Eds.), Handbook on Decision Support Systems, 483-499.

Lee, T. (2010). Turning doctors into leaders. Harvard Business Review, April 2010.

Legare, F., Ratte, S., Stacey, D., Kryworuchko, J., Gravel, K., Graham, K., & Turcotte, S. (2010). Interventions for improving the adoption of shared decision making by healthcare professionals. The Cochrane Library, 5.

Österle, H., et al. (2010). Memorandum on design-oriented information systems research. European Journal of Information Systems, 1-4.

Peleg, M., & Tu, S. (2006). Decision support, knowledge representation and management in medicine (invited paper). In R. Haux & C. Kulikowski (Eds.), International Medical Informatics Association (IMIA) Yearbook of Medical Informatics: Assessing Information Technologies for Health. Schattauer GmbH, Germany.

Porter, M. E., & Teisberg, E. O. (2006). Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press, Boston, MA.


Soffer, P., & Hadar, I. (2007). Applying ontology-based rules to conceptual modeling: a reflection on modeling decision making. European Journal of Information Systems, 16, 599–611.

Weinberger, H., & Tadmor, B. (2011). Bridging perspectives: Interprofessional decision support framework. In P. Bath, T. Mettler, D. Raptis, & B. Sen (Eds.), Proceedings of the 15th International Symposium on Health Information Management Research (ISHIMR 2011), Zurich, Switzerland, 112-135.

Weinberger, H., Tadmor, B., & Singer, P. (2012). A model for untangling complexity with respect to multimorbidity and interprofessionality at the point of care. Technical paper, Beilinson Hospital, Israel.


SUPPLIER RANKING BY MULTI-ALTERNATIVE PROPOSAL ANALYSIS FOR AGILE PROJECTS∗

Arie Ben-David
Faculty of Technology Management, Holon Institute of Technology
[email protected]

Roy Gelbard
Information Systems Program, Graduate School of Business Administration, Bar-Ilan University
[email protected]

Irena Milstein
Faculty of Technology Management, Holon Institute of Technology
[email protected]

Keywords: RFP, proposal analysis, agile, ROC curve, cost-benefit curve.

INTRODUCTION

Request for Proposals (RFP) and proposal analysis are significant milestones in Information Technology (IT) projects, regardless of the management and development approaches implemented. In current tender procedures, suppliers are required to submit their proposals according to predefined specifications. These specifications explicitly ask bidders to state system requirements, timetables and deliverables. The latter usually define intermediate milestones for system sub-products. The participants present their proposals in detail, covering both quality-related items (i.e., the functional aspects) and cost-related issues (i.e., budgetary issues) in what is supposed to be a clear and unequivocal format. Despite the clear definitions in both the RFPs and the suppliers' proposals, practical experience shows that functional specifications and deliverable schedules may not be rigid. Academic studies on IT project success, such as those discussed in Moløkken and Jørgensen (2003), pinpoint significant discrepancies in many projects in terms of functionality, schedule and budget. These findings are supported by analyses of commercial firms such as the Standish Group (Standish Group, 2004), as well as theoretical studies (Madpat, 2005; Agarwal and Rathod, 2006).

OBJECTIVES

The Agile approaches deal not only with possible changes in requirements during the project, but also with flexibility in project management (Olsson, 2006). The present study expands the Agile framework to the tender-RFP stage, and suggests a method for bid evaluation in which each bid can be composed of several contract formats; each bid is in fact a Multi-Alternative Proposal. Thus, the current study does not deal with the various values and beliefs regarding the Agile forms and concepts. Rather, it focuses on a mechanism for rational, optimal and objective evaluation of RFP proposals, in which any proposal can contain a wide range of different prospects/alternatives, and suggests multi-alternative proposal analysis. In multi-alternative proposal analysis, each potential supplier can submit various proposals. This enables each bidder to present a variety of alternatives, thus expressing his or her competitive advantage; an option given by one supplier is not necessarily offered by all of them. The client's task is to compare and evaluate the suppliers' multi-alternative proposals across a variety of options, and to select the supplier who tailors the contractual 'suit' that best fits the client's objectives, as a function of the situations that may occur during the project's life cycle. The proposed method has added value when at least one of the suppliers gives a multi-alternative proposal - a situation that is very common in the current dynamic IT market. In this case we need to rank the various suppliers and not individual proposals, since one cannot anticipate with absolute certainty which proposal will actually

∗ Based on A. Ben-David, R. Gelbard, I. Milstein, Supplier ranking by multi-alternative proposal analysis for agile projects, International Journal of Project Management, in press (2012).


emerge as preferable as the project progresses (sometimes a combination of several proposals from a single supplier is the best choice). However, when every supplier submits a single-alternative proposal, our method has no added value over existing ones.

METHOD

The current study proposes a new method for objective multi-alternative proposal analysis. The approach taken here combines the concept of Multi-Objective Utility Functions suggested by Keeney and Raiffa (Keeney, 1974; Keeney and Raiffa, 1993) with an index used in signal processing, machine learning and data mining for ranking the performance of classifiers and other kinds of decision-making automata: the Area Under the Receiver Operating Characteristic Curve, commonly known as the AUC index. The concepts are implemented in a visualized form, enabling the evaluation of multi-alternative proposals and the ranking of the suppliers. The proposed method is an extension of the graphical cost-benefit approach developed by Shoval and Lugasi (1988). However, in contrast to their approach, which only ranks single proposals, our method can rank suppliers in multi-alternative proposal situations (e.g., select the 'best' one). Each alternative typically has a unique cost and benefit. Similar to Shoval and Lugasi (1988), our method does not deal with benefit calculation and/or added-value evaluation using different methods, such as the Additive Weight method, AHP, etc. (see, for example, Buss, 1983; Lucas and Moore, 1976; Saaty, 1980). It works on top of these evaluation methods and is not designed to replace them. We innovate by suggesting a simple index, called the Area Under the Cost-Benefit Curve (AUCB), for ranking the various suppliers.
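To make the index concrete, here is a minimal sketch of one plausible way to compute an AUCB-like score, under the assumption that a supplier's curve is the best benefit attainable at or below each cost level, integrated over a cost range shared by all suppliers; the authors' exact construction is given in the cited article:

```python
def aucb(alternatives, cost_range):
    """Area Under the Cost-Benefit curve for one supplier.
    alternatives: (cost, benefit) pairs, one per proposal alternative.
    cost_range: (min, max) cost window shared by all suppliers, so areas compare."""
    lo, hi = cost_range
    best, curve = 0.0, []
    for cost, benefit in sorted(alternatives):   # ascending cost
        best = max(best, benefit)                # best benefit at or below this cost
        curve.append((cost, best))
    area, prev_c, prev_b = 0.0, lo, 0.0          # integrate the resulting step curve
    for cost, b in curve:
        c = min(max(cost, lo), hi)
        area += (c - prev_c) * prev_b
        prev_c, prev_b = c, b
    area += (hi - prev_c) * prev_b
    return area / (hi - lo)                      # normalize by the width of the window

# Two hypothetical suppliers, each offering several (cost, benefit) alternatives:
suppliers = {"A": [(100, 60), (140, 80)],
             "B": [(100, 55), (120, 75), (160, 90)]}
print(sorted(suppliers, key=lambda s: aucb(suppliers[s], (80, 180)), reverse=True))
```

A supplier whose alternatives keep the benefit high across a wide range of possible budgets accumulates more area, which is exactly why such an index rewards an attractive set of alternatives rather than a single strong bid.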

RESULTS AND CONCLUSIONS

The AUCB enables clients to choose not only the best proposal (current methods already do this very well), but, more importantly, to identify the supplier who offers the most attractive set of alternatives for the various situations that may occur during the project's life cycle. Specifically, the method ranks the various suppliers without requiring the alternatives to be identical, and is free of subjectivity biases. The proposed model anchors the flexibility of Agile concepts in the RFP stage (tender and contract), thus creating a better fit between formal contracts and actual reality. The method combines concepts from the Agile approach with notions applied in signal processing and data mining. The result is a simple and intuitive method for supplier ranking in which each supplier can present several (not necessarily identical) alternatives, and the multi-alternative proposal analysis introduced here ranks the suppliers at the IT project's RFP phase.

REFERENCES

Agarwal, N. & Rathod, U. (2006). Defining 'success' for software projects: An exploratory revelation. International Journal of Project Management, 24(4), 358-370.

Buss, M. D. J. (1983). How to rank computer projects. Harvard Business Review, 61, 118-125.

Keeney, R. (1974). Multiplicative utility functions. Operations Research, 22, 22-34.

Keeney, R. & Raiffa, H. (1993). Decisions with multiple objectives. Cambridge University Press.

Lucas, H. C. & Moore, J. R. (1976). A multiple-criterion scoring approach to information system project selection. INFOR, 14, 1-12.

Madpat, S. (2005). Bridging organizational strategy & projects - an OPM3 insider's perspective. PMI Journal, 5(10), 16-20.

Moløkken, K. & Jørgensen, M. (2003). A review of surveys on software effort estimation. Proceedings of the International Symposium on Empirical Software Engineering, 223-231.

Olsson, N. O. E. (2006). Management of flexibility in projects. International Journal of Project Management, 24, 66–74.

Saaty, T. L. (1980). The analytic hierarchy process. McGraw-Hill, NY.


Shoval, P. & Lugasi, Y. (1988). Computer systems selection: the graphical cost-benefit approach. Information and Management, 15, 163-172.

Standish Group (2004). The chaos report. Boston, MA: The Standish Group International.


Software Architecture Process in Agile Development Methodologies

Sofia Sherman
Department of Information Systems, University of Haifa
Haifa, Israel
[email protected]

Irit Hadar
Department of Information Systems, University of Haifa
Haifa, Israel
[email protected]

Keywords: agile development, software architecture process, documentation.

INTRODUCTION

The use of agile software development methodologies has increased significantly over the past decade. This growing use of agile methodologies, and the need to adjust them to the development of enterprise products and complex systems, has created a tension between the requirement of minimalism in agile methods and the need for well-defined up-front software architecture. The main characteristics of agile development are short releases, flexibility, and minimal documentation (Ambler, 2002). At first glance this seems essentially contradictory to architecture, or as Booch (2010) puts it: agile architecture is an oxymoron. Blair et al. (2010) declare that "The architect's role in agile development remains at best ill-defined and at worst simply ignored" (ibid, p. 26). Boehm (2002) claims that the values of agile development may be interpreted in many ways, some of which are problematic because they ignore architectural features that do not support the current version of the system. He claims that this approach works well when future requirements are unpredictable, but may cause the loss of valuable architectural support. On the other hand, Nord and Tomayko (2006) state that software architecture is part of product quality and is not tied to a particular process, technology, culture, or tool, and Abrahamsson et al. (2010) strongly believe that a healthy focus on architecture is not antithetical to any agile process. The objective of this research is to define the software architecture process in agile development for improving quality, shortening time-to-market, and reducing the cost of complex systems.

RESEARCH METHODOLOGY

This research applies a combination of empirical study and design research, and is conducted in three stages:

1. Identifying the current state of the architecture process in agile methodologies. This was studied based on both the literature and empirical evidence collected via interviews and surveys, and resulted in an initial list of the current architecture process phases and pain points.

2. Improving the architecture process. Based on the outcomes of the first stage, a new architecture process was defined and appropriate tools were developed. This phase was done iteratively, applying the developed process and tools and refining them based on feedback from the field.

3. Implementing and evaluating the proposed solution. The first case study of the solution's implementation and evaluation takes place at a large international IT service provider. The research population includes software architects of all levels and architecture reviewers.


THE PROPOSED SOLUTION

Agile Architecture Process

The main principle of the proposed architecture process is to align with the agile requirements of minimalism, flexibility to changes, and collaboration. To this end, a knowledge-based and constantly reviewed architecture process was developed. The main components of the proposed solution are an abstract specification document and a corresponding tool.

Architecture Abstract Specification (AAS)

The AAS is an architecture document that includes the most relevant and up-to-date information regarding the proposed architecture for the current release, and the current status of the product's existing architecture. The shaping principle of this document is to keep it short and concise. The document consists of four sections: product overview, goals for the upcoming release, an overview of the architecture as a whole interwoven with the release changes, and non-functional requirements. The embodied architecture blueprint is presented via four conceptual layers: system components, common and cross-concern components, external integration components, and the functional components layer. The document captures relevant snapshots of blueprints and short explanations of the components involved. For the functional components layer, for example, the snapshot of the functional architecture layer extracted from the architecture model is captured as a blueprint and added to the document. The relevant components are also described in text, following the blueprint sections; the document further includes a list of acronyms used and links to related documents. To keep it short and focused, we made the document template-oriented, with word-count limitations: architects insert relevant data into predefined fields rather than generating unstructured free-form text. The AAS facilitates a common language for architects. It simplifies and assists communication between architects, and streamlines information so that reviewers can concentrate on the content and the design decisions captured in the blueprint and text within the document, rather than on lengthy narrative statements that blur the design intent and the product's architectural structure. Reviewing long architecture documents of 100-150 pages is not realistic, especially for a geographically dispersed review team in today's enterprise environment. In particular, with agile approaches, the rapid and flexible nature of the changes prohibits the timely construction of such documents. Reviewing a short and focused document leads to constructive and targeted comments from the reviewers. Reviewers can easily follow the changes made in the architecture in different releases, and assess the evolution of the architecture over time.
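As an illustration of the template-oriented, word-count-limited design, an AAS-like document could be enforced roughly as follows; the section names and limits are invented for this sketch, since the firm's actual template is not public:

```python
from dataclasses import dataclass

# Hypothetical word-count limits per AAS section.
WORD_LIMITS = {
    "product_overview": 200,
    "release_goals": 150,
    "architecture_overview": 400,
    "non_functional_requirements": 150,
}

@dataclass
class AASDocument:
    """Architecture Abstract Specification: fixed sections, each capped in length."""
    product_overview: str = ""
    release_goals: str = ""
    architecture_overview: str = ""
    non_functional_requirements: str = ""

    def oversized_sections(self) -> list:
        """Return the sections that exceed their word-count limit."""
        return [name for name, limit in WORD_LIMITS.items()
                if len(getattr(self, name).split()) > limit]

doc = AASDocument(product_overview="Billing subsystem for enterprise customers.")
assert doc.oversized_sections() == []   # all sections within their limits
```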

Architecture Abstract Specification Tool

The main purpose of the AAS tool is to help architects organize relevant information while they create the architecture blueprints, thus reducing the overhead imposed by documentation needs. The tool provides a form-to-fill and is bundled with an architecture modeling tool. Both facets share the same database, which allows the information in the AAS tool to be updated as necessary, either via the forms or directly via the modeling tool's GUI. This reduces the information the architect is required to enter into the tool, captures the knowledge and design decisions during the architecture design phase, and enables their reuse or restructuring later on. For example, when a new component is added to the blueprint in the modeling tool, the architect inserts all required information within the modeling tool, and the relevant data is created by associated queries as part of the AAS tool.
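A minimal sketch of the shared-database idea, assuming a simple component table written by the modeling tool and read by the AAS tool's associated queries; the table and column names are invented for illustration:

```python
import sqlite3

# Toy shared database: the modeling tool writes components, the AAS tool reads them.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE component (name TEXT, layer TEXT, description TEXT)")

# The architect adds a component via the modeling tool's GUI...
db.execute("INSERT INTO component VALUES (?, ?, ?)",
           ("PaymentGateway", "external-integration", "Routes card transactions"))

# ...and the AAS tool pulls it into the matching document section automatically.
rows = db.execute("SELECT name, description FROM component WHERE layer = ?",
                  ("external-integration",)).fetchall()
print("\n".join(f"{name}: {desc}" for name, desc in rows))
```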

EVALUATION OF THE PROPOSED SOLUTION

To date, twenty-seven projects within the firm have used the AAS document and tool, and submitted their architecture for review:


• In twelve projects, no review meeting was required, because the reviewers understood and approved the architecture as documented.

• Fifteen projects were discussed in review meetings. According to reviewers' interviews:
  o It takes less time to prepare for a review when reviewing an AAS, and less time to perform it.
  o The reviewers understood the architecture better than in previous projects, where the AAS was not used.
• Architects reported that they invest less time in architecture development when using the AAS.

CONCLUSION

The main concepts of agile development are flexibility, minimalism and collaboration. These are achieved when using the AAS tool to create a short and focused architecture document. The proposed document and tool are nimble and flexible to changes, bundled and integrated with an architecture modeling tool. This approach supports minimalism in agile development, facilitates a common language, supports collaboration between stakeholders, and fosters focused incremental architecture evolution.

REFERENCES

Abrahamsson, P., Babar, A.M., Kruchten, P. Agility and Architecture: Can They Coexist? IEEE Software, 27(2), 2010, 16-22.

Ambler, S.W. Lessons in Agility from Internet-Based Development. IEEE Software 19(2), 2002, 66-73.

Blair, S., Watt, R., Cull, T. Responsibility-Driven Architecture. IEEE Software 27(2), 2010, 26-32.

Boehm, B. Get Ready for Agile Methods, with Care. Computer 35(1), 2002, 64-69.

Booch, G. An Architectural Oxymoron. IEEE Software, 27(5), 2010, 95-96.

Nord R.L., Tomayko, J.E. Software Architecture-Centric Methods and Agile Development. IEEE Software 23(2), 2006, 47-53.


PROCESS CHOREOGRAPHY LANGUAGES EVALUATION VIA ONTOLOGY

Ofir Nimitz
IS Department, University of Haifa, Israel
[email protected]

Pnina Soffer
IS Department, University of Haifa, Israel
[email protected]

Keywords: Process Choreography, Ontology, Evaluation, BPMN, Interoperability.

INTRODUCTION

In the current business environment, collaborative business processes serve a variety of organizations for accomplishing their business goals. Such processes require an integration infrastructure, which, in turn, should build on a profound model encompassing the details of the collaborative process. A proper model gives the stakeholders a clear-cut understanding of the whole process and assists in discovering possible failures, optimizing and documenting the integration processes. Process choreography is a common term used for such models. Different choreography languages exist, each having advantages and drawbacks. These languages need to be evaluated for selecting a specific one and for the sake of further improvements. This paper proposes an evaluation framework which extends an ontology-based Inter-Organizational Conceptual Model using the more general ontological BWW model. The practicality and the necessity of the proposed framework are analyzed and reaffirmed through a case study.

OBJECTIVES

The paper has two objectives. First, to propose an evaluation framework for choreography languages based on theoretical models. Second, to examine the practicality and the necessity of the proposed theoretical framework via a case study. An effective evaluation can help in developing enhanced choreography languages and may assist organizations to choose a suitable language for their requirements.

OVERVIEW

This research focuses on the evaluation of process choreography languages. Process choreographies are a means to represent and manage cross-organizational interaction. A choreography defines the flow of a collaboration between the parties involved, to the extent that it describes when tasks and interactions may happen. Stakeholders who want a full overview of a collaboration from their perspective can use a choreography for this purpose (Dijkman & Dumas, 2004). Two main approaches exist for choreography modeling: interconnection models and interaction models. An interconnection model consists of communication activities and behavioral dependencies between these activities, while interaction models contain interactions and global behavioral dependencies between them (Decker, Kopp, Leymann, & Weske, 2007). Interconnected models tend to be more expressive, while interaction models tend to be more readable. The case study reported here was modeled via the standard Business Process Modeling Notation (BPMN) version 2.0 using two approaches: an interaction model and a combination of interaction and interconnected models. There are two approaches for evaluating choreography languages: evaluation using required characteristics/patterns (Decker, Kopp, Leymann, & Weske, 2007), and evaluation using ontology (Green, Rosemann, & Indulska, 2007). The first approach provides a check-list of properties, derived from common cases, that choreography languages should possess, e.g., one-to-many integration, exception handling, etc. In the second method, the requirements are theoretically derived from an ontology. Ontology, a well-established theoretical domain within philosophy,


intends to represent the real world and is used for evaluating model completeness. This paper proposes an ontology-based evaluation framework. The proposed framework relies on two models: the Bunge-Wand-Weber (BWW) model (Wand & Weber, 1990), a general ontological framework not directly related to integration processes that is used for analyzing the strengths and weaknesses of modeling languages; and the Inter-Organizational Conceptual Model (Ghattas & Soffer, 2007), which depicts the main ontological concepts involved in an inter-organizational process and is extended here based on the BWW model. The proposed framework includes three categories of criteria: basis elements, runtime, and process validity. Every criterion is based on one of the ontology models. In this paper we use a subset of these requirements (six criteria) and demonstrate their applicability to the case study. These include: the system structure construct (basis elements criterion); expected runtime behavior (runtime criterion); and process governance, private validity, stable state, and history constructs (process validity criterion).
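One simple way to operationalize such a framework is as an evaluation matrix that maps each criterion to its category and records, per model, whether the criterion is fulfilled. The sketch below is ours, not part of the paper's tooling; the only fulfillment values filled in are those discussed in the Results, the rest are left open:

```python
# The six criteria used in this paper, mapped to their categories.
CRITERIA = {
    "system structure construct": "basis elements",
    "expected runtime behavior": "runtime",
    "process governance": "process validity",
    "private validity": "process validity",
    "stable state": "process validity",
    "history construct": "process validity",
}

# fulfilled[model][criterion] -> True/False; unassessed criteria are omitted.
fulfilled = {
    "interaction": {"expected runtime behavior": False},
    "combined":    {"expected runtime behavior": True},
}

def report(model: str) -> None:
    """Print the evaluation row for one choreography model."""
    for criterion, category in CRITERIA.items():
        status = fulfilled.get(model, {}).get(criterion)
        label = {True: "fulfilled", False: "not fulfilled", None: "unassessed"}[status]
        print(f"{model}: {criterion} ({category}) -> {label}")

report("interaction")
```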

METHODOLOGY

The research methodology is a case study, addressing the integration between a supplier and its customers. We use two choreography modeling approaches to represent the collaborative process, and then analyze the extent to which each of the resulting models satisfies the theoretically developed requirements. Through this evaluation process we demonstrate how the requirements can be used to evaluate integration models, thus demonstrating their practicality along with their necessity.

PRODUCT PRICE UPDATE: A CASE STUDY

The case study deals with a process in which an Israeli supplier updates a product price. The supplier provides goods and services to hospitals and medical institutions. A product price is changed by the supplier (recorded in its ERP system), and a message is then passed from the ERP system to the CRM system, where customers' details are stored. If several conditions are fulfilled, the message is passed on to the customers via a third-party organization. The customers, according to their business rules, update the price in their ERP systems and send an approval or an error message.

RESULTS

By means of the case study, we were able to examine the ontological criteria in real scenarios. The case study was modeled twice: Figure 1 shows the BPMN 2.0 interaction model, and Figure 2 the combined BPMN 2.0 interaction and interconnected model. Table 1 provides the analysis results for the models; each ontology criterion's fulfillment and relevancy were examined in both. For example, the expected-runtime-behavior criterion was not fulfilled in the interaction model: an update message might not be sent due to the supplier's business rules, and this situation is not expressed in the model. On the other hand, this criterion is clearly fulfilled in the combined model. The case study also demonstrates this criterion's high relevancy - without knowledge of the expected runtime behavior, a participant may wait for the update futilely. We also found that the system structure construct was neither relevant nor necessary in the models - there is a clear demarcation among the participants without this construct. The other criteria were examined in the same way.


Figure 1. Product Price Update – Interaction Model

Figure 2. Product Price Update – Combined Interaction and Interconnected Models



Table 1. Ontology Evaluation Requirements - Examination Results

CONCLUSIONS, LIMITATIONS AND FUTURE WORK

In this study we have shown that the theoretical ontology-based requirements are practical, i.e., they could be used for evaluating the choreography models, and, except for the system structure construct, they are also necessary. The case study made it noticeable that the majority of the requirements are important, and some of them highly important (e.g., expected runtime behavior). The contribution of this study revolves around practical requirements that can be used for the evaluation of choreography languages; such evaluation can lead, in turn, to enhanced languages and to effective decisions in choosing a suitable modeling language for different organizations. The research methodology, i.e., the case study, has its own limitations, mainly low external validity. Yet the case study is rich in technological and business aspects and therefore enables a significant qualitative examination. In future work, the ontology requirements will be examined on another case study, more languages will be examined, and, if required, one of the languages will be enhanced according to the evaluation framework.


REFERENCES

Decker, G., Kopp, O., Leymann, F., & Weske, M. (2007). BPEL4Chor: Extending BPEL for modeling choreographies. Proceedings of the IEEE International Conference on Web Services (ICWS 2007), Salt Lake City, UT.

Dijkman, R., & Dumas, M. (2004). Service-oriented design: A multi-viewpoint approach. International Journal of Cooperative Information Systems, 337-368.

Ghattas, J., & Soffer, P. (2007). Evaluation of inter-organizational business process solutions: A conceptual model-based approach. Information Systems Frontiers.

Green, P., Rosemann, M., & Indulska, M. (2007). Enhancing Interoperability and Web Services Standards Through Ontological Analysis. In ONTOLOGIES (pp. 585-606).

Wand, Y., & Weber, R. (1990). An Ontological Model of an Information System. Software Engineering, IEEE Transactions on, 16(11), 1282-1292.


COMPARING ONTOLOGY LANGUAGES FOR MODEL REPRESENTATION OF THE SITBAC ONTOLOGY VIA A CONTROLLED EXPERIMENT - RESEARCH DESIGN

Dizza Beimel
Department of Industrial Engineering and Management, Ruppin Academic Center, Emek Hefer, Israel
[email protected]

Mor Peleg
Department of Information Systems, University of Haifa, Israel
[email protected]

Keywords: Ontology Languages, Knowledge representation, OWL, Frames

INTRODUCTION

In recent years we have seen increased use of ontology languages, such as Frames [Minsky, 1974] and the Web Ontology Language (OWL) [Web Ontology Research Group, 2004], for representing knowledge. Knowledge representation has become necessary in various domains; one of them is access control. In the context of access control, we recently presented a conceptual model called Situation-Based Access Control (SitBAC for short) [Peleg, Beimel, Dori, & Denekamp, 2008]. SitBAC enables formal representation of context-based Situations of data access in the healthcare domain. The Situations describe which tasks a data requestor can carry out, with respect to various contextual factors (e.g., the location of the data requestor and the time of access). We created a Frames-based implementation of the SitBAC conceptual model and evaluated its usability [Beimel & Peleg, 2010]. However, we cannot use this version, as it does not support reasoning. Reasoning is a necessary part of a complete framework that aims to provide the following services: (1) enabling the representation of data-access Situations, and (2) delivering a real-time mechanism that evaluates new incoming data-access requests. Accordingly, we created an additional implementation of SitBAC, this time based on OWL [Web Ontology Research Group, 2004]. The OWL-based implementation can fulfill both services mentioned above and has higher expressiveness than the Frames-based version. In particular, we created the SitBAC access-control framework [Beimel & Peleg, 2011]. Within the SitBAC framework, a set of data-access scenarios can be represented as OWL-based Situation classes, thereby establishing the organization's data-access policy. The framework includes an inference method, which evaluates an incoming data-access request (represented as an individual of an OWL-based Situation class) against the data-access policy to produce an 'approved/denied' response. The inference method uses a Description Logics (DL) reasoner during the inference process, for knowledge classification and for real-time realization of the incoming data-access request as a member of an existing Situation class, in order to infer the appropriate response. However, the cost lies in usability: based on our experience, we believe that OWL is harder to use than Frames for representing data-access Situations. For that purpose, we developed guidelines that ease the use of our OWL-based version when it comes to maintaining and extending the set of data-access Situations. So far, we have the Frames-based version on one hand, which has high usability but is restricted in its reasoning ability, and the OWL-based version on the other hand, which is powerful and supports reasoning but is probably poorer in usability. Based on the above, an interesting question arises: can we close the usability gap of the OWL-based version by providing users with high-level and detailed guidelines and good practices? Or,


should we perhaps let users construct their data-access Situations in the Frames-based version and develop a translator into OWL? Our research is motivated by this question. In particular, we present in this abstract the design of an experimental comparison between the two ontology languages, Frames and OWL, in the context of understanding and synthesizing access-control Situations, where the subjects were trained and provided with high-level and detailed guidelines.

OBJECTIVES

We present a comparison between the two ontology languages (Frames vs. OWL) that we used for representing knowledge of access-control Situations. The comparison was made in the context of understanding and synthesizing SitBAC Situations. In particular, the comparison criteria are: (1) understanding (basic and complex) of given data-access Situations, (2) synthesis of new Situations based on given knowledge, and (3) synthesis of new Situations based on newly created knowledge.

METHODS

The conceptual SitBAC model was represented in two ontology languages: (1) Frames and (2) OWL [Web Ontology Research Group, 2004]. Both languages are supported by Protégé-2000 [Noy & McGuinness, 2001], an open-source ontology editor and knowledge-modeling tool. Thus, the environment was identical for the two test groups.


Figure 1. A screen-shot of a Situation created in a Frames-based ontology language via Protégé-2000.

Figures 1 and 2 illustrate Situations created in each of the ontology languages. Both Situations represent the following scenario: "A family doctor is allowed to perform any action on the patient's EHR, even when the patient is an inpatient." We carried out the experiment as an exam in a "Knowledge Representation and Decision Support" course, an advanced elective in the Management Information Systems (MIS) program at the University of Haifa. The course can be taken by graduate students or by undergraduate students in their final year of study. The students were exposed to knowledge-representation tools, such as Protégé-2000, and gained experience with these tools while attending course classes held in a computer lab.
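For illustration only, a scenario of this kind could be approximated in OWL using, e.g., the owlready2 Python library; the class and property names below are invented, and this is not the authors' actual SitBAC ontology:

```python
# pip install owlready2
from owlready2 import *

onto = get_ontology("http://example.org/sitbac-sketch.owl")

with onto:
    class Situation(Thing): pass
    class DataRequestor(Thing): pass
    class FamilyDoctor(DataRequestor): pass

    class has_requestor(ObjectProperty):
        domain = [Situation]
        range = [DataRequestor]

    # Defined (equivalent) class: any Situation whose requestor is a family
    # doctor is classified as approved - mirroring the scenario in the figures.
    class ApprovedSituation(Situation):
        equivalent_to = [Situation & has_requestor.some(FamilyDoctor)]

    # An incoming data-access request, represented as an individual:
    request = Situation("req1")
    request.has_requestor = [FamilyDoctor("dr_cohen")]

# sync_reasoner()  # a DL reasoner (HermiT; requires Java) would now classify
#                  # req1 under ApprovedSituation, yielding the 'approved' response.
```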


Figure 2. A screen-shot of a Situation created in an OWL-based ontology language via Protégé-2000.

For the experiment, the class was divided into two test groups: group A received an OWL-based SitBAC exam, while group B received a Frames-based SitBAC exam. Each group consisted of 8 students. The exam was carried out in a computer lab. Two Protégé projects (one for the OWL language and one for the Frames language) were created ahead of time and uploaded to the course site. Each student received a textual exam and was guided to download one of the Protégé projects. In addition, each student received a detailed guideline document containing a full description of how to perform each possible task, starting with adding a new concept to the knowledge base and ending with a set of instructions describing how to create a new Situation. The textual exam consisted of three assignments that the students had to accomplish, corresponding to the three comparison criteria:

1. Understanding: assignment #1 was to view three Situations represented in the Protégé project and to describe their content in text.

2. Synthesizing Situations using given knowledge: assignment #2 included textual descriptions of three new scenarios. The students were required to generate the corresponding Situations in the Protégé project.

3. Synthesizing Situations using newly created knowledge: assignment #3 was similar to the second one; however, this time the construction of Situations involved the creation of new knowledge facts within the Protégé project. Therefore, this task was more advanced than the previous one.

Currently, we are at the stage of processing our findings in order to obtain the final results of the experiment. We plan to use the Wilcoxon matched-pairs signed-rank test [Siegel, 1956], a nonparametric test that can be used to compare the achievements of two groups on tests with repeated measurements on a single sample. We used this test in a previous experiment conducted a few years ago [Beimel & Peleg, 2010]; the current experiment is a continuation of that one.
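In Python, such a test can be run with SciPy; the paired scores below are invented placeholders, since the study's data were still being processed when this abstract was written:

```python
# pip install scipy
from scipy.stats import wilcoxon

# Hypothetical paired scores for the same tasks graded under the two conditions.
frames_scores = [82, 75, 90, 68, 77, 85, 73, 88]
owl_scores    = [78, 70, 85, 70, 72, 80, 69, 84]

stat, p = wilcoxon(frames_scores, owl_scores)
print(f"W = {stat}, p = {p:.3f}")   # reject H0 of no difference if p < 0.05
```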


CONCLUSIONS

We presented the design of an experiment, which we have already run, that aims to compare two ontology languages, Frames and OWL, in the context of knowledge representation. The motivation for this experimental comparison stems from the question we deliberate: whether to use the OWL-based version, which is more powerful but less usable, and provide users with training and helpful guidelines, or to let users create their access-control Situations in the friendlier Frames-based version and develop a translator to OWL. We will present the statistical analysis results at the conference.

REFERENCES

Beimel, D., & Peleg, M. (2010). The context and the SitBAC models for privacy preservation - an experimental comparison of model comprehension and synthesis. IEEE Transactions on Knowledge and Data Engineering, 22 (10), 1475–1488.

Beimel, D., & Peleg, M. (2011). Using OWL and SWRL to represent and reason with situation-based access control policies, Journal of Data & Knowledge Engineering, 70, 596–615.

Minsky, M. (1974). A Framework for Representing Knowledge. MIT-AI Laboratory Memo 306, http://web.media.mit.edu/~minsky/papers/Frames/frames.html.

Noy, N., & McGuinness, D. L. (2001). Ontology Development 101: A Guide to Creating Your First Ontology. Stanford Knowledge Systems Laboratory Technical Report No.: KSL-01-05 and Stanford Medical Informatics Technical Report No.: SMI-2001-0880. http://www.ksl.stanford.edu/people/dlm/papers/ontology-tutorial-noy-mcguinness-abstract.html.

Peleg, M., Beimel, D., Dori, D., & Denekamp, Y. (2008). Situation-Based Access Control: privacy management via modeling of patient data access scenarios. Journal of Biomedical Informatics, 41(6), 1028–1040.

Siegel, S. (1956). Nonparametric statistics for the behavioral sciences. New York: McGraw-Hill.

Web Ontology Research Group (2004). Web Ontology Language (OWL). http://www.w3.org/2001/sw/WebOnt/.


KNOWLEDGE AND SOCIAL NETWORKS IN QUESTIONS AND ANSWERS SITES

Amit Rechavi
Sagy Center for Internet Research, Graduate School of Management, Univ. of Haifa
Haifa, Israel
[email protected]

Sheizaf Rafaeli
Sagy Center for Internet Research, Graduate School of Management, Univ. of Haifa
Haifa, Israel
[email protected]

KEYWORDS

Q&A sites, social networks, knowledge-seeking, Yahoo! Answers, social capital, wisdom of crowds

1. INTRODUCTION

Q&A sites are online communities where anyone can ask, answer, and share knowledge, and all the data are public and immediately searchable [Liu, Madhavan and Sudharshan, 2005]. Participants' activity, whether for knowledge sharing or socializing, creates ties to other users through asking and answering questions, and establishes a social network even if not intended [Shah et al., 2008; Viswanath et al., 2009]. Though much of the content reflects unsubstantiated opinions, questioners prefer to obtain personalized answers to their individual queries [Bian et al., 2008], and having the opportunity to connect socially with others is a good enough reason to prefer these sites over a search engine [Liu and Agichtein, 2008]. Yahoo! Answers is the world's largest question-answer system, and it is a community site: anybody can ask and answer [Harper et al., 2008]. Unlike discussion groups, where users can write on a specific subject and relate to each other's words, in Yahoo! Answers one can only ask a question or answer it. There is no conversation-support mechanism, and two networks co-exist in Yahoo! Answers: a knowledge-seeking network and a social-activities network. We define the social network to consist of all activities not related to actually asking or answering a question, such as starring a question as "interesting", "thumbing up" and "thumbing down" an answer, commenting on a closed question, and starring the Best Answer. Though Q&A sites are defined as communities [Li et al., 2008; Rafaeli and LaRose, 1993; Wellman, 2001], little has been written on the two co-existing networks that compose Q&A sites: the social network and the knowledge-seeking network. This study explores these networks and the relationship between them.

2. HYPOTHESES

Users may continue to participate in a site for different reasons than those that led them to the site [Lampe et al., 2010]. Asking or answering questions might lead to meeting new interesting people, and vice versa: evaluating questions and answers might lead to participating in asking or answering. This possible diffusion leads us to explore the relationship between the knowledge-seeking network and the social network in Yahoo! Answers (H1). Since social capital was found to be important in predicting contribution to a site [Lampe et al., 2010] and influential on user satisfaction [Huysman and Wulf, 2005; Morris et al., 2010; Shim, 2008; Viswanath et al., 2009], we explore the correlation between

Page 91: Proceedings of the 6th Israel Association for Information

91

social capital and two parameters: the community's contribution and the asker's satisfaction (H2). Finally, we explore the correlation between "pure" social activity (people who engage in social activities to the exclusion of other activities on the system) and two parameters: (a) the asker's satisfaction, and (b) the level of overall activity in the network (H3).

3. METHODOLOGY

The data were collected from Yahoo! Answers. Approximately 20 million knowledge-seeking activities were executed each month, over 19 months, in more than 1600 sub-categories. We built three data sets (A, B and C) and used them to explore our hypotheses. Correlation analysis was employed as the main analysis tool; the explanatory power of the correlations was evaluated by the Pearson correlation value.

4. RESULTS

The findings support almost all of the hypotheses for the biggest categories (A and B), where many users are involved in the knowledge-seeking and social activities. In the small categories (C), with few users and a minimal level of activity, the data were noisy and we could not generate clear and statistically significant insights.

5. DISCUSSION

More people who seek information might generate more social interactions. When people interact socially on Yahoo! Answers (in other words, provide ratings, "thumbs up", or votes), they are exposed to the content of the questions and answers, and therefore they might be drawn into the informational exchange as well. The positive correlation between the group's evaluation of the quality of the answers and the asker's evaluation might have two explanations: (1) since the group members and the asker are equally interested in the same issues, they might subscribe to similar opinions and, hence, think alike from the outset; (2) a second explanation might be related to the synchronicity of the inputs - since the asker chooses a "Best Answer" only after the group has judged the answers, the group's input might influence the asker's evaluation. The results also provide an indication that social capital plays a role underlying knowledge contribution. Consistent with previous work [Morris et al., 2010; Raban and Harper, 2008; Wasko and Faraj, 2005], the present study leads to the conclusion that once an individual is given a chance to share knowledge and the experience of sharing is positive, knowledge contribution will occur. The positive social atmosphere created by the "thumbs up" and the "question stars" is correlated with the quality of the answers as seen by the asker. This is not a trivial finding: it points to a significant impact of the social sphere on the informational sphere. The strong positive correlation between the size of the group (the number of participants in the social network who starred questions and evaluated answers) and the correlation between group evaluations and asker satisfaction supports the "Wisdom of Crowds" expectation, since increased numbers of people participating in the evaluation lead to a better ability to predict asker satisfaction. The weak negative correlation between the share of those who stick to social activity alone and the size of the knowledge-seeking activity (the number of questions and answers) is explained by the phenomenon of people who interact only via social tools. These users participate in the social network and do not partake in the knowledge-seeking network.
Though the knowledge-seeking network and the social network are positively correlated across categories and grow together, the relative share of those who engage only in social activities declines as the knowledge-seeking network grows larger.


The social activity in some categories is carried out by users who participate only in social activities. As a consequence, in these categories we find fewer people who answer questions, which leads to a relatively low average number of answers per question. In these categories the rate of asker satisfaction with the Best Answers is low, as we found in [Rechavi and Rafaeli, 2011], compared to categories in which the knowledge-seeking activity and the number of answers are average or above.

6. CONCLUSIONS

The aim of this study was to investigate the relationships between the two types of networks that compose Q&A sites. We explored the social and informational networks and the cross-fertilization of user motivation between them. The results indicate that the knowledge network and the social network are correlated in size and growth. The results provide some evidence that the contributions of people who engage in social activities to the exclusion of other activities on the system might come at the expense of knowledge-seeking activities. The results resonate with previous works [Morris et al., 2010; Raban and Harper, 2008; Wasko and Faraj, 2005], and provide an indication that social capital plays a role underlying knowledge contribution and that maintaining a critical mass of active social participants is important for the knowledge-seeking network. Lastly, the social parameters indicating the group's satisfaction with answers were positively correlated with the asker's satisfaction with the answer, meaning that the social network can predict asker satisfaction. Consistent with theories of the "Wisdom of Crowds", the larger the group that evaluates answers' quality, the more its evaluation is correlated with the asker's evaluation of the quality of the "Best Answer".

7. ACKNOWLEDGEMENT

We thank Ricardo Baeza-Yates of Yahoo! Research Barcelona and Yoelle Maarek and her team from Yahoo! Labs Haifa for their knowledge and support. We also gratefully acknowledge the financial support of the Sagy Center for Internet Research, Univ. of Haifa, Israel.


8. REFERENCES

[1] Bian, J., Liu, Y., Agichtein, E. & Zha, H. (2008), 'Finding the right facts in the crowd: factoid question answering over social media', Proceedings of the 17th international conference on World Wide Web, ACM, pp. 467.

[2] Harper, F.M., Raban, D., Rafaeli, S. & Konstan, J.A. (2008), 'Predictors of answer quality in online Q&A sites', Proceedings of the twenty-sixth annual SIGCHI conference on Human Factors in Computing Systems, ACM, pp. 865.

[3] Huysman, M. & Wulf, V. (2005), ‘The role of information technology in building and sustaining the relational base of communities’, The Information Society (TIS), vol. 21, no. 2, pp. 81-89.

[4] Lampe, C., Wash, R., Velasquez, A. & Ozkaya, E. (2010), 'Motivations to participate in online communities', Proceedings of the 28th international conference on Human Factors in Computing Systems, ACM, pp. 1927.

[5] Li, B., Liu, Y. & Agichtein, E. (2008), 'CoCQA: co-training over questions and answers with an application to predicting question subjectivity orientation', Proceedings of the Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, pp. 937.

[6] Liu, Y. & Agichtein, E. (2008), ‘You've got answers: towards personalized models for predicting success in community question answering’, Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies, pp. 97.

[7] Liu, B.S.C., Madhavan, R. & Sudharshan, D. (2005), ‘DiffuNET: The impact of network structure on diffusion of innovation’, European Journal of Innovation Management, vol. 8, no. 2, pp. 240-262.

[8] Morris, M.R., Teevan, J. & Panovich, K. (2010), 'What do people ask their social networks, and why? A survey study of status message Q&A behavior', Proceedings of the 28th international conference on Human Factors in Computing Systems, ACM, pp. 1739.

[9] Raban, D. & Harper, F. (2008), ‘Motivations for answering questions online’, New Media and Innovative Technologies.

[10] Rafaeli, S. & LaRose, R.J. (1993), 'Electronic bulletin boards and 'public goods' explanations of collaborative mass media', Communication Research, vol. 20, no. 2, pp. 277.

[11] Rechavi, A. & Rafaeli, S. (2011), 'Not All Is Gold That Glitters: Response Time & Satisfaction Rates in Yahoo! Answers', Privacy, Security, Risk and Trust (PASSAT), IEEE Third International Conference on Social Computing (SocialCom), IEEE, pp. 904.

[12] Shah, C., Oh, J.S. & Oh, S. (2008), ‘Exploring characteristics and effects of user participation in online social Q&A sites’, First Monday, vol. 13, no. 9.

[13] Shim, J. (2008), ‘Social Networking Sites: A Brief Comparison of Usage in the US and Korea’, Decision Line, vol. 39, no. 5, pp. 16-18.

[14] Viswanath, B., Mislove, A., Cha, M. & Gummadi, K.P. (2009), 'On the evolution of user interaction in Facebook', Proceedings of the 2nd ACM workshop on Online Social Networks, ACM, pp. 37.

[15] Wasko, M.M.L. & Faraj, S. (2005), 'Why should I share? Examining social capital and knowledge contribution in electronic networks of practice', MIS Quarterly, vol. 29, no. 1, pp. 35-57.

[16] Wellman, B. (2001), ‘Computer networks as social networks’, Science, vol. 293, no. 5537, pp. 2031.


INTEGRATING KNOWLEDGE AND INSIGHTS INTO BUSINESS INTELLIGENCE SYSTEMS

A. Pollak
Ben-Gurion University of the Negev
[email protected]

A. Even
Ben-Gurion University of the Negev
[email protected]

G. Ravid
Ben-Gurion University of the Negev
[email protected]

Keywords: Business Intelligence, Knowledge Management, Metadata, Decision Support, Insight Management.

INTRODUCTION

Business Intelligence (BI) and Knowledge Management (KM) systems are often perceived as two independent IS domains. BI systems improve decision-making and the gaining of competitive advantage (Elbashir et al., 2008), while KM systems have also been identified by Herschel and Jones (2005) as useful for supporting the decision-making processes of the administrative leadership in organizations. Beyond that, we suggest that integrating BI and KM may offer substantial organizational benefits, above and beyond the contribution of each alone. To assess this proposition, this study explores a conceptual design of a system that integrates knowledge and insights into BI tools, under the premise that such integration can improve BI usability, better support decision making, and enhance the organizational knowledge derived from data usage. Several studies have examined the possibility of integrating BI and KM (e.g., Bolloju et al., 2002; Cody et al., 2002; Herschel and Jones, 2005), suggesting that such integration will improve DSS capabilities and promote the formation of an "organizational memory" repository. The suggested integration is based on a metadata repository, which captures knowledge and insights accumulated during the usage of BI tools. This centralized BI-related knowledge repository will also enable the sharing of knowledge and insights among interested parties within the organization, and possibly with external stakeholders as well. The purpose of this study is to assess the extent to which such integration can enhance decision processes and improve their outcomes. This was examined by a laboratory experiment, which is currently in the result-analysis phase.

KNOWLEDGE CHARACTERISTICS AND INTEGRATION METHODS – A PRELIMINARY STUDY

As a first step toward establishing the integrated system, we conducted a preliminary study aimed at identifying the BI-related knowledge and insight characteristics that should be stored in the metadata repository. Another important goal of that study was to identify preferred methods and tools for sharing and distributing the accumulated knowledge. These questions were addressed by a 3-round Delphi study with 19 key figures from academia and industry who are deeply involved in the BI and/or KM disciplines. The Delphi methodology is based on an iterative discussion process among a panel of experts regarding a well-defined question. In the first round, participants were asked to propose knowledge characteristics and tools for sharing them. In the consecutive rounds, the participants were asked to evaluate the items in both lists and rank their relative importance and expected contributions. The ranking was done at three data-hierarchy levels: data records, specific items within a BI display (e.g., a report or a digital dashboard), and the BI display as a whole.


Generally, the items identified by the participants in both lists matched past research that examined knowledge distribution in KM systems. The degree of consensus among participants was evaluated with Kendall's W, a non-parametric measure commonly used in Delphi studies. The results show a high degree of agreement among participants with respect to the relative ranking of instruments for distributing knowledge and insights, where information-presence marks (visual cues, placed on the BI display, for identifying the existence of additional information) were identified as the preferred means. Organizational association and business context were identified as the most important knowledge items to be captured, however with a lesser extent of consensus. In both lists, the differences between the three data-hierarchy levels were found to be insignificant. All participants agreed with the preliminary proposition that the suggested integration of BI and KM has high contribution potential, both from research and practice perspectives.
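Kendall's W is straightforward to compute from the panel's rank matrix; a sketch with an invented panel (the study's actual rankings are not reproduced here):

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W.
    ranks: an (m raters x n items) array, each row one rater's ranking of the
    n items (1 = most important). No correction for ties is applied."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                    # total rank per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of the rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical panel: 5 experts ranking 4 candidate knowledge items.
panel = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4],
                  [1, 2, 4, 3],
                  [1, 2, 3, 4]])
print(round(kendalls_w(panel), 2))   # W near 1 indicates strong consensus
```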

EMPIRICAL STUDY AND RESULTS

Following the preliminary Delphi study, and based on its results, the next step of our research was to test a prototype of the integrated system in a laboratory setting and examine its effect on usage style and on the quality of the decisions made. The laboratory research tested the extent to which the KM integration into the BI tool affects end-users' familiarity and expertise with the BI tool and, hence, improves their decision performance. During the experiment, the participants, undergraduate engineering students, were asked to perform a certain decision task, aided by a BI tool that supports knowledge collaboration and insight sharing between end users. The decision scenario, which is relatively complex, resembled a real-world situation in which students are required to decide on their near-future residence, workplace, and job, given certain relevant pieces of information (e.g., job availability per location, expected salary, transportation and rent costs). To evaluate the different choices, the participants used a BI tool for retrieving and analyzing the problem-space data, assisted by the knowledge- and insight-sharing tool that was favored by the participants of the preliminary study. End-users' performance was measured along two key aspects: effectiveness, reflecting the ability to perform a decision task well, and efficiency, reflecting the search intensity (e.g., the number of OLAP operations performed – "Drill Down", "Slice and Dice", etc.) as well as the time needed to reach the decision. The participants were divided into subgroups, each of which was provided with a different variation of the BI tool, reflecting a different level of KM integration:

1. No Integration (control): this group was provided with the same tool in both sessions.

2. Non-Conducive: members of this group were provided with a BI tool that integrates some KM capabilities, but in a "read-only" mode – meaning that users were able to view knowledge and insights gained by others, but not to contribute their own.

3. Conducive: similar to the former group, members here were provided with a BI tool that integrates KM capabilities, but in a "read-write" mode, meaning that users were able both to contribute knowledge and insights of their own and to view contributions by others.

4. Incentive Conducive: participants in this group received the same privileges granted to the previous group (Conducive); however, they were given an additional incentive to contribute informative comments and insights to the knowledge repository.

The experiment results showed a significant increase in user effectiveness for all three experimental groups, whereas the control group's participants showed no major change. Regarding the participants' efficiency, the findings show mixed results: on the one hand, search intensity decreased in the experimental groups relative to the control group; on the other hand, the decrease in decision time in the experimental groups was less significant than in the control group. The experiment can be seen as a successful first step toward assessing the impact and usefulness of the integration concepts presented in this study.


CONCLUSIONS

This research investigates the integration of KM into BI tools. The preliminary study's findings indicate the panel's agreement on the ranking of means for distributing knowledge among the integrated system's end users, and a lack of agreement on the ranking of characteristics that link knowledge and insights with the objects presented in the system. Alongside these ranked lists, the Delphi study yields a number of conclusions and recommendations relating to the proposed integrated system's design. We see the laboratory experiment as a first step toward evaluating the suggested integration, which is only partially available on the IS market, and toward gaining insight into its impacts and outcomes.

REFERENCES

Bolloju, N., Khalifa, M., and Turban, E. (2002). Integrating knowledge management into enterprise environments for the next generation decision support. Decision Support Systems, 33(2), 163-176.

Cody, W. F., Kreulen, J. T., Krishna, V., and Spangler, W. S. (2002). The integration of business intelligence and knowledge management. IBM Systems Journal, 41(4), 697-713.

Elbashir, M. Z., Collier, P. A., and Davern, M. J. (2008). Measuring the effects of business intelligence systems: The relationship between business process and organizational performance. International Journal of Accounting Information Systems, 9(3), 135-153.

Herschel, R. T., and Jones, N. E. (2005). Knowledge management and business intelligence: The importance of integration. Journal of Knowledge Management, 9(4), 45-55.


USING INEQUALITY AND OVERLAP INDICES TO IMPROVE DECISION-MAKING PROCESSES IN BUSINESS INTELLIGENCE ENVIRONMENTS

Adva Assido [email protected]

Adir Even [email protected]

Edna Schechtman [email protected]

Department of Industrial Engineering and Management

Ben-Gurion University of the Negev

P.O.Box 653, Beer-Sheva 84105

Keywords: Business Intelligence (BI), Decision Support System (DSS), Data Warehouse, GINI Index, ANOGI (Analysis of Gini)

INTRODUCTION AND BACKGROUND

Recent years have witnessed a major transition toward a decision-making culture that is based on data collection and analysis (Davenport, 2006). This shift leads organizations to collect vast amounts of data and make increasing investments in Data Warehousing (DW) and Business Intelligence (BI) tools (March and Hevner, 2007; Watson and Wixom, 2007). The BI/DW infrastructure intensifies the ability to analyze the data and gain insights with high business value. Online Analytical Processing (OLAP) tools are a common form of BI applications; they let end-users navigate effectively in large datasets (Tremblay et al., 2007). The navigation is often supported by an OLAP cube – a multi-dimensional data model that classifies attributes (variables) into dimensions (key business entities – e.g., customers, products) versus facts (numeric measures – e.g., quantity, sales amount). To benefit from the use of OLAP tools, the end-user must be familiar with the different variables offered and their business meaning. However, the growing volume and complexity of data warehouses often turn this need for familiarity into a great challenge. End-users are not always proficient enough in the search space and often do not even know what questions can be asked and answered – which may make the process of search and navigation in large datasets complicated and inefficient.

OBJECTIVES

Our study addresses this challenge by developing an algorithm that detects data elements (explanatory variables, dimension attributes) of higher interest; it follows the concept of integrating navigation recommendations into BI tools (Sarawagi et al., 1998; Even et al., 2010). Detecting the more interesting and insightful data subsets would help the end-user navigate large and complex data repositories and, by that, increase the efficiency and effectiveness of BI usage. The question that our algorithm aims to answer is which dimensions should be included in queries during the analysis process, so that the output will show the information that is most interesting from the end-users' standpoint and help them improve their decisions. Our underlying assumption is that a dimension with high contribution is one that provides the best separation between its levels with respect to the investigated fact (a response variable). This principle is demonstrated in the following example (Table 1).


Table 1. An Example of Inequality in the Separation Power

Dimension X | Dimension Y | Fact Z
X1 | Y1 | 100
X2 | Y1 | 300
X1 | Y2 | 1100
X2 | Y2 | 900

Dimension X has a low separation power with respect to Fact Z, as both “X1” and “X2” are associated with a total of Z=1200. On the other hand, Dimension Y has a high separation power – “Y1” can be associated with a total of Z=400, while “Y2” can be associated with a total of Z=2000. If X and Y, for example, reflect two different categorizations of customers, and Z reflects the purchase power of each subset – it would be reasonable to suggest, in this case, that categorization Y is more useful than categorization X, in the context of assessing or predicting purchase power. Further, based on categorization Y, customers of category “Y2” appear to be more profitable; hence, should be targeted first.

METHOD - ANALYSIS OF GINI (ANOGI)

Our algorithm bases its analysis of separation and explanatory power on the Analysis of Gini (ANOGI) methodology (Yitzhaki and Schechtman, 2012), which is based on the decomposition of the Gini index – a well-known measure of inequality in a large population (Yitzhaki, 1994; Yitzhaki and Schechtman, 2012). ANOGI can be seen as a parallel to the ANOVA (Analysis of Variance) methodology, which is based on the decomposition of the overall variance into the "between" versus the "within" variances with respect to the different levels of the explanatory variables (a.k.a. categories). Similarly, ANOGI is based on the decomposition of the Gini index; in addition to the "between" and "within" components, it reveals additional information about the extent of overlap of the response variable between the different levels of the explanatory variable. It has been suggested that while ANOVA is probably a suitable approach for symmetric distributions (e.g., the Normal distribution), ANOGI is a better fit for asymmetric ones (Yitzhaki and Schechtman, 2010). Our algorithm uses criteria for separation assessment that are derived from the ANOGI methodology. The algorithm is iterative – it first selects the dimension with the best degree of separation and then selects the category (level) with the highest average within the selected dimension (or lowest, if "low" is better than "high"). Once the best-separating dimension and the favourable category are selected, the dataset records are filtered accordingly. The iterations may repeat upon the end-user's request, until the end-user reaches the desired subset. This investigation process can be seen as a "drill-down" in the OLAP space – a common term for an OLAP-navigation style in which end-users select specific dimensions and filter them in order to reach the subset that best explains the investigated fact.
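To illustrate the flavor of one iteration, the sketch below scores each dimension by the ratio of the between-group Gini to the overall Gini and then filters to the best-average category. This is a simplified separation criterion in the spirit of the ANOGI decomposition; the full methodology also accounts for the within-group and overlap components, which are omitted here, and the function names are ours:

```python
import numpy as np
import pandas as pd

def gini(x):
    """Gini index via the mean absolute difference: G = E|xi - xj| / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2.0 * x.mean())

def between_gini(df, dim, fact):
    """Between-group component: Gini after replacing each observation by its group mean."""
    return gini(df.groupby(dim)[fact].transform("mean").values)

def drill_down_step(df, dims, fact, prefer_high=True):
    """One iteration: pick the dimension with the highest between/overall Gini ratio,
    then filter the records to its best-average category."""
    overall = gini(df[fact].values)
    scores = {d: between_gini(df, d, fact) / overall for d in dims}
    best_dim = max(scores, key=scores.get)
    means = df.groupby(best_dim)[fact].mean()
    best_cat = means.idxmax() if prefer_high else means.idxmin()
    return best_dim, best_cat, df[df[best_dim] == best_cat]

# The data of Table 1
data = pd.DataFrame({"X": ["X1", "X2", "X1", "X2"],
                     "Y": ["Y1", "Y1", "Y2", "Y2"],
                     "Z": [100, 300, 1100, 900]})
dim, cat, subset = drill_down_step(data, ["X", "Y"], "Z")
print(dim, cat)  # Y, Y2 -- dimension Y separates Z far better than X, as argued above
```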

RESULTS - A PRELIMINARY EVALUATION

For a preliminary evaluation of our algorithm's performance, we compared it against a default "RANDOM" algorithm. In the first phase, the RANDOM algorithm selects a dimension randomly (whereas ANOGI selects a dimension based on separation analysis), and in the second phase it chooses the category with the highest average (similarly to the ANOGI algorithm). The purpose was to examine whether the criteria, which are based on the values of overlap and inequality, would indeed lead the user faster to a more valuable dataset. The evaluation was performed with Census data from 1994. The dataset contains 11,826 records with 12 explanatory variables and one response variable (income). The comparison was based on the ability of each algorithm to select the dimension that gives the most interesting perspective on the data and helps the user identify a sub-group with a high average value of the response variable for a given number of observations.

Figure 1. Mean Income as a Function of Number of Records – ANOGI versus RANDOM

The comparison was performed over 1,000 different runs of the RANDOM algorithm, assessing the number of observations and their means per iteration. The results are shown in Figure 1, where the red dots represent the ANOGI results and the black circles represent the RANDOM results. As can be seen, at all ranges of records, the ANOGI algorithm yields a significantly higher mean income than the RANDOM algorithm.

CONCLUSION

As the use of BI and OLAP tools constantly grows and their availability among organizations increases, the potential benefits of techniques and methodologies that help the end-user detect data subsets of higher interest are obvious. As a contribution to that end, our study explores a quantitative algorithm, based on the ANOGI methodology, for assessing the relative importance of the dimensions and their categories within a large dataset, and for generating recommendations regarding the next steps to take, helping end-users drill down and navigate large datasets efficiently. The preliminary evaluation results indicate that the algorithm indeed correctly identifies the data subsets of higher interest and outperforms the heuristic that was tested in parallel. Obviously, a more thorough and robust evaluation is required, and we intend to perform one in the next phase of our research using large real-world datasets.


REFERENCES

Davenport, T.H. (2006). Competing on Analytics. Harvard Business Review. 84, 11, 99-107

Even, A., Kolodner, Y., and Varshavsky, R. (2010), Enhancing Business-Intelligence Tools with Value-Driven Recommendations, The International Conference on Design Science Research in Information Systems and Technology (DESRIST), St. Gallen, Switzerland

March, S.T., and Hevner, A.R. (2007), Integrated Decision Support Systems: A Data Warehousing Perspective, Decision Support Systems, 43, 3, 1031-1043

Sarawagi, S., Agrawal, R., and Megiddo, N. (1998), Discovery-Driven Exploration of OLAP Data Cubes, Proc. EDBT '98, London, UK, pp. 168-182

Tremblay, M.C., Fuller, R., Berndt, D., and Studnicki, J. (2007), Doing More with More Information: Changing Healthcare Planning with OLAP Tools, Decision Support Systems, 43, 4, 1305-1320

Watson, H. J., and Wixom, B. H. (2007), The Current State of Business Intelligence, Computer (40:9), 96-99

Yitzhaki, S. (1994), Economic Distance and Overlapping of Distributions, Journal of Econometrics, 61, 147-159

Yitzhaki, S., Schechtman, E. (2012), The Gini Methodology: A Primer on Statistical Methodology, In preparation.


MEASURING THE BUSINESS VALUE OF E-BUSINESS TECHNOLOGIES

Orit Raphaeli, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, [email protected]

Gali Naveh, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, [email protected]

Michal Levi, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, [email protected]

Sigal Berman, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, [email protected]

Lior Fink, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, [email protected]

Keywords: e-business technologies, Supply Chain, IT business value, Analytical Hierarchy Process (AHP).

INTRODUCTION

In the past decade, with the advent of new e-business technologies, firms have engaged in initiatives that link supply chain processes across enterprises, in order to increase their efficiency and effectiveness. E-business technologies refer to the different technologies that offer a sound infrastructure for doing business online (Ash and Burn, 2002). The thrust of investment in such technologies is motivated by the desire to create seamless integration of entities in a supply chain, which calls for sharing of accurate and timely information and for coordination of activities among business entities (Devaraj et al., 2007). In parallel with this evolution, mobile communications technology has emerged to play an increasingly important role in business and society (Barnes et al., 2006). According to a recent report by the Economist (2011), mobile technology has a far-reaching impact on both business and social environments. Until recently, e-business and mobile technologies have largely followed separate paths. Lately, a new trend of convergence between the two has been accelerating, resulting in a variety of wireless data communication capabilities (Barnes et al., 2006), such as wireless data services, based on mobile data access and electronic messaging on mobile devices (Siau and Shen, 2003). The combination of mobile devices with pervasive connectivity and plentiful online content is "raising citizens' expectations of what personal technology can achieve" (The Economist, 2011, p. 4). As such solutions are widely adopted, it is important to understand the ways in which they can actually impact businesses (Lehmann et al., 2004). However, determining whether a new and emerging technology can have business value is a challenging endeavor, given the complexity of the processes through which the deployment of technologies creates business value (Fink, 2011). Despite a wealth of research on the impact and value of investments in information and communication technology (ICT), poor estimations of the impact and value of new technologies still tend to be the norm rather than the exception (Gebauer et al., 2002). This study addresses this challenge by providing insight into how organizations can realize the benefits of emerging technologies, using an analytical method that has yet to be used in this context.

RESEARCH OBJECTIVES

The primary objective of this study is to examine the business value attributable to the implementation of emerging technologies in supply chain initiatives. Specifically, we are focusing on a platform that employs Linked Data³ (e-business technology) as its back-end and mobile apps (mobile technology) as its front-end. This platform is developed as part of the ComVantage EU-FP7 project. The ComVantage platform is designed to provide an internet-based collaboration space facilitating communication and coordination among all stakeholders involved in industrial supply chains, including their end customers (www.comvantage.eu). In order to assess the business value of this platform, we rely on a process-oriented approach to the business value of ICT (Melville et al., 2004; Barua et al., 1995) and argue that ICT helps organizations create business value through its direct impact on business processes (e.g., Tallon et al., 2000). This argument is supported by the fact that ICT typically provides automated support to business processes and inter-process linkages. As such, research on ICT process-level benefits does not simply demonstrate that value is created, but also sheds light on how value is created (Elbashir et al., 2008). Based on these considerations, we propose a research framework composed of two high-level constructs (see Figure 3): ICT-based collaborative capabilities and business process performance. The ICT-based collaborative capabilities construct refers to the collaborative capabilities that are enabled by emerging ICT. Our fundamental premise is that using emerging technologies adds value to supply chain operations by enhancing the collaborative capabilities of supply chain partners. Based on this assumption, the capabilities were grouped into three categories, depending on the particular context in which collaboration is facilitated: collaboration with customers, internal collaboration, and collaboration with suppliers. The business process performance construct relates to the various aspects of performance at the business process level in which an improvement is expected as a result of collaborative capabilities. Based on the literature (e.g., Chen and Paulraj, 2004; Li et al., 2006), we have identified six commonly-assessed aspects of supply chain performance: cost, efficiency, quality, flexibility, innovation, and sustainability. Accordingly, the purpose of this specific study is to assess the impacts of the ICT-based collaborative capabilities on performance at the business process level.

Figure 3: The research framework - high level view

METHODOLOGY

This study is exploratory in nature because the emerging ICT under investigation has yet to be widely deployed in organizations. Thus, in order to assess the impact of ICT-based collaborative capabilities on business process performance, subjective data should be obtained. The most suitable methodology for such an exploratory investigation is the qualitative methodology of multiple case studies (i.e., multiple case designs). The exploratory case study approach is considered an appropriate choice of methodology when no prior theorizing on a topic exists (Eisenhardt, 1989), as for emerging technologies, or when a phenomenon is relatively new. The case studies represent different application areas in supply chain processes (plant engineering and commissioning, customer-oriented production, and mobile maintenance). Despite the qualitative characteristics of the data, ranking and quantifying the strength of the effect of a certain capability on the different performance aspects would provide a better understanding of the expected benefits. Hence, we plan to utilize the Analytic Hierarchy Process (AHP) to compare the effects of ICT-based capabilities on business process performance. This methodology is a multi-criteria decision analysis tool, developed by Saaty (1980), that has been widely applied to support complex evaluation problems. In order to perform this analysis, we plan to construct an AHP instrument that presents a set of questions comparing the relative effect of a collaborative capability on pairs of operational performance aspects, for a given business process. The instrument will then be administered to stakeholders with a good understanding of the relevant business processes. The interviewees will be asked to perform multiple pairwise comparisons using Saaty's procedures and scales (Saaty and Alexander, 1989). Performing an AHP analysis on the collected data will result in a ranked list of performance aspects for each ICT-based capability. Aggregating the results over all the capabilities will enhance our understanding of how emerging technologies affect core business processes in organizations. This submission is considered research-in-progress. We are currently performing the interviews and collecting the data, and we plan to present our preliminary findings at the conference.

³ The term "Linked Data" refers to a set of best practices for publishing and connecting structured data on the Web, adopted by data providers, in order to create a global data space (Bizer et al., 2009).
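As an illustration of the planned analysis, the following sketch derives a priority vector from a single pairwise-comparison matrix using the principal-eigenvector method and checks Saaty's consistency ratio. The judgment matrix and the three performance aspects compared are hypothetical, not data from the study:

```python
import numpy as np

# Saaty's random consistency index by matrix order n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_priorities(A):
    """Priority vector = normalized principal right eigenvector of the
    pairwise-comparison matrix A, plus Saaty's consistency ratio."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized priorities
    ci = (eigvals[k].real - n) / (n - 1)     # consistency index
    cr = ci / RI[n]                          # consistency ratio; < 0.1 is acceptable
    return w, cr

# Hypothetical judgments: relative effect of one collaborative capability on
# three performance aspects (cost, efficiency, quality), on Saaty's 1-9 scale
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
print(w.round(3), round(cr, 3))  # ranked weights of the three aspects, and CR
```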

ACKNOWLEDGMENT

The research presented in this submission is funded by the European Community's Seventh Framework Programme under grant agreement no. FP7-284928 ComVantage.


REFERENCES

Ash, C.G., & Burn, J.M. (2002). A strategic framework for the management of ERP enabled e-business change. European Journal of Operational Research, 146(2), 374–388.

Barnes, S.J., Scornavacca, E.,& Innes, D. (2006). Understanding wireless field force automation in trade services. Industrial Management & Data Systems, 106(2), 172-181.

Barua, A., Kriebel, C.H., & Mukhopadhyay, T. (1995). Information Technologies and Business Value: An Analytic & Empirical Investigation. Information Systems Research, 6, 3-23.

Bizer, C., Heath, T., & Berners-Lee, T. (2009). Linked Data - The Story So Far. International Journal on Semantic Web & Information Systems, 5(3), 1-22.

Chen, I.J., Paulraj, A., & Lado, A.A. (2004). Strategic purchasing, supply management, and firm performance. Journal of Operations Management, 22(5), 505-523.

ComVantage project, available at http://www.comvantage.eu.

Eisenhardt, K. (1989). Building theories from case study research. Academy of Management Review, 14 (4), 532–550.

Elbashir, M.Z., Collier, P.A., & Davern, M.J. (2008). Measuring the effects of business intelligence systems: The relationship between business process and organizational performance. International Journal of Accounting Information Systems, 9(3), 135-153.

Fink, L. (2011). How Do IT Capabilities Create Strategic Value? Toward Greater Integration of Insights from Reductionistic and Holistic Approaches. European Journal of Information Systems, 20(1), 16-33.

Gebauer J., Shaw, M.J., & Zhao, K. (2002). Assessing the Value of Emerging Technologies: The Case of Mobile Technologies to Enhance Business-To-Business Applications. In 15th Bled Electronic Commerce Conference eReality, Bled, Slovenia, June 17 – 19.

Lehmann, H., Kuhn, J., & Lehner, F. (2004). The future of mobile technology: findings from a European Delphi study. Proceedings of the 37th Hawaii International Conference on Systems Sciences, Big Island, Hawaii.

Li, S., Ragu-Nathan, B., Ragu-Nathan, T.S. (2006). The impact of supply chain management practices on competitive advantage and organizational performance. Omega, 34, 107-124.

Melville, N., Kraemer, K., & Gurbaxani, V. (2004). Review: Information technology and organizational performance: An integrative model of IT business value. MIS Quarterly, 28(2), 283-322.

Saaty T.L. (1980). The analytic hierarchy process. McGraw-Hill.

Saaty, T.L., & Alexander, J. (1989). Conflict Resolution: the analytic hierarchy process, Praeger.

Siau, K., & Shen, Z. (2003). Building customer trust in mobile commerce. Communications of the ACM, 46(4), 91-94.

Tallon, P.P., Kraemer, K.L., & Gurbaxani, V. (2000). Executives' perceptions of the business value of information technology: a process-oriented approach. Journal of Management Information Systems, 16(4), 145-173.

The Economist (2011), Personal technology beyond the PC. Special Report, Oct 8th 2011.


ASSESSING AND CORRECTING DATA ACCURACY DEFECTS USING THE MARKOV-CHAIN MODEL

Alisa Wechsler, Dept. of Industrial Eng. and Mgmt., Ben-Gurion University of the Negev, [email protected]

Adir Even, Dept. of Industrial Eng. and Mgmt., Ben-Gurion University of the Negev, [email protected]

Keywords: Data quality, Data accuracy, Markov-Chain model.

INTRODUCTION AND OBJECTIVES

The management of data quality has long been identified as a critical issue for organizations [4]. The literature has identified various perspectives (a.k.a. data quality dimensions) along which the quality of data can be assessed and measured. This study addresses the accuracy dimension – defined as the degree to which the data is free of error and reflects the real-world state [1,3]. Accuracy defects are considered to be more difficult to assess and correct, as their correction requires some baseline for comparison, which is not always available. A unique perspective through which we examine accuracy in this study is its association with another data quality dimension – currency (or recency), which reflects the extent to which the data is up-to-date. Our baseline research assumption is that data values may change over time. Therefore, if the stored data is not maintained properly, some values may no longer reflect the correct real-world state, and the result might be a severe accuracy loss.

While maintaining data at a high accuracy and currency level is certainly desirable from the end-user's standpoint, it may introduce a substantial cost-benefit tradeoff from the data quality management standpoint, as auditing existing data records and validating their currency and correctness is often costly. Updating a dataset at a very high frequency may lead to nearly-perfect accuracy; however, it may turn out to be a wasteful policy. The analytical framework that we introduce and evaluate in this study, which is based on the Markov-Chain model, may help address this issue by linking accuracy degradation to changes in the real-world state that are not reflected properly in the database. We show that, by using such a model, it is possible to predict accuracy decay and to assess different data-quality improvement treatments and policies in terms of cost-benefit tradeoffs.

METHOD: A MARKOV-CHAIN MODEL FOR ASSESSING ACCURACY DEGRADATION

Our main purpose in the model development is to simulate the real-world changes of an entity's attribute values over time and, by that, to assess the accuracy of the parallel values in the dataset. For example, if the probability of a real-world attribute value changing within two years is P, then the quality level of data that has not been updated in the last two years is the complementary probability (1-P), which reflects the probability that the real-world attribute value is still the same as documented in our dataset two years ago. Accordingly, the key to assessing the data quality level is first to assess the probability of a real-world attribute value changing within 1 to n periods. Using the Markov-Chain model, we can overcome this challenge.


Our accuracy-assessment concept is based on viewing the value changes in a dataset as a Markov process with discrete stages, each representing a certain time period. We see each possible attribute value as a state, with a certain probability of transitioning to another state within a certain period. The collection of transition probabilities can be captured in a transition matrix, which can be estimated from a large-enough test set of data. To apply a model with discrete states and discrete times, two baseline assumptions must be made: 1) memoryless transitions – i.e., only the present state affects the future states, and 2) fixed and known transition probabilities. In real-world scenarios, entities' states can change continuously over time. However, in the world of data quality management, the assumption of discrete time-steps is not unreasonable, as it is common to evaluate data quality at certain time periods with a more-or-less fixed frequency (e.g., quarterly or annual evaluations).
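Under these two assumptions, the model reduces to estimating a transition matrix P and raising it to the n-th power; the diagonal of P^n then gives, per state, the probability that a value stored n periods ago is still accurate. The sketch below illustrates this with hypothetical state sequences; it is not the authors' implementation:

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """Estimate a Markov transition matrix from observed state sequences
    (one period-by-period sequence of attribute values per entity)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1  # tally observed one-period transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    # normalize rows to probabilities; rows with no observations stay zero
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

def accuracy_after(P, periods):
    """Probability that a stored value is still accurate after `periods`
    periods without an update: the diagonal of P^periods, per state."""
    return np.diag(np.linalg.matrix_power(P, periods))

# Hypothetical example: 3 attribute states observed over several entities
seqs = [[0, 0, 1, 1], [0, 1, 1, 2], [2, 2, 2, 2], [1, 1, 0, 0]]
P = estimate_transition_matrix(seqs, 3)
print(accuracy_after(P, 2))  # expected accuracy of 2-period-old records, per state
```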

EVALUATION

We performed a preliminary evaluation of the model to confirm its feasibility and potential contribution. The evaluation was done with a real dataset from the Israeli Central Bureau of Statistics. The dataset contains economic indicators on 124 industrial sectors in Israel that were collected annually over a 14-year period. In the preliminary evaluation we used 3 indicators: Sales, Income, and Export. To reduce the dependency between the results and a specific training-testing partition, we created ten random permutations of an 80/20 training/testing ratio and then considered the mean results. To evaluate the prediction accuracy, we used two commonly used evaluation metrics: the 'Kullback-Leibler Distance' (KLD) [2] and the 'Absolute Percent Error' (APE). The metrics were examined with respect to a) the number of periods between the last update year and the year of accuracy assessment (periods of prediction), and b) the number of periods between the first update and the last update (periods of learning). Overall, the results showed that the model is indeed feasible and that it can lead to a reasonable prediction of accuracy behavior. Both distance metrics (the KLD and the APE) were close to zero under certain test conditions, which indicates a high potential contribution. However, it is important to note that the model and its success likelihood are highly context-dependent, as they rely on the business environment in which the data is collected and used.
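For reference, minimal implementations of the two metrics follow. How exactly the paper applies them to predicted versus observed quantities is not fully specified, so the usage shown is an assumption:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler distance D(p||q) between an observed distribution p
    and a predicted distribution q; eps guards against log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def ape(predicted, actual):
    """Absolute Percent Error of a scalar prediction."""
    return abs(predicted - actual) / abs(actual)

# e.g., compare a predicted state distribution after n periods (from P^n)
# against the distribution observed empirically in the test partition
print(kld([0.5, 0.3, 0.2], [0.45, 0.35, 0.2]), ape(0.92, 0.95))
```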

CONCLUSIONS

The proposed data quality assessment model was developed using the Markov-Chain model and evaluated using a real dataset. The results show the feasibility of implementing such a model and its performance in different environmental settings. The main contribution of this research to the field of data quality management is in providing methods that can easily be developed into a useful tool for assessing the accuracy of a dataset across its different hierarchy levels and, based on that assessment, providing recommendations for an optimal data accuracy improvement policy. The main research limitation is the Markov-Chain model assumptions, which are not applicable in every database environment. Future research can apply this model to additional and different real datasets to strengthen the results of the preliminary evaluation. An additional interesting future research direction is integrating utility considerations and the data's contribution to decision-making processes into the model.


REFERENCES

1. Ballou, D. P., and Pazer, H. L. (1995). Designing Information Systems to Optimize the Accuracy-Timeliness Tradeoff. Information Systems Research, 6(1), 51-72.

2. Gelman, I. A. (2010). Setting Priorities for Data Accuracy Improvements in Satisficing Decision-Making Scenarios: A Guiding Theory. Decision Support Systems, 48(4), 507-520.

3. Luebbers, D., Grimmer, U., and Jarke, M. (2003). Systematic Development of Data Mining-Based Data Quality Tools. Proceedings of the 29th International Conference on Very Large Data Bases (VLDB), 548-559.

4. Parssian, A., Sarkar, S., and Jacob, V. S. (1999). Assessing Data Quality for Information Products. Proceedings of the 20th International Conference on Information Systems (ICIS), 428-433.


CONTROLLING CRITICAL SPREADSHEETS

Nir Gabbay, Faculty of Management, Tel Aviv University, [email protected]

Keywords: spreadsheets, errors, enterprise spreadsheet management, risk management, end user computing

INTRODUCTION

Spreadsheets are used in almost all businesses. Their use ranges from simple applications to the development of large and complex applications critical to the organization. Errors in data, formulas or the manipulation of spreadsheets can be costly and even devastating (Powell et al., 2009a). However, relatively little is known about what types of errors actually occur, how they are created, how they can be detected and how they can be avoided or minimized (Powell et al., 2008). Studies have found a very high incidence of spreadsheet errors, with approximately 1% to 5% of all formulas in spreadsheets containing errors. Thus, it may be assumed that large spreadsheets will contain numerous errors and even relatively small spreadsheets will have a high probability of error. Despite such evidence, organizations and users appear to be in denial and do not take even basic actions to detect errors (Panko, 2008b). Powell et al. audited 25 operational spreadsheets from five different organizations. They identified 117 errors, 70 of which had a nonzero impact on the output. Among the errors, seven exceeded $10 million (Powell et al., 2009b).

Users and organizations that rely on spreadsheets do not normally acknowledge the risk of error associated with them. In fact, they ignore the very existence of spreadsheets as an important organizational asset, as well as some of the risk factors associated with it (McGill & Klobas, 2005). In addition, the untreated portion of organizational assets that is not addressed by risk management (the spreadsheets) may be larger than the portion that is meticulously treated by information systems personnel, thus posing a great risk to the organization (Panko, 2007). Finally, there appears to be little correlation between the importance of the application or the risk involved and the quality of the spreadsheet (Powell et al., 2009b).

Thus, there appears to be a "spreadsheet compliance paradox" that can be stated as follows: First, spreadsheets are widely used in financial reporting and many other business functions. Second, given the compliance requirements of Sarbanes-Oxley and several other laws, the existence of even a few spreadsheets that are incorrect to a material degree can have severe consequences for the organization. Third, it is reasonable to assume that each large spreadsheet contains a number of errors and that, accordingly, nearly every large spreadsheet contains significant accounting errors. Fourth, a number of companies have already had to report control and error weaknesses associated with spreadsheets. Given these facts, we would expect organizations and regulators to pay close attention to spreadsheet development, including the implementation of extensive testing. Paradoxically, however, corporations do not impose extensive testing or other development requirements on their spreadsheets and regulators do not insist on such measures. What we appear to have is sham compliance in which spreadsheets are not effectively controlled yet are being treated by everyone as if they are. However, this situation is not expected to last much longer, due to warning letters issued by regulators (Panko, 2006).

Spreadsheets in themselves are not the cause of the problem. Replacing them with other software does not eliminate errors and in many cases does not even reduce them (Panko, 2008a). The need for user flexibility, as well as the fast-paced changes in the business environment, dooms any attempt to restrict the use of spreadsheets in critical applications. Therefore, organizations are advised to enforce a policy whereby critical spreadsheets are handled as a key organizational information system that requires proper care and which involves the use of computerized tools to support spreadsheet management (Baxter, 2008).

OBJECTIVES

The current research, carried out in Israel, attempts to determine whether the "spreadsheet compliance paradox" also exists in Israel – in other words, whether there are gaps between patterns of critical spreadsheet usage and recommended practice, and between recommended organizational policy and existing practice.

METHOD

The study included approximately 200 subjects. The survey, consisting of 48 questions, was based on a questionnaire developed by SERP (Spreadsheet Engineering Research Project) researchers (Powell et al., 2005). A comparison is made between the results of the questionnaire in Israel and those of the SERP research, which consists of approximately 1600 observations collected in the US and Europe and serves as a control group for this study. The current research focused on a group of 70 financial professionals from the banking and insurance sector (which is highly regulated and in which spreadsheets are frequently used in critical processes).

RESULTS

Table 1 indicates that spreadsheets constitute a key element in the work of the Israeli subjects, even more so than for SERP subjects. This is especially evident among financial professionals from the banking and insurance sector, with 63% of subjects in this group stating that they consider spreadsheets to be critically important (97% ranked spreadsheets as being critically important or very important). 56% of subjects in this group stated that they devote more than 75% of their work time to spreadsheets (with 72% devoting at least half of their time). In addition, the findings show that 96% of the subjects in this group tend to share their spreadsheets with others.

Table 1. The need to handle spreadsheets as an important information system

The need to handle spreadsheets as an important organizational information system | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

The proportion of subjects that consider spreadsheets to be critically important or very important | 97% | 89% | 83%
The proportion of subjects that consider spreadsheets to be critically important | 63% | 52% | 49%
Subjects who devote at least half their work time to spreadsheets | 72% | 48% | 25%
Subjects who devote 76%-100% of their work time to spreadsheets | 56% | 30% | 7%
Share spreadsheets with other users | 96% | 94% | 86%

Table 2 demonstrates that not even the most basic principles of new file development were implemented according to the recommended practice in either study. Among Israeli subjects, patterns of new spreadsheet development appear to be even worse than among the subjects in other countries. In fact, most tend to enter data directly into the spreadsheet with no prior planning (such as creating a sketch on paper) and in many cases they are not strict about positive behavior patterns, such as separating input from formulas and module separation. A lack of adherence to the positive behavior pattern associated with the principle of pre-planning is even more widespread among financial professionals from the banking and insurance sector (only 2% of this group tend to create a sketch on paper, as opposed to 17% of the SERP subjects). Additionally, 39% of subjects in this group tend to use an existing spreadsheet when creating a new one (which is a negative behavior pattern) versus only 23% in the SERP survey. 57% of subjects in this group also tend to input data directly into the spreadsheet (a negative behavior pattern) as opposed to only 49% in the SERP survey.

Table 2. Behavior patterns related to creating new spreadsheets

Positive/negative | Behavior pattern in creating new spreadsheets | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

Positive | Always implement the principle of separating input from formulas | 9% | 9% | 22%
Positive | Always implement the module separation principle | 8% | 6% | 20%
Positive | Implement the principle of pre-planning (create sketches on paper) | 2% | 5% | 17%
Negative | Utilize an existing spreadsheet to create a new one (contradicting the pre-planning principle) | 39% | 27% | 23%
Negative | Input data directly into spreadsheets (contradicting the pre-planning principle) | 57% | 60% | 49%

According to the Mann-Whitney test, Israeli subjects devote significantly more time (sig<0.000) to checking spreadsheets than SERP subjects. However, the methods used to do so are less effective in Israel. Table 3 indicates that the rather ineffective method of using common sense is actually the only mechanism utilized by most subjects to check spreadsheets: 79% of the Israeli survey subjects utilize this method, as compared to 67% of the SERP subjects (a significant difference (sig=0.001) was found between the studies according to the Chi-Square test). One of the more effective methods is that of checking formulas one after the other. While this is one of the most common methods among SERP subjects (46%), only 21% of the Israeli subjects utilize it (the difference (sig<0.000) was found to be significant according to the Chi-Square test). Additionally, both surveys showed that little use is made of tools that are considered to be relatively effective for checking spreadsheets (according to SERP researchers): "Formula auditing toolbar", "Display all formulas" and "Error checking option".

Table 3. Behavior patterns related to checking and monitoring spreadsheets

Positive/negative | Behavior pattern in checking and monitoring spreadsheets | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

Positive | Percentage of time devoted to checking and monitoring is 20%-40% | 26% | 24% | 5%
Negative | Percentage of time devoted to checking and monitoring is 0% | 1% | 5% | 4%
Positive | Check formulas one after the other | 16% | 21% | 46%
Negative | Use common sense | 89% | 79% | 67%
Negative | Do not check at all | 0% | 9% | 17%
Positive | Formula auditing toolbar | 27% | 16% | 28%
Positive | Display all formulas | 13% | 14% | 18%
Positive | Error checking option | 4% | 5% | 10%
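As an aside, the Chi-Square comparisons of proportions reported above can be reproduced from a 2x2 contingency table of counts. The counts below are hypothetical reconstructions from the reported percentages (79% vs. 67% using common sense) and the approximate sample sizes (about 200 Israeli and 1,600 SERP subjects), not the study's raw data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Israel, SERP; columns: [use common sense, do not use common sense]
israel = [158, 42]     # ~79% of ~200 subjects
serp = [1072, 528]     # ~67% of ~1600 subjects
chi2, p, dof, expected = chi2_contingency(np.array([israel, serp]))
print(round(chi2, 2), p)  # a small p-value indicates a significant difference
```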

Table 1 demonstrates that sharing is very common among the Israeli subjects. Thus, one would expect Israeli subjects to be more compliant with respect to protection, version control, and appropriate change management procedures. However, Table 4 shows that in both surveys most users tend not to use protection at all. Likewise, negative behavior patterns such as the non-use of version control and the implementation of changes by the users themselves are evident in the Israeli survey, and are even more prevalent than in the SERP survey. The problem is even more serious among the financial professionals from the banking and insurance sector, 70% of whom stated that they themselves make changes to the spreadsheets (as opposed to 52% of all Israeli subjects and 26% of the SERP subjects). In contrast, SERP subjects stated that changes are usually made by the original developer (who is generally thought to have more appropriate knowledge and skills for the task).

Table 4. Behavior patterns related to sharing and making changes

Positive/negative | Behavior pattern in sharing and making changes to spreadsheets | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

Negative | Non-use of protection | 70% | 64% | 63%
Negative | Non-use of version control | 53% | 45% | 31%
Negative | Changes made by users | 70% | 52% | 26%

Table 5 shows that fewer Israeli subjects stated that their organization offers no spreadsheet-related training. Thus, only 11% of the financial professionals from the banking and insurance sector stated as such versus 32% of all Israeli subjects and 41% of the SERP subjects. On the other hand, there is more widespread utilization of spreadsheet experts who serve department users (24%) than among all Israeli subjects (15%) and among SERP subjects (5%). This form of training is highly recommended and SERP researchers noted that many users learn and improve their skills through informal help provided by their department “guru”.

Despite somewhat of an improvement in the training options offered by Israeli organizations to their employees, relative to other countries, there is greater unmet demand among Israeli employees for training. Among financial professionals from the banking and insurance sector, 64% stated that they would definitely participate in training if offered, as opposed to 48% of all Israeli subjects and only 20% of the SERP subjects.


Table 5. Organizational policy regarding spreadsheet training

Positive/negative | Organizational policy on employee spreadsheet training | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

Negative | No employee spreadsheet training | 11% | 32% | 41%
Positive | Spreadsheet expert ("Excel guru") allocated to users/developers | 24% | 15% | 5%
Negative | Desire to participate in spreadsheet training, if offered (indicating that existing training does not sufficiently meet employee needs) | 64% | 48% | 20%

Table 6 indicates that in both surveys, the percentage of subjects who stated that their organization has detailed written guidelines and protocols was minuscule (1%-3%). It is also apparent that, in most cases, the subjects stated that their organizations have no standards, or no written standards, in place at all. And if the organization does have standards of some form, nearly half of the employees do not know whether they are implemented. Only 6% of the subjects in either survey stated that such standards are always implemented.

Table 6. Organizational policy regarding standards

Positive/negative | Standards and policy regarding spreadsheets | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

Negative | No standards at all or no written standards | 86% | 88% | 89%
Positive | Existence of detailed written guidelines and protocols | 1% | 2% | 3%
Negative | Employees do not know if organizational standards/policy are implemented | 52% | 48% | 49%
Positive | Standards/policy are always implemented | 6% | 6% | 6%

Table 7 indicates that despite the importance of spreadsheets to the organization, more than 90% of the subjects in both surveys noted that spreadsheets do indeed pose some risk. In addition, most of the subjects in both surveys stated that their organizations do not implement a policy geared to neutralize such risk or that they were not aware of such a policy. Moreover, some 73% of the Israeli subjects noted that they were not aware of who is responsible for their organization’s spreadsheet risk management or that no one fulfilled that function (as opposed to 48% of the SERP subjects). Finally, most of the subjects in both surveys stated that their organization does not utilize software to monitor spreadsheets or that they were not aware if such software is being used.


Table 7. Organizational policy regarding risk management

Positive/negative | Risk management and risk neutralization policy | Israel – financial professionals from the banking and insurance sector | Israel – all subjects | SERP

– | The importance of spreadsheets in general to the organization is critical or very important | 94% | 82% | 70%
Negative | Spreadsheets pose a certain amount of risk | 100% | 93% | 91%
Negative | The organization does not implement a policy to neutralize the risk posed by spreadsheets, or the employee is not aware of such a policy | 80% | 84% | 82%
Negative | Employee does not know who is responsible for neutralizing spreadsheet risk, or no such function exists in the organization | 71% | 73% | 48%
Negative | Non-use of monitoring software for spreadsheets, or employee is not aware of such use | 86% | 91% | 97%

DISCUSSION AND CONCLUSIONS

The current study found that spreadsheets constitute an important tool for the subjects and a dominant portion of their work involves spreadsheets, although gaps exist in patterns of usage and recommended practice. In addition, gaps were found between recommended organizational policy and actual practice.

Analysis of these gaps indicates that the situation in Israel is even more alarming than in other countries. Furthermore, the situation is more serious among financial professionals from the banking and insurance sector.

Beyond determining that spreadsheets serve as a critical organizational information system and exposing the problem related to their use ("the spreadsheet compliance paradox"), the research suggests that organizations need to enforce proper standards and policies for the use of spreadsheets, if they are to control their critical spreadsheets effectively (whether for reasons of regulation enforcement or for reasons related to wanting essential processes and decisions involving spreadsheets to be based upon reliable information).

Computerized tools (such as ESM – Enterprise Spreadsheet Management) can help organizations achieve this goal. Such technology allows organizations to manage and monitor spreadsheet activity and enables the enforcement of organizational or regulatory rules in order to prevent material errors or acts of fraud (Gabbay, 2011).


REFERENCES

Baxter, R. (2008). Enterprise Spreadsheet Management: A Necessary Good. Retrieved from Proc. European Spreadsheet Risks Interest Group website: http://arxiv.org/abs/0801.3116

Gabbay, N. (2011). Analysis and assessment of the information value of the ESM system-Enterprise Spreadsheet Management: System for overall management of organization spreadsheets (Unpublished Master's thesis). Tel Aviv University.

McGill, T., & Klobas, J. (2005). The role of spreadsheet knowledge in user-developed application success. Decision Support Systems, 39, 355-369.

Panko, R. (2007). A Framework for Controlling Spreadsheets for Regulatory Compliance and Good Practice. COGS, Working Paper. Retrieved from the website: http://panko.shidler.hawaii.edu/SSR/Mypapers/SS-ComplianceControls-COGSWP2007-1.doc

Panko, R. (2006). Spreadsheets and Sarbanes-Oxley: Regulations, Risks, and Control Frameworks. Whitepaper. Retrieved from the website: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.101&rep=rep1&type=pdf

Panko, R. (2008a). Thinking is bad: Implications of Human Error Research for Spreadsheet Research and Practice. Retrieved from Proc. European Spreadsheet Risks Interest Group website: http://arxiv.org/abs/0801.3114

Panko, R. (2008b). What We Know About Spreadsheet Errors. Journal of End User Computing, Special issue on Scaling Up End User Development, 10(2), 15-21, Spring 1998; revised May 2008. Retrieved from the website: http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm

Powell, S., Baker, K., & Lawson, B. (2008). A critical review of the literature on spreadsheet errors. Decision Support Systems, 46, 128–138. http://mba.tuck.dartmouth.edu/spreadsheet/product_pubs_files/Literature.pdf

Powell, S., Baker, K., & Lawson, B. (2005). A Survey of MBA Spreadsheet Users. Retrieved from SERP (Spreadsheet Engineering Research Project) website: http://mba.tuck.dartmouth.edu/spreadsheet/product_pubs_files/SurveyPaper.doc

Powell, S., Baker, K., & Lawson, B. (2009a). Errors in operational spreadsheets. Journal of Organizational and End User Computing, 2(3), 24-36. http://mba.tuck.dartmouth.edu/spreadsheet/product_pubs_files/Errors.pdf

Powell, S., Baker, K., & Lawson, B. (2009b). Impact of Errors in Operational Spreadsheets, Decision Support Systems, 47, 126–132. http://mba.tuck.dartmouth.edu/spreadsheet/product_pubs_files/Impact.pdf


IT OPERATIONAL RISK EVENTS AS COBIT CONTROL FAILURES: A CONCEPTUALIZATION AND EMPIRICAL EXAMINATION

Michel Benaroch, Whitman School of Management, Syracuse University, [email protected]

Anna Chernobai, Whitman School of Management, Syracuse University, [email protected]

Keywords: IT operational risk event, IT governance, COBIT controls, strategic resource weaknesses.

EXTENDED ABSTRACT

In his highly discussed Harvard Business Review article, “IT Doesn’t Matter,” Carr writes (2003, p. 11):

The operational risks associated with IT are many – technical glitches, obsolescence, service outages, unreliable vendors or partners, security breaches, even terrorism – and some have become magnified as companies have moved from tightly controlled, proprietary systems to open, shared ones. Today, an IT disruption can paralyze a company’s ability to make its products, deliver its services, and connect with its customers, not to mention foul its reputation. Yet few companies have done a thorough job of identifying and tempering their vulnerabilities.

What Carr is talking about is IT operational risk, defined as any threat to the integrity, confidentiality, or availability (C, I, or A, for short) of data assets or of the IT assets that create, process, transport, and store data (Goldstein, Chernobai and Benaroch, 2011).

Goldstein et al. (2011) have examined empirically how firm performance is impacted by the materialization of IT operational risk, and specifically by the occurrence of C-, I- and A-events. Counter to much IT literature on cybercrime and information security breaches, they theorized and demonstrated empirically that, compared to C-events, A- and I-events impact firm performance more negatively. Their theoretical arguments rest on the notion that IT operational risk is the result of strategic IT resource weaknesses. Whereas the resource-based view (RBV) of the firm traditionally concerns the development of competitive advantage and value creation using inimitable resource strengths (Melville et al. 2004), the discussion of resource weaknesses concerns the erosion of competitive advantage and value destruction due to failures to attend to resources that are neither rare nor inimitable but still necessary to maintain competitive parity (West and DeCastro 2001; Arend 2004). Resource weaknesses may develop concurrently with resource strengths. For example, IT resources that endow a company with a strategic advantage but are poorly implemented concurrently raise the risk that their inadequacies would fail business operations. Like resource strengths, resource weaknesses are strategic if they are scarce (idiosyncratic or firm-specific) and costly (destroy value).

Since the resource weaknesses lens informs only the discussion of strategic concerns over IT operational risk (e.g., the balance between creating IT resource strengths and avoiding IT resource weaknesses), the present paper expands the discussion to the tactical IT management level. The latter is concerned with identifiable root causes of IT operational risk that can be managed and resolved. Toward this end, the paper takes three steps. First, it links the notion of IT resource weaknesses to IT governance, and frames IT operational risk events as the result of specific IT governance deficiencies. Second, it operationalizes the notion of IT governance deficiencies using a framework called COBiT (Control OBjectives for Information and related Technologies). Last, it reformulates and retests Goldstein et al.'s (2011) hypotheses and findings relative to the specific COBiT control weaknesses at the root of IT operational risk events. We elaborate on these steps below.

The IT Governance Institute defines IT governance as “the leadership, organizational structures and processes that ensure that the enterprise’s IT sustains and extends the organisation’s strategies and objectives” (ITGI, 2007, p. 5). Going beyond the traditional link of IT governance to strategic business-IT alignment and value creation using IT resource strengths (Luftman 2003), this definition adds a facet that maps well to the notion of resource weaknesses. Specifically, it also links IT governance to control over the execution of IT decisions and avoidance of IT risk or value destruction due to IT resource weaknesses (Luftman 2003). In fact, IT governance is also about the binding of IT controls into the wider enterprise-level system of internal controls.

Several IT-centered frameworks – including: ITIL, COBIT, and ITCG (Parent and Reich, 2009) – have been developed to supplement the COSO framework, the de facto standard for enterprise-level governance and risk management. COBiT is among the most prominent of these frameworks. It relies on a process model containing four domains: Plan and Organize (PO), Acquire and Implement (AI), Deliver and Support (DS), and Monitor and Evaluate (ME). Within each domain there are specific control processes (or best practices) for managing IT resources, and each process is associated with control objectives defining the elements to be considered by management for effective control of each IT process. In total, COBiT identifies 34 processes and over 350 control objectives.

We argue that at the root of different types of IT operational risk events are different IT governance deficiencies that can be linked to specific COBiT control weaknesses or failures. In particular, we focus on the reasons A- and I-events are more detrimental than C-events. Goldstein et al. (2011) explain that I- and A-events signal the presence of scarcer IT resource weaknesses, where scarcity means a lack of imitable solutions to resource weaknesses in the competitive environment (West and DeCastro, 2001). In other words, resource weaknesses are scarce when they are idiosyncratic or by-products of investments in idiosyncratic IT resource strengths. Greater scarcity of resource weaknesses means greater remediation costs and difficulties. Mapping these ideas to the domain of IT governance, we hypothesize that A- and I-events have at their root COBiT control weaknesses that are more challenging to overcome. The degree of challenge depends on the number of COBiT controls at the root of an event and on the degree of their inter-relatedness. The inter-relatedness of COBiT controls has been studied using network analysis, in order to identify "foundational" COBiT controls that companies ought to implement earlier (Singh 2010). Since every COBiT control can be an input to or an output of another COBiT control, network analysis reveals which COBiT controls are more "central", i.e., have more links with other COBiT controls. On this basis, more detrimental IT operational risk events would have at their root more "central" COBiT control weaknesses that contribute more to the weakening of IT governance. Hence, we hypothesize that, compared to C-events, A- and I-events have at their root more COBiT controls, and that those COBiT controls have a greater degree of inter-relatedness. Moreover, we postulate that the subsets of COBiT controls at the root of C-, I-, and A-events are mutually exclusive, and hypothesize that COBiT controls for C-events fall into the DS and not the AI domain, controls for I-events fall into the AI and not the DS domain, and controls for A-events are different and fall into both the AI and DS domains.
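
To illustrate the kind of network analysis referred to above (Singh 2010), the following Python sketch computes a simple centrality measure over an input/output graph of COBiT controls. The edges here are invented for illustration, and the use of the networkx library is our assumption, not part of the cited study:

import networkx as nx

# Hypothetical edges: u -> v means an output of control u is an input to control v.
links = [("PO9", "AI6"), ("AI2", "AI6"), ("AI2", "DS4"),
         ("AI6", "DS5"), ("PO9", "DS5"), ("DS5", "ME1")]
g = nx.DiGraph(links)

# Controls linked with more other controls are more "central"; weaknesses
# in them should weaken IT governance more broadly.
for control, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(control, round(score, 2))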

We test these hypotheses using a sample of 208 risk events that occurred from 1985 to 2010 in US financial services firms. The events come from FIRST, a commercial operational risk events database containing a detailed narrative of each event. The events were classified as C, I or A by the authors, and each event was independently mapped to the COBiT control processes at its root by one author and two MBA students with over five years of experience with COBiT. The inter-rater reliability on both classifications was close to 1.0. The sample contained 83 C-events, 95 I-events, and 30 A-events. Of the 208 events, 118 and 81 were mapped to COBiT control processes in the AI and DS domains, respectively. We test our hypotheses by running univariate tests of the average number of COBiT controls at the root of each type of event, as well as by running three separate logistic regressions against the COBiT controls at the root of C-, I- and A-events. The results are statistically significant and strongly support our hypotheses.
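
A minimal sketch of the kind of logistic regression described above, assuming event-level data with one indicator column per root-cause COBiT control; the file name, column names, and the use of pandas/statsmodels are our assumptions, not the authors' actual setup:

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per risk event, columns event_type plus
# 0/1 indicators AI1..AI7, DS1..DS13 for root-cause COBiT controls.
df = pd.read_csv("risk_events.csv")
controls = [c for c in df.columns if c.startswith(("AI", "DS"))]

# One logit per event type, e.g., A-events versus all others, regressed
# on the profile of COBiT controls identified at the events' roots.
y = (df["event_type"] == "A").astype(int)
model = sm.Logit(y, sm.add_constant(df[controls])).fit(disp=False)
print(model.summary())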

In summary, this study's contribution is twofold. It is the first to link the notion of IT operational risk to IT governance deficiencies, and specifically to COBiT control failures. This opens the door to follow-up research based on COBiT, a framework widely used in practice but little studied in academic circles. More importantly, this study provides empirical evidence on the importance of specific groups of COBiT controls in terms of their contribution to the occurrence of C-, I-, and A-events. With limited resources, IT organizations cannot focus with equal vigor on all 312 control best practices in COBiT. Although some practitioners prescribe various implementation paths through these best practices, our study is the first to identify empirically small subsets of these practices that have the potential to create the biggest performance improvement, at least from an IT operational risk perspective.

REFERENCES

Arend, R. J. (2004). The definition of strategic liabilities, and their impact on performance. Journal of Management Studies, 41(6), 1003-1027.

Carr, N.G. (2003). IT Doesn't Matter. Harvard Business Review.

Goldstein, J., Chernobai, A., & Benaroch, M. (2011). Event Study Analysis of the Economic Impact of IT Operational Risk and its Subcategories. Journal of the Association for Information Systems, 12(9), 606-631.

IT Governance Institute (ITGI) (2007). COBIT 4.1 framework. Rolling Meadows, IL: ISACA.

Luftman, J.N. (2003). Managing the IT Resource. New York, NY: Prentice Hall.

Melville, N., Kraemer, K., & Gurbaxani, V. (2004). Information technology and organizational performance: An integrative model of IT business value. MIS Quarterly, 28(2), 283-322.

Parent, M., & Reich, B.H. (2009). Governing Information Technology Risk. California Management Review, 51(3).

West, G. P., & DeCastro, J. (2001). The Achilles heel of firm strategy: Resource weaknesses and distinctive inadequacies. Journal of Management Studies, 38(3), 417-442.


THE ICOP FRAMEWORK FOR BUSINESS PROCESS MODEL MATCHING

Matthias Weidlich
Technion–Israel Institute of Technology

[email protected]

Keywords: Process Modelling, Model Analysis, Change Management

INTRODUCTION

In order to identify operational commonalities and differences, organisations compare business processes. Such comparison is driven, for instance, by a merger of organisational units or by standardization efforts that require assessing the extent to which operations conform to company-wide or industry-wide standards. Once business processes are documented by process models, process model matching has evolved as a means to support the comparative analysis of operations. Here, matching refers to the identification of correspondences between the activities of two process models. Tools supporting the matching step are called matchers. In this abstract, we present the ICoP framework, which supports the development of such matchers.

CHALLENGES IN BUSINESS PROCESS MODEL MATCHING

A major challenge in matching business process models is that those models rarely use the same level of detail. Also, variability with respect to the textual description of functionality is commonly observed. For illustration, consider a business process model that comprises an activity `Check Invoice'. Another, related model may split up this activity into several separate steps, e.g., `Verify Customer Data', `Decide on Correctness of Data', and `Approve Invoice'. Those differences relate to refinement and meronymy. Further differences stem from synonymy and homonymy. That is, one process model may describe an activity as `Check Invoice', whereas another model may refer to this activity with the label `Verify Bill'. Organisations rarely align their documentation of business processes. Consequently, heterogeneity in the representation and description of operations is likely to be observed in practice and hinders any comparative analysis (Dijkman, 2008). Differences related to refinement and meronymy impose particular challenges for the identification of correspondences. In contrast to a setting with only 1:1 correspondences, it is not feasible to analyse the whole set of potential correspondences due to the implied combinatorial problem. Consider the possible correspondences between two process models with n and m activities. Then, the number of potential matches is given by

$\sum_{x=1}^{n} \binom{n}{x} \cdot \sum_{y=1}^{m} \binom{m}{y} = (2^n - 1)(2^m - 1)$

with $\binom{n}{x}$ as the binomial coefficient that defines the number of x-element subsets of an n-element set. As such, the number of possible combinations makes exploration of the whole set of potential correspondences infeasible and calls for heuristics that find suitable candidate correspondences.
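
To make the growth concrete, the count can be computed directly from the formula above; a minimal Python sketch (the function name is ours):

from math import comb

def potential_correspondences(n: int, m: int) -> int:
    # Every non-empty subset of one model's activities may correspond
    # to every non-empty subset of the other's.
    return (2 ** n - 1) * (2 ** m - 1)

# Equals the double sum of binomial coefficients:
assert potential_correspondences(5, 4) == (
    sum(comb(5, x) for x in range(1, 6)) * sum(comb(4, y) for y in range(1, 5)))

print(potential_correspondences(20, 20))  # about 1.1e12 candidates already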

The problem of matching business process models is closely related to the matching of data schemas or ontologies (Rahm & Bernstein, 2001; Euzenat & Shvaiko, 2007). For several decades, work in this research area has combined structural analysis with natural language processing to identify correspondences between the entities of data schemas or ontologies. However, those works show a predominant focus on 1:1 matches, such that `1:n and n:m mappings [..] are currently hardly treated at all' (Rahm & Bernstein, 2001, p. 339). Further, the few existing exceptions cannot be applied to process model matching since they have been tailored to data models and, e.g., exploit data instances for matching.

THE ICOP FRAMEWORK

To solve the problem of matching process models, and to cope with complex correspondences in particular, we presented the ICoP framework (Weidlich, Dijkman, & Mendling, 2010). The framework proposes an architecture and a set of re-usable components for assembling matchers.

Figure 1: The ICoP architecture.

The overall ICoP architecture is illustrated in Figure 1. It defines a multi-step approach to cope with the combinatorial challenges outlined earlier. First, given two process models, searchers extract potential correspondences. The search heuristics implemented within the framework analyse the labels of activities (e.g., by vector space scoring) and apply different heuristics to group activities (e.g., based on fragments obtained by applying graph decomposition to the process model). Second, the scored potential correspondences are conveyed to boosters, which aggregate correspondences and adapt their scores. For instance, correspondences that subsume other identified correspondences receive an increased matching score. Third, a selector builds up the actual set of correspondences from the set of candidates by selecting the best candidates that are non-overlapping. This selection is guided in two ways: by the scores of the candidates and by an evaluation score. The latter is derived from an evaluator, which assigns a single score to a set of selected correspondences; this score may be derived using knowledge about the original process models. The selection of correspondences proceeds iteratively: in each step, the selector selects a set of correspondences and the evaluator assigns a score to this set, which the selector then uses to modify the set of correspondences. Upon completion of the selection procedure, the final set of correspondences between the activities of the process models is produced.
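
To make the interplay of the four component types concrete, the control flow can be rendered schematically in Python. The interfaces (search, boost, select, refine, evaluate) are our simplification for illustration, not ICoP's actual API:

def match(model1, model2, searchers, boosters, selector, evaluator):
    # 1. Searchers propose scored candidate correspondences.
    candidates = [c for s in searchers for c in s.search(model1, model2)]
    # 2. Boosters aggregate candidates and adapt their scores, e.g.,
    #    raising the score of correspondences that subsume others.
    for b in boosters:
        candidates = b.boost(candidates)
    # 3.+4. Selector and evaluator interact iteratively: the selector picks
    #    the best non-overlapping candidates, the evaluator scores the set,
    #    and that score guides the next modification of the selection.
    selection = selector.select(candidates)
    while True:
        score = evaluator.evaluate(model1, model2, selection)
        revised = selector.refine(candidates, selection, score)
        if revised is None:  # no further improvement found
            return selection
        selection = revised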

CONCLUSION

In this abstract, we gave an overview of the ICoP framework for business process model matching. It provides a flexible and adaptable architecture for the implementation of matchers, along with a set of predefined components for assembling them. Experimental results on the evaluation of the framework can be found in Weidlich et al. (2010). They indicate that the framework is able to identify a significant number of complex correspondences between business process models. Still, those experiments also show that a certain similarity of labels is needed to achieve good results. Also, in Weidlich et al. (2010), we highlighted how existing matching techniques, e.g., those based on the graph edit distance as proposed by Dijkman, Dumas, and García-Bañuelos (2009), may be incorporated into the framework.


In future work, we aim at extending the set of matching components available within the ICoP framework. Part-of-speech tagging has recently been applied successfully to the labels of process models (cf. Leopold, Smirnov, & Mendling, 2011), which suggests leveraging this information for process model matching as well. The precision of generating candidate correspondences may benefit from an object- or action-aware handling of activity labels. In addition, external knowledge such as WordNet shall be incorporated into the matching procedure. The benefits of taking external knowledge into account have been demonstrated for matching data schemas (Dhamankar, Lee, Doan, Halevy, & Domingos, 2004). Note that external knowledge for the domain of business processes is available in the form of reference models such as the MIT Process Handbook.
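
As a toy illustration of using WordNet as external knowledge (our sketch, not a component of ICoP), the following scores label similarity as the best path similarity over all word-sense pairs; it assumes NLTK with the WordNet corpus installed:

from nltk.corpus import wordnet as wn

def label_similarity(label1: str, label2: str) -> float:
    # Best WordNet path similarity over all synset pairs of the labels' words;
    # path_similarity may return None for cross-POS pairs, hence "or 0.0".
    scores = [s1.path_similarity(s2) or 0.0
              for w1 in label1.lower().split() for w2 in label2.lower().split()
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

print(label_similarity("Check Invoice", "Verify Bill"))  # > 0 despite disjoint words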

REFERENCES

Dhamankar, R., Lee, Y., Doan, A., Halevy, A., & Domingos, P. (2004). iMAP: Discovering complex semantic matches between database schemas. SIGMOD 2004, 383-394.

Dijkman, R. (2008). Diagnosing differences between business process models. BPM 2008, 261-277.

Dijkman, R., Dumas, M., & García-Bañuelos, L. (2009). Graph Matching Algorithms for Business Process Model Similarity Search. BPM 2009, 48-63.

Euzenat, J., & Shvaiko, P. (2007). Ontology matching. Springer-Verlag.

Leopold, H., Smirnov, S., & Mendling, J. (2011). Recognising Activity Labeling Styles in Business Process Models. EMISAIJ, 6(1), 16-29.

Rahm, E., & Bernstein, P.A. (2001). A survey of approaches to automatic schema matching. VLDB Journal, 10(4), 334-350.

Weidlich, M., Dijkman, R., & Mendling, J. (2010). The ICoP Framework: Identification of Correspondences between Process Models. CAiSE 2010, 483-498.


REPORT OF THE CHE COMMITTEE ON PROGRAMS OF STUDY IN INFORMATION SYSTEMS IN THE ISRAELI ACADEMY

Prof. Peretz Shoval
Department of Information Systems Engineering, Ben-Gurion University, Beer-Sheva
[email protected]

http://ilais.openu.ac.il/wp/wp-content/uploads/2012/05/che-is-report-2011-hebrew.pdf

Keywords: programs of study, CHE report, information systems, academia.

ABSTRACT

Due to the multiplicity and variability of programs of study in Information Systems (IS) in Israeli universities and colleges, the Council for Higher Education (CHE) appointed a professional committee to examine the programs of study at the undergraduate level, including 4-year programs in Information Systems Engineering (ISE) and 3-year programs in IS. The committee was instructed to create clear structures and definitions for such programs, to be submitted as recommendations to the CHE subcommittee on Technology and Engineering and to the CHE subcommittee on Universities, which would discuss them and submit their recommendations to the Council for final approval. The members of the professional committee were Prof. Peretz Shoval of Ben-Gurion University (chair), Prof. Dov Te'eni of Tel-Aviv University, and Prof. Judit Bar-Ilan of Bar-Ilan University. The committee worked for a few months and submitted its report at the beginning of 2011. Following its presentation to the CHE subcommittees and some revisions, the final report was approved by the Council. The recommendations of the report were then distributed by the CHE to all Israeli universities and colleges, which are now required to follow them. The report is published on the ILAIS website (in Hebrew). Here is its table of contents:

1. Introduction
2. Framework and boundaries of the committee's work
3. The IS academic discipline
4. Definition of framework programs of study in IS
5. Categories of courses:
   • Information systems and computers
   • Management and organization
   • Mathematics and statistics
   • Engineering
   • General courses
6. Framework of 4-year programs in ISE
   a. Objectives
   b. Structure of B.Sc. programs in ISE
   c. Concentration in ISE
7. Framework of 3-year programs in IS
   a. Objectives
   b. Structure of B.A. programs in IS
   c. Structure of B.Sc. programs in IS
   d. Dual program in IS
   e. Concentration in IS