


Establishing a Foundation for Collaborative Scenario Elicitation Ann M. Hickey Douglas L. Dean Jay F. Nunamaker, Jr. University of Arizona

Acknowledgement

This is a substantially revised version of a paper that was published in the 1999 Hawaii International Conference on System Sciences. The authors wish to thank the reviewers of this paper for their very helpful suggestions. We also wish to thank the sponsors of this research, the DoD Environmental Security Corporate Information Management Program Management Office.

Abstract

Eliciting and integrating requirements from large groups of diverse users remains a major challenge for the software engineering community. Scenarios are becoming recognized as valuable means of identifying actions taken by users when executing a business process and interacting with an information system, and therefore have great potential for addressing requirements elicitation problems. A review of the scenario literature indicates that, although there is widespread agreement on the usefulness of scenarios, there are many unanswered questions about how to elicit scenario definitions from individual users and user groups efficiently.

This research examines how increasing the structure of scenario definitions affects scenario quality and the efficiency of scenario definition by individual users. During a laboratory experiment, subjects defined scenarios using a general-purpose GSS, GroupSystems Group Outliner, with one of three textual scenario formats that ranged from unstructured to very structured. Scenario quality and the efficiency of scenario definition by users were compared across the formats. Results highlighted the efficiency of the unstructured format but revealed that all formats produced incomplete scenario definitions. Recommendations are made for an iterative collaborative scenario process and a special-purpose GSS scenario tool that may overcome some of these problems.

ACM Categories: D.2.1, H.4.1, H.5.2, H.5.3

Keywords: information requirements determination, scenarios, group support systems, participative design, IS development methods and tools

Introduction

Several decades of MIS research have clearly shown both the importance and the difficulty of developing complete and accurate information systems requirements. Many of the seemingly endless reports of spectacular system failures have been attributed to requirements problems (Standish, 1995). Other studies have shown that problems are 200 times more expensive to correct during testing than during the requirements phase (Boehm, 1981). Most researchers and practitioners agree that user involvement is critical to the success of the requirements process. A special challenge is determining how to involve groups of users in that process effectively.

92 The DATA BASE for Advances in Information Systems - Summer-Fall 1999 (Vol. 30, No. 3,4)

Traditional requirements elicitation techniques depend on user interviews or group meetings as the primary mechanisms for user involvement. However, these techniques are notoriously inefficient, especially when dealing with large, highly diverse user groups. This is not a new problem. The challenge of gathering accurate requirements, the inefficiencies of user interviews, and the difficulty of achieving effective group meetings were early driving forces in group support systems (GSS) research.

GSS have been highly successful in improving group meeting productivity and outcomes in real-world settings (Nunamaker, Briggs, Mittleman, Vogel, & Balthazard, 1996-97). Ongoing research on the use of GSS for collaborative requirements elicitation has led to the development of special-purpose GSS tools for activity modeling (Dean, Lee, Orwig, & Vogel, 1995) and data modeling (Lee & Dean, 1997) and an overarching Collaborative Software Engineering Methodology (CSEM) (Dean, Lee, Pendergast, Hickey, & Nunamaker, 1997-98).

However, there is still a need for additional collaborative tools to support such key aspects of the requirements elicitation process as developing a dynamic picture of the business processes that a system must support, i.e., the system's behavioral requirements. Recent research has suggested that scenarios may be an effective mechanism for eliciting such requirements (Carroll, 1995; Rolland et al., 1998; Weidenhaupt, Pohl, Jarke, & Haumer, 1998). However, although there is widespread agreement on the usefulness of scenarios, there are many unanswered questions regarding how to efficiently elicit high-quality scenario definitions from individual users and user groups.

The current laboratory experiment was designed to examine the extent to which increasing the structure of scenario definitions affects scenario quality and scenario definition efficiency and to establish a foundation for collaborative scenario elicitation by exploring the first step in that process, individual user definition of scenarios.

During the experiment, subjects individually defined contextual scenarios using a general-purpose GSS, GroupSystems Group Outliner. Overall scenario quality and scenario definition efficiency were evaluated and then compared for three different textual scenario formats. The experimental results were further analyzed to assess the implications for a collaborative scenario definition process and to identify requirements for a special-purpose GSS scenario tool.

The next section provides additional background and motivation for this research. The following section summarizes the research methodology used and defines measures for scenario quality and scenario definition efficiency. Experimental results are presented next. Implications of the results are discussed in the following section. Research contributions and future research plans are summarized in the final section.

Background

Scenarios

Scenarios are narrative descriptions of the sequence of actions that a user engages in when performing a specific task. They are being increasingly used in both the human-computer interaction and software engineering communities throughout all phases of development (Carroll, 1995), including requirements elicitation (Benner, Feather, Johnson, & Zorman, 1993; Hsia et al., 1994; Leite et al., 1997; Potts, Takahashi, & Anton, 1994; Weidenhaupt et al., 1998). Because scenarios may be used for a variety of purposes, several different scenario types and formats are commonly used (Rolland et al., 1998).

The software engineering community is probably most familiar with interaction scenarios, more often referred to as "use cases" (Jacobson, Christerson, Jonsson, & Overgaard, 1992) or "system use cases." As graphically shown in Figure 1, interaction scenarios focus on how users interact with a system to accomplish a specific system-supported transaction (Kuutti, 1995). Software engineers also use a second scenario type referred to as "internal scenarios." Internal scenarios describe actions executed by software or hardware components within the system to perform a specific system task. For example, an internal scenario could be used to describe how an operating system memory management routine interacts with a hardware register to transfer a section of memory.

Figure 1. Types of Scenarios

A third type of scenario exists, one having the purpose of describing a broader business perspective. Such scenarios are referred to as "contextual" (Pohl & Haumer, 1997) or "rich" (Kuutti, 1995) scenarios. Contextual scenarios are designed to capture the larger organizational context by describing all actions in which the user engages to accomplish a business task, as well as information about the user's business goals, resources required, and the social setting (Kuutti, 1995).

When the business task is supported by an information system, the actions included in the contextual scenario definition will be a combination of both the user's interactions with the system and non-system interactions with other people, organizations, or objects. Contextual scenarios are especially useful during requirements elicitation when user groups seek to develop common definitions of business processes, assess opportunities for process improvement, and evaluate the business impacts of alternative system solutions. Pohl and Haumer (1997) surveyed contextual scenario modeling techniques to develop data models defining the information required at the scenario and action levels.

Scenario formats also vary widely. Textual formats are the most common, ranging from unstructured natural language (e.g., Carroll, 1995) to structured natural language such as tabular scripts (Potts et al., 1994) to formal grammars (Leite et al., 1997). Various modeling techniques such as scenario trees (Hsia et al., 1994), other graphical representations (e.g., Benner et al., 1993), or prototypes may also be used with or without textual descriptions.

A survey of current scenario practices in industry (Weidenhaupt et al., 1998) reports that both contextual and interaction scenarios are being used and are primarily documented in a textual format using a word processor. A description of scenario usage in CSEM is provided next.

Scenarios in CSEM

The Collaborative Software Engineering Methodology (CSEM) is defined in Dean et al. (1997-98). While that article describes the role of scenarios in CSEM, specific guidelines on how scenarios should be documented are still being developed through research efforts such as those documented in this paper.

The primary purpose of scenario usage in CSEM is to support collaborative requirements elicitation. The goal is to use scenarios to support an evolutionary process that starts by capturing contextual scenarios to describe business process behavior, then evolves the contextual scenarios into specific interaction scenarios to describe desired system behavior, and finally transforms the interaction scenarios into behavioral requirements specifications. This is accomplished through an iterative process of individual user scenario definition, group refinement of scenarios, development team analysis and implementation, and group review and feedback until behavioral requirements are agreed upon. The scenario process is continuing to evolve based on user experiences like those described next.

Preliminary User Experiences

Early CSEM scenario sessions with Department of Defense (DOD) user groups were encouraging (Hickey & Lee, 1998). A general-purpose GroupSystems tool, Group Outliner, was used to capture scenarios because it allowed structuring of the definition process through decomposition of business activities into scenarios and provided word processor-like capabilities for describing the scenarios. Facilitators observed that users found Group Outliner easy to use and were able to define scenarios quickly for a wide range of business activities when given general instructions regarding scenario content.

Because of initial variations in the type and level of detail of information included in the scenario definitions, the authors developed more detailed instructions, based on Pohl and Haumer's (1997) contextual scenario data models, on what information should be captured as part of a scenario definition. A complete contextual scenario definition should include information about the scenario as a whole (e.g., viewpoint, business goal, preconditions, and successful results) and detailed definitions of each action required to accomplish the business task. Action definitions identify the action (name and brief description), who is responsible for the action (i.e., the actor), resources required (e.g., data requirements), exceptions which may occur, and alternative ways of performing the action.
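The two-level content just described (scenario-level attributes plus per-action detail) can be sketched as a simple data structure. This is an illustrative reading of Pohl and Haumer's scenario and action levels, not the authors' actual schema; all class and field names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    """One step in a contextual scenario (action-level information)."""
    name: str                                              # brief action name
    description: str                                       # what is done
    actor: str                                             # who is responsible
    resources: List[str] = field(default_factory=list)     # e.g., data requirements
    exceptions: List[str] = field(default_factory=list)    # errors that may occur
    alternatives: List[str] = field(default_factory=list)  # other ways to do the step

@dataclass
class ContextualScenario:
    """Scenario-level information plus the ordered list of actions."""
    viewpoint: str
    business_goal: str
    preconditions: List[str]
    successful_results: List[str]
    actions: List[Action] = field(default_factory=list)

# Hypothetical example drawn from the paper's course-registration task
scenario = ContextualScenario(
    viewpoint="Student",
    business_goal="Register for spring semester classes",
    preconditions=["Student is admitted to the university"],
    successful_results=["Student is enrolled in desired classes"],
)
scenario.actions.append(Action(
    name="Student Starts Internet Browser",
    description="Student opens a web browser on an authorized computer",
    actor="Student",
))
print(len(scenario.actions))  # → 1
```

A structure like this makes the later completeness measures mechanical: each unfilled field is a gap that can be counted.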

However, despite specific definition of desired scenario content, the format of the scenarios still varied greatly. For example, some scenarios were written as free-format textual definitions, while others were clearly broken down into numbered steps. Because the impacts of different degrees of scenario definition structure on scenario quality and definition efficiency were unknown, it was unclear which approach was better. Such questions led to the current research.

Research Approach

Research Question and Propositions

To respond to the need for increased understanding of the impact of applying structure to scenario definitions, the current research focuses on initial individual user definition of contextual scenarios using the most common textual scenario definition formats. The scenario literature previously summarized showed the wide variety of formats used to describe scenarios, the three most common of which are (1) unstructured free-format narrative text, (2) separately numbered actions with free-format action definitions, and (3) more structured formats such as tables with numbered rows for each scenario action and separate columns for action information (e.g., actor, action name, description).

While all three formats include the same scenario and action information, the degree of structure increases from the unstructured format 1 to the structured format 2 to the even more structured format 3. However, little information is provided as to the differential pros and cons of these formats or the impacts on scenario quality and scenario definition efficiency of increasing the degree of structure. Therefore, the primary research question is:

How will increasing the structure of scenario definitions affect scenario quality and the efficiency of scenario definition by individual users?

Because of the lack of research in this area, the analysis was primarily exploratory in nature. It seems that increasing the structure of scenario definitions by dividing them into numbered actions (as is the case in format 2) may increase the user's emphasis on separately defining each action in a scenario. This may increase the quality of the scenario by increasing the probability that all actions will be included in the scenario definition. Increasing the structure even more by separating the list of actions from the definition of each action (as in format 3) may even further increase the focus on the actions by allowing actions to be viewed separately from additional descriptive information about the actions. Being able to hide descriptive information about actions behind the action itself may make it easier for a user to focus on defining a complete list of actions. Consequently, we propose that:

Proposition 1: Increasing the degree of scenario structure to increase the emphasis on individual actions will increase the likelihood that more actions will be included in the scenario definition, thereby increasing the quality of the scenario definition.

Increasing the focus on individually differentiated actions within a scenario may make it easier to provide definitions for those actions. For example, in formats 2 and 3, once the actions are identified, the user could easily review each of the separately defined actions and add the necessary definitions that directly relate to those actions. Conversely, adding this information to an unstructured paragraph (format 1) requires a much more extensive process (e.g., searching for each individual action, determining where to add the information into the narration, editing the paragraph to add the information, and possibly rewriting some sentences to maintain readability of the modified section). Therefore, we propose that:

Proposition 2: Increasing the degree of scenario structure to differentiate individual actions will increase the quality of the action definitions.

However, structure may make it harder to include non-sequential information such as exceptions or alternative courses of action, because it may be harder for the user to determine where to include this information in a strictly sequential listing such as that imposed by the structured formats (formats 2 and 3). Therefore, we propose that:

Proposition 3: Increasing the degree of scenario structure to identify individual actions sequentially will decrease the quality of non-sequential action information (e.g., exceptions and alternatives).

Finally, the potential impact on efficiency is somewhat clearer. Human-computer interaction studies have shown that the time required to perform a task is a function of the amount of thinking time required for each 'chunk' of information plus the time required for each mouse movement and keystroke (Card & Moran, 1995; Card, Moran, & Newell, 1983). Because users will be constrained by, and have to think more about, the structure as well as the content of scenarios and will be required to perform more keystrokes to enter actions (formats 2 and 3) and action definitions (format 3) as separate items, we propose that:

Proposition 4: Increasing the degree of scenario structure will decrease the efficiency of scenario definition by users.
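The keystroke-level reasoning behind Proposition 4 can be made concrete with a rough back-of-the-envelope estimate. The operator times below are illustrative values in the spirit of Card, Moran, and Newell's keystroke-level model, not measurements from this study, and the chunk and click counts are hypothetical.

```python
# Illustrative keystroke-level estimate: entering one action as free text
# (format 1) versus as a separate outline node plus hidden comment (format 3).
M = 1.35   # mental preparation per chunk, seconds (illustrative value)
K = 0.20   # one keystroke, seconds (illustrative value)
P = 1.10   # one mouse point-and-click, seconds (illustrative value)

def entry_time(chunks, keystrokes, clicks):
    """Total time = thinking time per chunk + time per keystroke and mouse action."""
    return chunks * M + keystrokes * K + clicks * P

# Same 80 characters of content in both cases; the structured format adds
# extra mental chunks (structure decisions) and clicks (create node, open comment).
free_text  = entry_time(chunks=2, keystrokes=80, clicks=0)
structured = entry_time(chunks=4, keystrokes=80, clicks=3)
print(round(structured - free_text, 2))  # → 6.0 extra seconds per action
```

Even modest per-action overheads of this kind accumulate over a 45-minute session, which is the intuition behind expecting less total output from the more structured formats.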

To summarize, the four propositions address two evaluation criteria (quality and efficiency) at two levels of analysis (scenario and action). The first proposition looks at overall scenario definition quality. The next two propositions drill down to the action level to take a more detailed look at individual action definition quality. The final proposition then returns to the scenario level to assess overall scenario definition efficiency. The research constructs and specific hypotheses developed to assess the impacts of the degree of scenario structure and analyze these propositions are described next.

Research Constructs and Hypotheses

The scenario literature is essentially silent on how to measure scenario quality. Since the primary purpose of using scenarios is to aid the requirements definition process, the requirements literature was reviewed to determine how quality has been measured. Several measures for the quality of a software requirements specification (SRS) (e.g., see summary in Davis et al., 1997) were identified, the most commonly mentioned of which is completeness.

Davis et al. (1993) state that an SRS is complete if "everything that the software is supposed to do is included in the SRS," which is generally measured as the percent of total requirements included in the SRS [(number of requirements in SRS)/(total number of requirements)]. Comparably, scenario completeness can be measured by assessing whether all actions have been included in the scenario, calculated as the percent of total actions included in the scenario definition [(number of actions)/(total number of actions)]. Using this information, proposition 1 can be reformulated to hypothesize that:

H1: Increasing the degree of scenario structure to increase the emphasis on individual actions will increase the percent of total actions included in the scenario definition.
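As a sketch, the scenario-level completeness measure behind H1 is simply the fraction of reference actions that appear in a subject's definition. The reference list and matching rule below are hypothetical; in the study, trained raters judged which actions a definition contained.

```python
def scenario_completeness(defined_actions, reference_actions):
    """Percent of total (reference) actions included in a scenario definition:
    100 * (number of actions found) / (total number of actions)."""
    found = sum(1 for a in reference_actions if a in defined_actions)
    return 100.0 * found / len(reference_actions)

# Hypothetical reference scenario for "Register for spring semester classes"
reference = ["start browser", "open registration page", "log in",
             "select classes", "confirm registration"]
subject = ["start browser", "log in", "select classes"]
print(scenario_completeness(subject, reference))  # → 60.0
```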

Davis et al. (1993) evaluate the completeness of each individual requirement by measuring whether all information required to describe the requirement is provided. For scenarios, individual action completeness can be assessed in a similar manner. But in this case there are two types of descriptive information for actions: (1) information directly related to the action as part of the normal action sequence, and (2) information such as exceptions which may occur for an action or alternative ways of performing an action, which are non-sequential in nature. Therefore, to assess the first type of action completeness, the completeness for each type of sequential action information (e.g., actor, description) will be calculated and added together to determine the average sequential information per action. Based on proposition 2, we hypothesize that:

H2: Increasing the degree of scenario structure to differentiate individual actions will increase average sequential information per action.

To assess the second type of action completeness, the completeness for each type of non-sequential action information (e.g., exceptions, alternatives) will be calculated and added together to determine the average non-sequential information per action. We hypothesize using proposition 3 that:

H3: Increasing the degree of scenario structure to identify individual actions sequentially will decrease average non-sequential information per action.
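The two action-level measures behind H2 and H3 follow the same pattern: for each action, count which information types are present, then average over all actions. The information-type lists below mirror the paper's examples, but the dictionary representation and field names are assumptions for illustration.

```python
SEQUENTIAL = ["actor", "description", "resources", "reason"]  # sequential action info
NON_SEQUENTIAL = ["exceptions", "alternatives"]               # non-sequential info

def avg_info_per_action(actions, info_types):
    """Average number of completed information types per action:
    sum over actions of the per-action count, divided by the number of actions."""
    if not actions:
        return 0.0
    total = sum(sum(1 for t in info_types if action.get(t)) for action in actions)
    return total / len(actions)

# Hypothetical scored actions from one subject's scenario definition
actions = [
    {"actor": "Student", "description": "logs in", "exceptions": "invalid PIN"},
    {"actor": "System", "description": "displays schedule"},
]
print(avg_info_per_action(actions, SEQUENTIAL))      # → 2.0
print(avg_info_per_action(actions, NON_SEQUENTIAL))  # → 0.5
```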

Efficiency is traditionally measured as the quantity produced divided by the time or resources taken to produce that quantity. For scenarios, quantity has generally been measured as the total number of scenarios or the length of scenarios in lines or pages (e.g., Weidenhaupt et al., 1998). Since all subjects will have identical, but limited, time to define just a few scenarios, total length of scenario definitions will be used to measure efficiency. (Note: If subjects had worked different lengths of time, these totals would have been divided by each subject's time.) Based on proposition 4, we hypothesize that:

H4: Increasing the degree of scenario structure will decrease the total length of scenario definitions.

Experimental Design

The four research hypotheses were evaluated using results from a laboratory experiment. During the experiment, a convenience sample of undergraduate MIS students used GroupSystems Group Outliner to define contextual scenarios for their university's current course registration system using one of three scenario textual formats. GroupSystems Group Outliner is a general-purpose GSS and has been successfully used in early DOD scenario definition sessions. Group Outliner provides the same sort of functionality as word processors, the tool most commonly used for scenario definition in industry (Weidenhaupt et al., 1998). It also provides a flexible, easy-to-use interface which directly supports the information hiding and structuring needed in this experiment. Most importantly, it provides the collaborative support needed in the remaining collaborative steps of the scenario elicitation process. Therefore, Group Outliner was used during the experiment instead of a single-user word processor.

The experiment lasted approximately two hours and included a short Group Outliner training session, a practice exercise, definition of the course registration scenarios, and completion of a post-session questionnaire. A standard script was used to ensure that common instructions and time frames were used for each experimental session. Subjects spent 45 minutes defining up to four contextual scenarios, specifically describing all the actions students take to: (1) Register for spring semester classes, (2) Add a class after initial registration, (3) Drop a class, and (4) Change section for a class. They were told to concentrate on developing the best possible definition for the first scenario and that they did not need to complete all scenarios. If they finished the first scenario, they worked on the other scenarios. All subjects were given a handout which provided (1) a common description of what information should be included in their contextual scenario definitions (see Figure 2), (2) instructions on how to input the scenario definition for their specific treatment format, and (3) a sample scenario using their treatment format. For purposes of the experiment, the contextual scenario data model was simplified to focus on action-level information (see Figure 2).

In addition, all subjects, regardless of treatment, were provided with an initial outline listing scenario categories (e.g., UA Course Registration Scenarios) and specific scenarios (e.g., Student Registers for Spring Semester Classes) (see top portions of Figures 3-5) as nodes in the outline. The treatments varied based on what format subjects used to input the definitions for those scenarios.

To develop a scenario description, you must specifically identify each action that must be accomplished. For each action or step in the scenario, you must identify who performs that step (e.g., a person or a system) and what specific action is done. You should also identify any data/information that is needed to complete that step, the reason the step is done, any major error checking or exceptions that may occur, as well as alternative ways of accomplishing that step.

Figure 2. Scenario Experiment Instructions

1. Subjects assigned to the first treatment entered their scenario definitions as unstructured text into Group Outliner's comment window (see Figure 3). Thus, all actions, action descriptions, and other items related to the actions such as data/information needed and identification of the actor performing the action were entered in one or more paragraphs in the comment window corresponding to the scenario. Actions making up the scenario were not defined as separate nodes in the outline under the scenario name.

2. Subjects assigned to treatment 2 listed the scenario actions as numbered steps by adding the actions as individual outline nodes in Group Outliner (see Figure 4), indented underneath the node containing the scenario name. In this treatment, all information related to the action was included in the outline rather than being displayed in the comment window. Thus, there was no information hiding behind action names. Complete action names and definitions were directly entered as nodes in the outline itself. Group Outliner automatically generated step numbers for each of these action nodes.

3. Finally, subjects assigned to treatment 3 defined structured steps by entering structured action names (who + verb + object, e.g., Student Starts Internet Browser) as individual outline items indented under the parent sce- nario name. The information comprising the

Figure 3. Unstructured Scenario Format

98 The DATA BASE for Advances in Information Systems- Summer-Fall 1999 (Vol. 30, No. 3,4)

Page 8: Establishing a Foundation for Collaborative Scenario ...130.18.86.27/.../papers/HickeyEtAl1999_DBAIS3034_CollaborativeSc… · Scenario Elicitation Ann M. Hickey Douglas L. Dean Jay

- 1. Group Outliner Practice
- 2. UA Info Scenarios

2.1.1 The student starts whatever Internet Browser (e.g., Netscape Navigator or Internet Explorer) is available on a computer they are authorized to use which has World Wide Web access.

2.1.2 The system starts the browser and displays the browser's main screen.

2.1.3 The student tells the browser to go to the UA's main web site (www.arizona.edu).

2.1.4 The system then displays the UA's main web page.

2.1.5 The student selects the "Student Information" hypertext link from that page as the most likely location for grades based on its description.

2.1.6 The system displays the Student Information web page.

2.1.7 The student then selects the "Student Link" link, again based on its description.

2.1.8 Temporarily, the system displays a web page which asks the student to select Student Link Version 1 or Version 2, which are each briefly described.

2.1.9 Assuming the student selects Version 2 since the system says it has a nicer, simpler user interface, the system will then display the secure login screen which asks the student to enter his/her Student ID Number and Personal Identification Number (PIN) to ensure that students only access their own information.

2.1.10 The student enters the ID and PIN and then clicks the Login button.

2.1.11 The system then validates the ID and PIN and requests the student re-enter them if they are not valid. If the student does not enter a valid ID and PIN or encounters any error in the previous steps that he/she cannot fix, the student cannot look up grades using UA Info's web access from that computer. The student can try another computer or use an alternative method for checking grades (e.g., RSVP, check with professor, wait for grades in mail, or check at Registrar's Office).

2.1.12 If the ID and PIN were entered correctly, the system displays the main Student Link Screen.

2.1.13 The student then clicks on the icon for Grades.

Figure 4. Numbered Step Format

- 1. Group Outliner Practice
- 2. UA Info Scenarios
  - 2.1 Student Checks Semester Grades using UA Info
    - 2.1.1 Student Starts Internet Browser
    - 2.1.2 System Displays Browser's Main Screen
    - 2.1.3 Student Requests UA's Web Site
    - 2.1.4 System Displays UA's Web Page
    - 2.1.5 Student Selects "Student Information"
    - 2.1.6 System Displays Student Information Web Page
    - 2.1.7 Student Selects "Student Link"
    - 2.1.8 Student Selects Version 2 of Student Link

Figure 5. Structured Step Format



definition of the actions was entered as comments behind these action names, see Figure 5. Group Outliner automatically generated step numbers for each action node.

The three formats served as the treatment levels in a single factor randomized complete block design. Experimental session was used as the blocking factor to control for (1) possible differences in experimental conditions between sessions and (2) the non-random assignment of subjects to sessions. Within each session, subjects were randomly assigned (without replacement) to one of the three treatments, resulting in each treatment being assigned to 3-4 subjects per experimental session. Seven experimental sessions with a total of 23 observations per treatment were conducted. The total sample size of 69 exceeded the 63 conservatively estimated as needed to achieve a power of .80 for α=.05.
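The within-session assignment scheme described above, in which each session's subjects are split as evenly as possible across the three treatments, can be sketched as follows (the function name and treatment labels are illustrative, not taken from the paper):

```python
import random

TREATMENTS = ["unstructured", "numbered_steps", "structured_steps"]

def assign_block(subjects, rng):
    """Randomly assign one session's (block's) subjects to treatments
    without replacement: treatment slots are replicated just enough to
    cover the block and then shuffled, so per-treatment counts differ by
    at most one (e.g., 3-4 subjects per treatment in a 10-person session)."""
    reps = -(-len(subjects) // len(TREATMENTS))     # ceiling division
    slots = (TREATMENTS * reps)[:len(subjects)]
    rng.shuffle(slots)
    return dict(zip(subjects, slots))

# Example: one hypothetical 10-subject session
session = [f"subject_{i}" for i in range(10)]
assignment = assign_block(session, random.Random(42))
```

Repeating this per session, with an independent shuffle each time, yields the blocked design: every treatment appears in every session, and assignment within a session is random.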

Questionnaire Development

As stated above, each student completed a post-session questionnaire at the end of each experiment. The post-session questionnaire was designed to collect:

1. Subjects' assessments of the quality of their scenarios and process efficiency. The thirteen quality and six efficiency questions directly paralleled the proposed quality and efficiency constructs.

2. Subjects' perceptions of ease of use of the tool used to define scenarios (GroupSystems Group Outliner) to ensure that it did not have a negative impact on scenario definition. These seven questions were taken directly from Fred Davis' widely used perceived ease of use instrument (Davis, 1989).

3. Demographic data, so that homogeneity of subjects across treatments could be assessed.

4. Self-reports on motivation to ensure that there were no significant differences between treatments.

Analytical Techniques

Analysis of the experimental results was based on quantitative analysis of scenario quality and user scenario definition efficiency measures and post-session questionnaires plus qualitative assessment of the scenario definitions and the researcher's observations from the experimental sessions.

Analysis of scenario content focused on the first scenario since that was the only one defined by all subjects. To ensure accurate and consistent counts of the scenario content, a master spreadsheet was developed which consolidated all scenario information provided by the subjects for scenario one. Scenario information included actors, actions, descriptions, data requirements, exceptions, and alternatives based on the definition of a scenario provided in the experiment instructions (see Figure 2). Actions were decomposed to the lowest possible abstraction level to ensure consistent and accurate counts of all actions explicitly identified by participants. The spreadsheet was compared to the university's course registration instructions to ensure that it accurately reflected the current process.

A total of 66 possible actions were identified in the master spreadsheet definition of scenario one. The large number of actions resulted from the detailed abstraction level of actions and the breadth of the contextual registration scenario (e.g., student actions to determine class needs, plan their schedule, as well as their interaction with the university's registration system to register for classes). Each scenario definition was then compared against the master spreadsheet to count the number of actions, actors, descriptions, data requirements, exceptions, and alternatives identified for that scenario. Measures of scenario quality and user scenario definition efficiency were calculated as follows:

• Percent of total actions was calculated by dividing the number of actions in the scenario by 66 (the total possible number of actions).

• Average sequential information per action was calculated by adding together the number of actions, actors, descriptions, and data requirements (i.e., the sequential information) and dividing that sum by the number of actions.

• Average non-sequential information per action was calculated by adding together the number of exceptions and alternatives (i.e., the non-sequential information) and dividing that sum by the number of actions.

• Total scenario length was calculated by counting the number of words included in all scenario definitions. Length was measured in words to ensure a consistent count because of the variations in other traditional measures of length such as lines or pages. Although length is the only efficiency measure commonly used



for scenarios, questions about whether it is a valid measure could be raised. For example, does a scenario definition with 1000 words provide twice the information and therefore twice the quality of one with 500 words or is it just more verbose? Also, is use of more words to express the same amount of scenario information required by any of the formats? Therefore, before deciding to use word count to measure efficiency, two analyses were performed. First, the correlation between the length of scenario one and total scenario one information was evaluated. Second, the ratio of word count to total scenario information was analyzed to assess format conciseness. Since correlation was high (.80) and there were no statistically significant differences in conciseness between treatments, total scenario length in words was used to measure overall efficiency.
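Taken together, the four measures reduce to simple arithmetic over the per-scenario counts. A minimal sketch under the definitions above (the function name and the example counts are illustrative only, not data from the experiment):

```python
TOTAL_POSSIBLE_ACTIONS = 66  # actions in the master spreadsheet for scenario one

def scenario_measures(actions, actors, descriptions, data_reqs,
                      exceptions, alternatives, word_count):
    """Quality and efficiency measures for a single scenario definition."""
    sequential_info = actions + actors + descriptions + data_reqs
    non_sequential_info = exceptions + alternatives
    return {
        "pct_total_actions": actions / TOTAL_POSSIBLE_ACTIONS,
        "avg_seq_info_per_action": sequential_info / actions,
        "avg_non_seq_info_per_action": non_sequential_info / actions,
        "total_scenario_length": word_count,
    }

# Hypothetical subject whose counts sit near the medians reported in Table 1;
# the word count is invented for illustration.
measures = scenario_measures(actions=18, actors=18, descriptions=9,
                             data_reqs=4, exceptions=2, alternatives=0,
                             word_count=650)
```

Each subject's scenario-one counts would be run through this arithmetic, producing the per-treatment means compared in Tables 2 and 3.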

Preliminary analysis focused on the demographic and motivation questionnaire results. A common factor analysis of the individual scenario quality, efficiency, and ease of use questions was conducted to ensure the questions loaded on the appropriate factors. Factor scores were used to compare treatments on the resulting factors.

Quantitative analyses of the scenario quality and efficiency measures began with evaluation of descriptive statistics followed by comparisons between treatments. Since experimental session was not significant as a blocking factor for any measure, it was eliminated from the analysis. All quality and efficiency measures were evaluated together using MANOVA to ensure control of the experiment-wide error rate (Hair, Anderson, Tatham, & Black, 1995). Tests for differences between means for individual measures were then

conducted using ANOVA with pair-wise comparisons of means evaluated using Tukey's method of multiple comparisons to ensure control of the error rate (Hair et al., 1995; Neter, Wasserman, & Kutner, 1990).
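The univariate follow-up step can be sketched from first principles: a one-way ANOVA F statistic comparing the three treatment groups on one measure. The group scores below are synthetic placeholders, not the experiment's data, and the MANOVA and Tukey stages (which need a statistics package) are omitted:

```python
def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, with (k-1, N-k) degrees of freedom."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, N - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, (df_between, df_within)

# Hypothetical per-subject "total scenario length" scores, one list per treatment
unstructured = [820, 760, 845, 790]
numbered     = [650, 610, 640, 630]
structured   = [560, 575, 545, 590]
F, df = one_way_anova_F(unstructured, numbered, structured)
```

A large F relative to the F(df_between, df_within) distribution rejects the hypothesis of equal treatment means, after which Tukey's method localizes which pairs differ.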

Results

In general, all participants easily defined the main actions for each scenario. However, many did not provide all the requested information for each action (e.g., data requirements, exceptions, and alternatives). Problem areas and quality and efficiency differences between the treatments are highlighted in the following sub-sections.

Participant Demographics and Motivation

Analysis of the demographic questions showed that no significant differences existed between treatment groups on these characteristics. Subjects were primarily senior MIS majors who had taken multiple MIS courses, but had only limited analytical expertise. On average, they used computers one or more times a day and had very good computer and word processing expertise with better than average typing skills. They also were somewhat familiar with GroupSystems. The other experimental check was for motivation and effort. On average, motivation and effort were very high (4.28 and 4.17 on a 5-point scale) with no significant differences between treatment levels.

Assessing Overall Scenario Quality and User Scenario Definition Efficiency

To develop a better understanding of subjects' ability to define scenarios, descriptive statistics for scenario quality were analyzed. All statistics showed a high degree of individual variability and identified important quality problems. For example, as shown in Table 1, while the average number of actions per scenario definition was 18.62, the individual counts varied from a low of 6 to a high of 38. Most other measures showed similar variability. These results were consistent with observations during the experiments indicating that some subjects seemed to be rapidly defining scenarios while others seemed to be struggling.

Scenario Measures                  Median   Min    Max     Mean   Std Dev
Number of Actions                      18     6     38    18.62      5.06
Number of Actors                       18     5     38    17.77      5.40
Number of Descriptions                  9     2     22     9.17      4.54
Number of Data Requirements             4     0     10     4.13      2.58
Total Sequential Information           49    15    106    49.70     15.06
Number of Exceptions                    2     0     11     2.64      2.18
Number of Alternatives                  0     0      3     0.74      0.98
Total Non-Sequential Information        3     0     13     3.38      2.46
Total Scenario Information             53    15    115    53.07     16.39
% Total Possible Actions             0.27  0.09   0.58     0.28      0.08

Table 1. Scenario Statistics

Statistics on action completeness were also interesting and surfaced some potential quality problems. For example, while 95% of action specifications included the actor, less than 50% provided a description of the action with even fewer identifying data requirements (22%), exceptions (17%), or alternatives (4%). Since Figure 6 shows similar problems for all treatments, it is apparent that the scenario format, process, or tool must be changed to improve action completeness.

Scenario definitions were also evaluated qualitatively to analyze the types of errors and to identify major ambiguities in the definitions. The majority of errors were errors of omission whereby subjects did not identify all actions for a scenario or all information requested for individual actions. These types of errors are accounted for in the scenario and action completeness quantitative measures. A few other errors in the scenario definitions were primarily caused by incorrect action sequence. However, these errors often identified viable alternative action sequences or highlighted areas where the action sequence was not very logical, so they should be considered sources of design information rather than errors to be ignored.

Although text is inherently ambiguous, there were some glaring examples of ambiguities in the scenario definitions that should be considered in attempts to improve scenario quality. The most common problem was the vague specification of data requirements. For example, many subjects used course number and course call number interchangeably in their scenario definitions when in fact they are distinctly different data items with totally different formats. Another common problem area was ambiguous specification of actors as he/she/it. For example, sometimes it was not clear whether the actor was the student, instructor, or registrar.

Comparing Scenario Quality and User Scenario Definition Efficiency

To respond to the research question, how will increasing the structure of scenario definitions affect scenario quality and the efficiency of scenario definition by individual users, and evaluate the four hypotheses, this section compares the scenario quality and efficiency measures for the three treatment formats and the unstructured (treatment 1) vs. structured formats (treatments 2 and 3).

Multivariate Analysis. A multivariate analysis of variance (MANOVA) was conducted using all four quality and efficiency measures to assess the overall fit of the model. Wilks' lambda was used to

Figure 6. Action Completeness (proportion of actions including Actors, Descriptions, Data, Exceptions, and Alternatives for the Unstructured, Numbered Steps, and Structured Steps formats)



assess this fit since it is the most commonly used test statistic for overall significance in MANOVA (Hair et al., 1995). As shown in Table 2, the overall model fit when comparing the three treatment formats was weakly significant (p=.0573). Statistical significance of the model improved to p=.0286 when the unstructured format was compared to the structured formats, as shown in Table 3. Given that these results showed at least weak significance for the overall model, analysis of the individual quality and efficiency measures followed.

Scenario Completeness. Hypothesis 1 states that increasing the degree of scenario structure to increase the emphasis on individual actions will increase the percent of total actions included in the scenario definition. This hypothesis was not supported. Comparisons of the Percent of Total Actions show no statistically significant differences between treatments as seen in Table 2 or between the structured and unstructured formats as seen in Table 3. In fact, although not significantly, the treatment means seem to indicate the opposite effect, with unstructured treatment means that were higher than the structured means.

Sequential Action Completeness. Hypothesis 2 states that increasing the degree of scenario structure to differentiate individual actions will increase average sequential information per action. The results contradict this hypothesis. The analysis shows that the means of Average Sequential Information Per Action were statistically different (p=.0386) when all three treatments were compared in Table 2 and weakly statistically different (p=.0611) when the unstructured and structured treatments were compared in Table 3. However, pair-wise comparison of the treatment means indicates that the action completeness of the unstructured treatment 1 was statistically greater than that of the structured treatment 3 (p<.05).

Non-Sequential Action Completeness. Hypothesis 3 states that increasing the degree of scenario structure to identify individual actions sequentially will decrease average non-sequential information per action. The results supported this hypothesis. Comparisons of the Average Non-Sequential Information Per Action show a weak statistical difference (p=.0573) when comparing the three treatments as seen in Table 2 and a significant difference (p=.0354) when comparing the unstructured and structured treatments as seen in Table 3. Pair-wise comparisons of the treatment means show that in this regard the treatment 1 mean was statistically greater than that for treatment 3 (p<.05) with the same outcome when the unstructured format (treatment 1) was compared with the structured formats (treatments 2 and 3).

Efficiency. Hypothesis 4 states that increasing the degree of scenario structure will decrease the total length of scenario definitions. The results strongly support this hypothesis. Comparison of

Model/Variable                     Treat 1   Treat 2   Treat 3   F Value           p value
Overall Model (Wilks' Lambda)                                    F(8,126)=1.9564   .0573
Percent of Total Actions            0.3017    0.28      0.27     F(2,66)=1.16      .3187
Average Seq Info per Action         1.7359*   1.6809    1.5588*  F(2,66)=3.42      .0386
Average Non-Seq Info per Action     0.2208*   0.1768    0.1372*  F(2,66)=2.99      .0573
Total Scenario Length               800.30*   632.48    567.30*  F(2,66)=3.86      .0260

* Significant difference between means (p<.05)

Table 2. Comparison Of The Three Treatment Formats

Model/Variable                     Unstructured   Structured      F Value          p value
                                   (Treat 1)      (Treat 2 & 3)
Overall Model (Wilks' Lambda)                                     F(4,64)=2.8994   .0286
Percent of Total Actions            0.3017         0.2724         F(1,67)=2.29     .1349
Average Seq Info per Action         1.7359         1.6198         F(1,67)=3.63     .0611
Average Non-Seq Info per Action     0.2208*        0.1570*        F(1,67)=4.61     .0354
Total Scenario Length               800.30*        599.89*        F(1,67)=7.19     .0092

* Significant difference between means (p<.05)

Table 3. Comparison Of The Unstructured Vs. Structured Formats



Total Scenario Length shows a statistically significant difference (p=.0260) when comparing treatments (Table 2) and a strong statistically significant difference (p=.0092) when comparing the unstructured and structured formats (Table 3). Pair-wise comparisons of mean scenario length showed statistically significant differences, with the treatment 1 mean greater than that for treatment 3 and the unstructured format mean greater than that for the structured format.

Analysis of Post-Session Questionnaire Results

To further explore the research question, subjects' perceptions of scenario quality, efficiency, and ease of use, collected as part of the post-session questionnaire, were analyzed. Analysis showed that subjects generally rated scenario quality, efficiency, and GroupSystems ease of use very high. There were no significant differences between treatments for scenario quality or GroupSystems ease of use. In contrast, subjects rated the structured treatments significantly higher on (1) the ease of the scenario definition method and (2) whether the method allowed them to do what they needed.

Although the questionnaire results showed few treatment differences, the results did seem to indicate consistency between related questions. A common factor analysis was performed to explore this commonality. Significance tests reported using the maximum likelihood method of factor analysis with varimax rotation indicated that three factors were sufficient. Detailed factor loadings are shown in the Appendix with the factor analysis results summarized in Table 4. The three factors directly map to the quality, ease of use, and efficiency concepts. In addition, Table 4 shows that the Cronbach's alpha for all factors met or exceeded the recommended .80 standard for business research.
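The Cronbach's alpha reliability reported in Table 4 is the standard internal-consistency statistic and can be computed directly from the subject-by-item score matrix. A minimal pure-Python sketch (the function name is ours, and the toy scores are illustrative rather than the study's data):

```python
def cronbach_alpha(scores):
    """scores: one inner list per subject, one column per questionnaire item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample variances (denominator n-1)."""
    k = len(scores[0])                      # number of items in the factor
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy 3-subject, 2-item example: perfectly consistent items give alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Applying this to the items loading on each factor would reproduce the per-factor reliabilities of the kind shown in Table 4.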

Means of standardized factor scores for each treatment and the unstructured versus structured treatments are summarized in Table 5. As expected from the individual question analysis, the only statistically significant difference was between the unstructured and structured treatments, with the structured groups rating efficiency higher than the free-format group.

Discussion

The experimental results offered mixed support for the four hypotheses shown in Table 6. Hypotheses which claimed that the unstructured format would be better (hypotheses 3 and 4) were generally supported. However, those that stated that the structured formats would be better either were not supported (hypothesis 1) or were contradicted (hypothesis 2). Although there were differences in the degree of statistical preference, all quality and efficiency measures rated higher on the unstructured formats.

A broader analysis of the quantitative and qualitative results can be summarized into three main findings: (1) scenario completeness was low for all formats, (2) completeness was lower for structured formats than for the unstructured format, and (3) results of higher efficiency measures for

No.   Factor Description   Proposed Items   Actual Items   Cronbach's Alpha
1     Quality              13               11*            .89
2     Ease of Use          7                8**            .90
3     Efficiency           6                5**            .80

* Questions on alternatives and exceptions dropped due to low commonality with other variables.

** Because of vague question wording, one efficiency question loaded slightly higher on ease of use.

Table 4. Questionnaire Factor Analysis Summary

No.   Factor Description   Treat 1   Treat 2   Treat 3   Unstructured   Structured
                                                         (Treat 1)      (Treat 2&3)
1     Quality               0.10      0.14     -0.25      0.10          -0.05
2     Ease of Use          -0.17      0.16      0.00     -0.17           0.08
3     Efficiency           -0.28      0.08      0.20     -0.28*          0.14*

* Significant (p<.05)

Table 5. Comparison Of Questionnaire Standardized Factor Scores



the unstructured format contradicted subjects' perceptions of higher efficiency for structured formats. Possible reasons for each of these findings are discussed in the next section. Limitations of the study and their potential impacts are summarized next. Finally, implications for practitioners planning to use scenarios are discussed in the final section.

Interpretation of Results

The results clearly showed completeness problems at both the action and scenario levels for all formats, as shown in Figure 6.

At the action level, only half of the actions included descriptions, with less than a quarter including data requirements, exceptions, and alternatives. One possible explanation is that, although the instructions requested this information, none of the formats specifically prompted for it, so subjects had no on-screen reminders as they developed the scenario descriptions. In addition, subjects were asked to provide all the information at once, which may have caused an information overload problem. Most subjects did well identifying the main actions, but they might have done better with total action completeness if the definition process had been split into at least two steps: (1) identifying the actions and then (2) adding the remaining detailed action information.

At the scenario level, completeness was also low, with an average of only 28% of the possible 66 actions included per scenario. However, most subjects did succeed in listing actions that captured the essence of the high-level registration process, but not at the level of detail needed to define system behavioral requirements. Some users, without a lot of structure and prompting, simply do not think about these specific details.

The 28% finding may somewhat exaggerate the absolute level of incompleteness for two reasons. First, as described in a previous section, the overall scenario action count of 66 possible actions was derived from a broad scope of potential actions that were related to course registration. Specifically, the fact that some participants included actions for planning their class schedule while others began action definition at the point of conducting the registration process itself meant that a broader range of total actions were included in the master action list used to evaluate completeness. Had the contextual scenarios been more tightly bounded, it might have helped alleviate this problem. It might also have helped subjects focus their attention more narrowly and therefore fostered inclusion of additional details.

The second reason that incompleteness may be somewhat overstated is that the 66 actions in the master action list were made very specific to provide accurate counting of actions to support valid treatment comparisons. Subjects who identified only abstract, high-level actions missed many of the more specific actions. However, this high number of specific actions highlights the tremendous amount of information that can be included in scenario descriptions.

In addition to the common completeness problems described above, completeness was significantly lower for the structured formats when compared to the unstructured format (see Tables 2 and 3). This may have been caused by the structured formats' action delineation that increased focus on the action sequence at the cost of diminished attention to the other action information. Format 3's named and numbered steps may have magnified this problem by totally separating the action from the descriptive information about the action. Both structured treatments' focus on

No.   Hypothesis                    Treatment       Unstructured vs.     Preferred
                                    Comparisons     Structured Format    Format
1     Scenario Completeness         Not Supported   Not Supported        Unstructured
2     Seq Action Completeness       Contradiction   Weak Contradiction   Unstructured
3     Non-Seq Action Completeness   Weak Support    Supported            Unstructured
4     Efficiency                    Supported       Strong Support       Unstructured

Table 6. Results Summary



sequential actions also made it more difficult to represent non-sequential information such as exceptions and alternatives. Most importantly, the structured treatments provided no specific structural support for increasing completeness, e.g., with prompts or templates.

Actual and perceived efficiency were somewhat contradictory. The efficiency metrics clearly indicated that the unstructured format was more productive (see Tables 2 and 3), while subjects felt more productive using the structured formats (see Table 5). One possible explanation for this contradiction is that while the unstructured format didn't hinder efficiency, subjects may have perceived that entering information as a paragraph in a comment window did not give them as much support as participants who could use nodes to outline a list of actions and action definitions. In addition, subjects may have focused on the higher word count of the unstructured format without recognizing the associated increased content, thereby feeling less productive because more words were required for their scenario definitions.

Limitations of the Study

Some of the variability in the scenario descriptions may have been caused by vague definition of the scope of the experiment's scenarios. For example, while some subjects felt registering for classes started with calling the university's course registration system, others began with planning their schedules. Still others began with determining the courses they needed to take, based on their major. While this may be useful when trying to develop an initial understanding of the overall problem domain, it may make it more difficult to focus attention on specific aspects of that domain. In the future, the scope of scenarios should be more clearly defined before beginning definition of the actions necessary to accomplish that scenario.

A second limitation of this study was that, although subjects were told what information to include in a scenario definition and how to document that information using Group Outliner, they were not given a specific process script for what to focus on in what order during scenario definition. This lack of process guidance may have contributed to the action completeness problems observed. For example, the action completeness results, as shown in Table 6, seem to indicate that subjects focused on identifying only the main actions. If they had been told to identify the main actions first, and then go back to add the other action information, action completeness might have been higher. Future research should evaluate whether an improved process can increase scenario quality and scenario definition efficiency.

A final limitation of the study was that the combination of the selected scenario formats and the Group Outliner tool may have limited subjects' ability to easily identify action definitions that were incomplete, because the tool provided no prompts or named fields to lead subjects' attention to information that had not yet been provided. Future research could investigate the combination of alternative scenario formats and tools as potential mechanisms for improving quality and efficiency.

Implications for Practitioners

This experiment has implications for practitioners planning to use scenarios during requirements definition. Working alone, with the limited support provided by the tools and formats used in this experiment, users can provide a portion of the scenario actions; but they also tend to miss a non-trivial percentage of actions. This was found for all formats. Thus, other mechanisms will be required to help users achieve more complete scenarios.

First, as described above, a more specific process may help. A scenario definition process that directs users to enter information in stages may help users provide more complete scenarios by allowing them to focus on action identification followed by action definition. In the experiment, subjects were not directed to define all actions first before providing detail. This may have caused subjects in treatments 2 and 3 to identify an action and to define it first before considering other actions. This may have led to cognitive inertia or a tendency to focus on the details of one action at the expense of thinking about other actions. A process that directs users to define the broad set of actions before defining details about the actions may help overcome these problems. Second, better tools and templates may help. Use of more specific forms or templates that prompt users for specific information may help users



focus on providing more detail. Although Group Outliner supported structuring scenarios into named and numbered actions, a more specialized tool that provides specific prompts or a template that could be completed in an iterative manner may improve scenario completeness, but its impact on efficiency must be evaluated. A GSS scenario tool could be specifically designed to provide the necessary information prompts with added quality and efficiency aids such as pull-down lists to reduce ambiguity (e.g., for actors and data), and targeted support for iterative scenario definition.

Third, more active facilitation may be required. Having users work without facilitation and feedback may not be sufficient to develop complete scenarios. Users may be able to work without much direction and feedback during the initial stage of scenario definition, but subsequent feedback and guidance from a facilitator or analyst may be required to get the most from users. Specifically, the analyst could point out areas that are as yet undefined and make suggestions for how users should focus their efforts. In addition, analysts could assist users in decomposing broad scenarios into more specific sub-scenarios to foster more consistent and focused action definition.

Finally, collaboration with other users may be very beneficial. Contributions by multiple users provide access to a broader experience base. Information sharing during collaboration may not only increase completeness but may also lead to validation and reduced ambiguity of the scenario actions and definitions. However, collaboration also requires improved processes. For example, users should first define and agree upon scenario goals and preconditions before action definition to ensure a common understanding of the scope of each scenario.
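The ordering recommended here — agree on goals and preconditions first, identify the full set of actions next, and only then define action details — can be expressed as a simple stage gate. The stage names below are illustrative assumptions, not the procedure used in the experiment:

```python
from typing import Optional

# Illustrative stage gate for collaborative scenario elicitation.
# Breadth-first action identification precedes any action detail,
# which is intended to counter the tendency to dwell on one
# action at the expense of identifying the others.
STAGES = [
    "agree_goal_and_preconditions",
    "identify_all_actions",
    "define_action_details",
    "review_and_validate",
]

def next_stage(completed: list) -> Optional[str]:
    """Return the first stage not yet completed, or None when done."""
    for stage in STAGES:
        if stage not in completed:
            return stage
    return None
```

A facilitator (or tool) would keep the group at each stage until it is complete, rather than letting individual users run ahead to action detail.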

Research still needs to be done to determine the specific effects of all of these suggestions, but we have conducted some preliminary research to begin exploring an iterative, collaborative scenario elicitation process using a standardized scenario template. Based on the findings from these field studies, we have begun to use this approach with user groups and have seen some highly encouraging results. Using scenario templates in GroupSystems Group Outliner with active facilitation has noticeably increased the quality and consistency of scenario definitions. Users seem to define more, and we seem to get agreement upon those definitions. However, Group Outliner's limited support for templates has led to the development of a GSS scenario prototype, to be used in future research, that may take advantage of the lessons learned during this experiment.

Conclusion

Scenarios have great potential for helping to solve many of the requirements problems that have plagued the software industry since its inception. Their primary value centers on their simplicity: users can easily describe concrete examples of what they do in their jobs and how they do it. These examples provide a rich source of information for supporting the discovery of requirements and for evaluating alternative ways of meeting those requirements. However, when used in an undisciplined manner, scenarios can also be incomplete, unfocused, and extremely difficult to define and analyze.

The purpose of this research was to take a first step toward establishing a foundation for a disciplined collaborative scenario elicitation process that can accrue the benefits of scenarios while avoiding their pitfalls. A major contribution of this research is its identification of the quality weaknesses of currently used scenario formats, processes, and tools; an assessment of this type was not found previously in the literature. Another contribution is that the learning from this experiment can inform additional research into possible new formats, processes, and tool concepts that may help practitioners during scenario definition.

Future research will focus on the formal evaluation and enhancement of the proposed collaborative scenario elicitation process and prototype. Plans are to refine the individual scenario definition format, process, and tool, and then move on to collaborative scenario definition, refinement, and evaluation.

References

Benner, K.M., Feather, M.S., Johnson, W.L., and Zorman, L.A. (1993). "Utilizing Scenarios in the Software Development Process," Proceedings of the IFIP WG 8.1 Conference on Information System Development Process, North-Holland: Elsevier Science Publishers B.V., pp. 117-134.

Boehm, B.W. (1981). Software Engineering Economics, Englewood Cliffs, NJ: Prentice-Hall.

Card, S.K., and Moran, T.P. (1995). "User Technology: From Pointing to Pondering," in R.M. Baecker, J. Grudin, W.A.S. Buxton, & S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000 (2nd Ed.), San Francisco, CA: Morgan Kaufmann, pp. 587-602.

Card, S.K., Moran, T.P., and Newell, A. (1983). The Psychology of Human-Computer Interaction, Hillsdale, NJ: Lawrence Erlbaum Associates.

Carroll, J.M. (Ed.). (1995). Scenario-Based Design: Envisioning Work and Technology in System Development, New York: John Wiley & Sons, Inc.

Davis, A., Overmyer, S., Jordan, K., Caruso, J., Dandashi, F., Dinh, A., Kincaid, G., Ledeboer, G., Reynolds, P., Sitaram, P., Ta, A., and Theofanos, M. (1997). "Identifying and Measuring Quality in a Software Requirements Specification," in R.H. Thayer & M. Dorfman (Eds.), Software Requirements Engineering, (2nd Ed.), Los Alamitos, CA: IEEE Computer Society Press, pp. 164-175.

Davis, A.M. (1993). Software Requirements: Objects, Functions, and States, (revised ed.). Upper Saddle River, NJ: Prentice Hall PTR.

Davis, F. (1989). "Perceived Usefulness, Ease of Use, and User Acceptance of Information Technology," MIS Quarterly, Vol. 13, No. 4, pp. 319-340.

Dean, D.L., Lee, J.D., Orwig, R.E., and Vogel, D.R. (1995). "Technological Support for Group Process Modeling," Journal of Management Information Systems, Vol. 11, No. 3, pp. 43-64.

Dean, D.L., Lee, J.D., Pendergast, M.O., Hickey, A.M., and Nunamaker, J.F., Jr. (1997-98). "Enabling the Effective Involvement of Multiple Users: Methods and Tools for Collaborative Software Engineering," Journal of Management Information Systems, Vol. 14, No. 3, pp. 179-222.

Hair, J.F., Jr., Anderson, R.E., Tatham, R.L., and Black, W.C. (1995). Multivariate Data Analysis, (4th Ed.). Englewood Cliffs, NJ: Prentice-Hall, Inc.

Hickey, A.M., and Lee, J.D. (1998). Group-Enabled Scenario Elicitation for IS Development, (Technical Report). Tucson, AZ: Center for the Management of Information, University of Arizona.

Hsia, P., Samuel, J., Gao, J., Kung, D., Toyoshima, Y., and Chen, C. (1994). "Formal Approach to Scenario Analysis," IEEE Software, Vol. 11, No. 2, pp. 33-41.

Jacobson, I., Christerson, M., Jonsson, P., and Overgaard, G. (1992). Object-Oriented Software Engineering: A Use Case Driven Approach, Reading, MA: Addison-Wesley.

Kuutti, K. (1995). "Work Processes: Scenarios as a Preliminary Vocabulary," in J.M. Carroll (Ed.), Scenario-Based Design: Envisioning Work and Technology in System Development, New York: John Wiley & Sons, Inc.

Lee, J.D., and Dean, D.L. (1997). "Tools and Methods for Group Data Modeling: A Key Enabler of Enterprise Modeling," ACM SIGOIS Bulletin, (Special Issue on Enterprise Modeling), August.

Leite, J.C.S.d.P., Rossi, G., Balaguer, F., Maiorana, V., Kaplan, G., Hadad, G., and Oliveros, A. (1997). "Enhancing a Requirements Baseline with Scenarios," Proceedings of the Third International Symposium on Requirements Engineering, Los Alamitos, CA: IEEE Computer Society Press, pp. 44-53.

Neter, J., Wasserman, W., and Kutner, M.H. (1990). Applied Linear Statistical Models, (3rd Ed.). Burr Ridge, IL: Richard D. Irwin, Inc.

Nunamaker, J.F., Jr., Briggs, R.O., Mittleman, D.D., Vogel, D.R., and Balthazard, P.A. (1996- 97). "Lessons from a Dozen Years of Group Support Systems Research: A Discussion of Lab and Field Findings," Journal of Management Information Systems, Vol. 13, No. 3, pp. 163-207.

Pohl, K., and Haumer, P. (1997, June 16-17). "Modeling Contextual Information About Scenarios," paper presented at the Third International Workshop on Requirements Engineering: Foundation for Software Quality, Barcelona, Spain.

Potts, C., Takahashi, K., and Anton, A.I. (1994). "Inquiry-Based Requirements Analysis," IEEE Software, Vol. 11 No. 2, pp. 21-32.

Rolland, C., Ben Achour, C., Cauvet, C., Ralyte, J., Sutcliffe, A., Maiden, N.A.M., Jarke, M., Haumer, P., Pohl, K., Dubois, E., and Heymans, P. (1998). "A Proposal for a Scenario Classification Framework," Requirements Engineering Journal, Vol. 3, No. 1, pp. 23-47.

Standish. (1995). Chaos, Dennis, MA: Standish Group International, Inc.

Weidenhaupt, K., Pohl, K., Jarke, M., and Haumer, P. (1998). "Scenarios in System Development: Current Practice," IEEE Software, Vol. 15, No. 2, pp. 34-45.

About the Authors

Ann Hickey is an assistant professor of information systems at the University of Colorado at Colorado Springs. She received her M.S. and Ph.D. in MIS from the University of Arizona in 1990 and 1999. She has also worked as a program manager and senior systems analyst for the Department of Defense. Her interests include collaborative requirements elicitation, systems analysis, and scenario and process modeling. Her work has been published in the Journal of Management Information Systems and several national and international conference proceedings. E-mail: [email protected]

Douglas Dean is an assistant professor in the School of Accountancy and Information Systems at Brigham Young University. He received his Ph.D. in MIS from the University of Arizona in 1995 and a Master of Accountancy with an emphasis in information systems from Brigham Young University in 1989. His research interests include requirements analysis, software project management, collaborative tools and methods, and creativity. His work has been published in Management Science, Journal of Management Information Systems, Group Decision and Negotiation, and IEEE Transactions on Systems, Man, and Cybernetics. E-mail: [email protected]

Jay Nunamaker, Jr. is Regents and Soldwedel Professor of MIS, Computer Science, and Communication and Director of the Center for the Management of Information at the University of Arizona, Tucson. In 1996, Dr. Nunamaker received the DPMA EDSIG Distinguished IS Educator Award. The GroupSystems concepts and software resulting from his research received the Editor's Choice Award from PC Magazine, June 14, 1994. At the GroupWare 1993 conference, he received the GroupWare Achievement Award along with recognition of GroupSystems as best of show in the GDSS category. In 1992, he received the Arthur Andersen Consulting Professor of the Year Award. Dr. Nunamaker received his Ph.D. in systems engineering and operations research from Case Institute of Technology, an M.S. and B.S. in engineering from the University of Pittsburgh, and a B.S. from Carnegie Mellon University. E-mail: [email protected]


Appendix: Factor Loadings

[Table flattened in the source: factor loadings for 26 questionnaire items (Q2-Q29) on three factors. Factor 1 (Quality) items covered overall scenario quality and whether the scenarios were understandable (by students, by an analyst, by the registrar), accurate, unambiguous, complete, correct, clear, included the requested information and the data used, and identified exceptions/problems and alternatives. Factor 2 (Ease of Use) and Factor 3 (Efficiency) items covered whether the method supported scenario definition, the speed and efficiency of scenario definition, ease of the definition method, efficient use of time, and GroupSystems usability (easy to learn, easy to do what was wanted, clear/understandable interactions, flexible interaction, easy to become skillful, low mental effort, easy to use). Individual loadings are not reliably recoverable from the extracted text.]

* Questions on alternatives and exceptions dropped due to low communality with other variables.
