
Project Management Capability Levels: An Empirical Study

Tom McBride, Brian Henderson-Sellers, Didar Zowghi
University of Technology, Sydney
[email protected] [email protected] [email protected]

Abstract

This paper outlines existing maturity models of project management and their underlying constructs. Organizations involved in software development in Sydney, Australia were interviewed about their project management practices and their responses analysed to determine whether different project managers used different levels of project management practices and whether the practices were in accordance with a process based maturity model. This did not seem to be the case, yet the data suggested that, as a possible alternative, a systems theory based approach might be more tenable. The overall conclusion, that a systems theory based maturity model appears to be better correlated with organizational size and software development maturity than a process based maturity model, is briefly discussed and additional research is suggested that could investigate this novel conclusion further.

1 Introduction

Maturity models of software development processes are useful because they indicate different levels of performance (of the processes) and hence the direction for software process improvement (SPI). The most well known of them, the CMM, was developed in response to a request to provide the USA federal government with a method for assessing its software contractors [8]. However, maturity models such as those that underlie the CMM’s successor, CMMI (Capability Maturity Model Integration) [9], or SPICE (Software Process Improvement and Capability dEtermination) [6] may not be the best models of maturity for management processes such as project management, as opposed to the technical processes of developing the software, nor are they necessarily the best model for distributed, globalised software development. Rather than simply seeking evidence of conformance to a particular model of maturity or project management, this research first seeks to establish the work practices of a sample of project managers, and then to deduce if those work practices conform to a pattern of increasing sophistication or maturity.

As noted above, maturity scales require an underlying model. These are generally devised according to a view of how the desired results ought to be achieved. Depending upon which model is assessed against, an organization might rate higher or lower. For example, if multi-lingual skills were thought essential to international business, then organizations whose personnel possessed those skills are likely to be assessed at a higher maturity level than if the underlying model did not consider multi-lingual skills to be important.

Well-known maturity models (outlined in Section 2) such as CMMI and SPICE have an underlying process model that views software development activities in an industrial production-like fashion, focusing attention on the flow of work from one process to another. Alternative views, such as systems theory [1, 10, 12], focus attention on different aspects of software development, project management in particular (Section 5).

To investigate if the monitoring and management activities of project managers conform to a capability scale, the following research question was proposed:

Do the monitoring and controlling activities of project managers conform to a process based capability model such as SPICE?

Evidence was gathered (Section 3) to establish how project managers monitor and control their software development projects and then the responses were examined against the hypothesis. It was found that, although there was some support for a maturity model of project management, the support was not for the expected process based model (as is assumed for CMMI and SPICE) but a systems theory based model (Section 4). Threats to validity are considered in Section 5, with Section 6 offering conclusions and pointers to future research.

Proceedings of the 11th Asia-Pacific Software Engineering Conference (APSEC’04) 1530-1362/04 $20.00 IEEE

2 Maturity Models

Process maturity models generally describe a collection of processes relevant to the area of interest and a scale by which increasing maturity may be assessed. Maturity is associated with organizations whose processes are capable of producing better outcomes; such processes were originally observed in the more "mature" organizations. Capability is assessed as giving some indication of such maturity and is generally evaluated relative to efficient performance, i.e. the shortest completion time for nominated requirements with the fewest defects. The assessment is of the organization's ability to perform the processes rather than of any characteristics of the processes themselves, i.e. the measure is not of the quality of the process as formulated in the “process handbook” but of the enactment of that process.

A universal scale of capability has yet to be established. The original SEI Software Capability Maturity Model (SW-CMM) was based on principles that have existed "for the last sixty years" [8]. The maturity framework was originally described by Crosby [3] as having five evolutionary stages in the adoption of quality practices and this framework was later adapted to software processes. The SPICE capability scale reflects much of the theory of Total Quality Management (TQM) with its orientation toward statistical process control. The capability model proposed by Ibbs and Kwak [5] uses statistical relationships between project management maturity and project performance. While each of these has differing views of the details of the capability scale, there is broad consensus that at higher capability levels the processes are to be performed with greater rigour.

The most well known capability models in software engineering are the Capability Maturity Model Integration (CMMI) [9] and the ISO-originated Software Process Improvement and Capability dEtermination model (SPICE) [6]. Other capability models have also been developed but are outside the scope of the present paper.

The dominant text for project management, the PMBOK [4], separates the whole of project management into eight knowledge areas. Activities from each knowledge area are performed as required at various times during a project. Each knowledge area is divided into the phases of initiating, planning, executing, controlling and closing, which reflect the familiar sequential arrangement. However, the knowledge areas are not presented as processes and don’t have the same production orientation as the SPICE and CMMI process models – for example, Ibbs and Kwak [5] developed their maturity model of project management to better understand the financial and organizational benefits of using project management tools and practices in organizations. Rather than being confined to software development, their model was developed from information gathered from a range of industries. Increasing levels of maturity appear to be based on performing key activities but with greater thoroughness and rigour as the maturity levels increase. This is in contrast to both CMMI and SPICE, where increasing levels of capability are achieved through performing more, and different, tasks.

3 Research method

To investigate whether project managers’ activities conform to a process based capability model, structured interviews were conducted with project managers from a number of software development organizations in Sydney, Australia between February and September 2003. Organizations were approached by phone initially and asked if there was a project manager involved in software development who was willing to be interviewed. Structured interviews allowed questions and responses to be clarified or amplified during the interview and also allowed unexpected information and findings to emerge rather than directing responses to preconceived models.

There were 49 questions asked in the structured interviews. Of these:

• 4 questions categorized the organization and its software development processes,

• 7 questions established how the project manager monitored the project,

• 3 questions established how the organization adjusted the project (scope, schedule, quality requirements, performance requirements) as a consequence of monitoring the project,

• 8 questions established an approximate measure of organizational distance, defined as the administrative, geographical and cultural separation between the sections of the project team,

• 5 questions established project monitoring processes for outsourced tasks, and


• 2 questions established how the outsourced project tasks were managed in response to information revealed by project monitoring.

An expected range of responses was developed for each of the 49 questions, both to guide the questioning and responses, and to help guide later analysis. This range of responses was not shown to the interviewee but used to indicate the scope of the information sought. For example, one of the questions and its expected responses was:

Is there a standard method or process for monitoring project tasks?

• No – each project manager does their own thing.
• Yes, but informal and flexible.
• Yes, defined but not very extensive.
• Yes, defined and extensive.

This reduced the tendency to answer in more detail than was intended.

Questions were generally of two types. The first was intended to establish the organization’s position on some scale. For example, a question about the size of the organization was intended to establish if they were small, medium, large or multinational. Similarly, a question on the formality of their software development processes was intended to establish approximately where they would likely be assessed on a CMMI or SPICE assessment of process maturity. The second type of question was more open and designed to elicit information on, for example, the range of subjects discussed at an internal project meeting. This type of question in surveys would usually have a space to respond with “Other” so that respondents could expand on any other relevant issues.

Each interview took from 30 minutes to just over an hour, depending on the loquaciousness of the interviewee. Most lasted about 45 minutes and were conducted at the interviewee’s worksite. They were audio taped and later transcribed. The transcript was sent back to the interviewee to check and correct. The interview responses were then encoded and analysed using the statistical evaluation package SPSS 11.0. Since the encoded variables were nominal or ordinal, the appropriate statistical tests were the Chi-square test when both variables were nominal and Kendall’s tau-b when one of the variables was ordinal.
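The test-selection rule described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors’ actual SPSS procedure; the variable names and the ten sample responses below are invented for the example.

```python
# Illustrative sketch of the test selection described above:
# Pearson chi-square for nominal x nominal pairs, Kendall's tau-b
# when at least one variable is ordinal. All data here are invented.
from scipy.stats import chi2_contingency, kendalltau
import pandas as pd

# Hypothetical encoded responses: org_size and maturity are ordinal
# codes; uses_evm (earned value monitoring, yes/no) is nominal.
df = pd.DataFrame({
    "org_size": [1, 1, 2, 3, 4, 4, 2, 3, 1, 4],  # 1=small .. 4=multinational
    "maturity": [1, 2, 2, 3, 3, 5, 1, 3, 2, 3],  # SPICE-like level
    "uses_evm": [0, 0, 0, 1, 1, 1, 0, 1, 0, 1],
})

# Nominal vs nominal: chi-square on the contingency (crosstab) table.
table = pd.crosstab(df["org_size"], df["uses_evm"])
chi2, p_chi, dof, _ = chi2_contingency(table)

# Ordinal involved: Kendall's tau-b (variant "b" adjusts for ties).
tau, p_tau = kendalltau(df["org_size"], df["maturity"], variant="b")

print(f"chi2={chi2:.2f} (p={p_chi:.3f}), tau-b={tau:.2f} (p={p_tau:.3f})")
```

With samples as small as those in this study, the chi-square approximation is fragile when expected cell counts are low, which is consistent with the threats to validity the authors discuss later.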

Organizations face a number of demands on their time and resources, and academic research, no matter how well intended or potentially beneficial, must compete for the organization’s time and willingness. To help persuade project managers to participate in this research, an offer was made to email the list of questions to them prior to the interview and to send a report of the findings once the study was complete.

3.1 Sample characteristics

Organizational size

Organizational size was judged largely on the number of personnel. This estimate included the whole organization, not just the software development part of it, because past experience indicates that a small division within a large organization more closely resembles the large organization than a small, independent company of similar size to the division. Table 1 gives the distribution of organization size. The size divisions were chosen because they reflect approximately where organizations tend to change structure, from direct supervision through simple, single-layer management to multi-layer management.

Table 1: Organizational size

Small (< 30 staff)                         12
Medium (31 – 120)                           4
Large (121 – 1000, single organization)     3
Multinational (> 1000 or multinational)    12
Total                                      31

Process maturity

The process maturity is a very approximate guide based on the ISO 15504 (SPICE) or CMMI scale of process maturity. The first author is familiar with, and practised at, such process assessments. These ratings would be the equivalent of a very low rigour SPICE assessment. The single instance of a maturity level of 5 (Table 2) came from an actual CMMI assessment. Organizations were adjudged at level 3 if they were ISO 9001 accredited or had undergone a SPICE or CMMI assessment and had achieved that rating. Level 2 was assigned if the organization had documented software development processes, particularly those dealing with project management and document control.

Table 2: Process maturity

Informal – Level 1      6
Managed – Level 2       8
Defined – Level 3      16
Measured – Level 4      0
Optimizing – Level 5    1
Total                  31


3.2 Project Control Techniques

Project managers were asked if they would drop functionality, or move it to a later release, or if they would compromise quality goals or performance goals in order to meet the delivery schedule. The responses are summarized in Tables 3–5.

One of the more interesting and revealing responses on this topic was that, although the quality of the actual delivered executable system would not be compromised, there was always the capacity to reduce the quality of associated documentation, either internally in the code or externally in reference manuals or technical manuals. Similarly, time can be saved by reducing the rigour of code or documentation reviews.

Table 3: Is functionality dropped to meet the delivery schedule?

                                Frequency   Percent
Always retained                      5        16.1
Engineers decide                     1         3.2
Project review board decides         3         9.7
Marketing decides                    2         6.5
Negotiated with stakeholders        19        61.3
No response                          1         3.2
Total                               31       100.0

Table 4: Are quality goals compromised to meet the delivery schedule?

                                Frequency   Percent
Always retained                     12        38.7
Engineers decide                     6        19.4
Project review board decides         1         3.2
Marketing decides                    2         6.5
Negotiated with stakeholders         9        29.0
No response                          1         3.2
Total                               31       100.0

Table 5: Are performance goals compromised to meet the delivery schedule?

                                Frequency   Percent
No performance goals set             5        16.1
Always retained                      9        29.0
Engineers decide                     2         6.5
Project review board decides         1         3.2
Marketing decides                    2         6.5
Negotiated with stakeholders        10        32.3
No response                          2         6.5
Total                               31       100.0

3.3 Project monitoring

Project managers were asked how they monitored the project. In fact they were each asked “How do you know if the project is going well?” and “How do you know if the project is going badly?” The responses were grouped into the common techniques of:

• expert judgment
• progress measure
• earned value measure
• risk monitoring
• defect monitoring
• test results
• other.

There was no statistical relationship between any one of the monitoring techniques and either of the two variables of “organization size” or “process maturity”. However, it seemed, on first consideration, that the more experienced project managers used several monitoring techniques, as shown in Table 6. Pearson’s Chi-square test returned an asymptotic significance of 0.054, leading to the conclusion that there was a weak correlation between organizational process maturity and the number of monitoring techniques used by project managers.

Table 6: Count of monitoring techniques vs. process maturity

Process       Count of monitoring techniques used
maturity        1    2    3    4    5    6   Total
Informal        1    3    2                    6
Managed         5    1    2                    8
Defined              2    5    7    1    1    16
Optimizing                     1               1
Total           6    6    9    8    1    1    31
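The reported chi-square statistic can be reproduced by feeding the Table 6 contingency table to a standard test. Note that the exact placement of cells in this table has been reconstructed from the published row and column totals, so the computed statistic is illustrative and depends on that reconstruction; blank cells are treated as zero.

```python
# Pearson chi-square on the (reconstructed) Table 6 contingency
# table: process maturity (rows) vs. count of monitoring techniques
# used (columns 1..6). Blank cells in the printed table are zeros.
from scipy.stats import chi2_contingency

table6 = [
    [1, 3, 2, 0, 0, 0],  # Informal   (row total  6)
    [5, 1, 2, 0, 0, 0],  # Managed    (row total  8)
    [0, 2, 5, 7, 1, 1],  # Defined    (row total 16)
    [0, 0, 0, 1, 0, 0],  # Optimizing (row total  1)
]

chi2, p, dof, expected = chi2_contingency(table6)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

The paper reports an asymptotic significance of 0.054. With expected cell counts this small, the chi-square approximation is unreliable, which is exactly the small-cell-count concern the authors raise under threats to validity.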

3.4 Higher levels of project management capability

Respondents were asked questions that would have given a very approximate indication of higher levels of project management process capability on a SPICE or CMMI scale.


Defined process

A question was directed at establishing whether or not the organization had a defined project management process. The existence and use of a defined process is one of the outcomes required for a SPICE level 3 process. With it, an organization can achieve level 3; without it, it cannot.

The table of results for organization size vs defined project management process is shown in Table 7.

Table 7: Organization size vs. Defined PM process

                      Project monitoring process
                No        Informal   Formal, not   Formal,
                process   method     extensive     extensive   Total
Small              7         3           2                       12
Medium             2                     2                        4
Large                                    1             2          3
Multinational      3         3                         6         12
Total             12         6           5             8         31

A Pearson Chi-square test shows a correlation between Organization size and Defined PM process (asymptotic significance of 0.045) but not between Process Maturity and Defined PM process (asymptotic significance 0.232).

Measured process

SPICE level 4 capability concerns the degree to which objective measures are used to control process performance and, specifically, to detect the sources of process faults and inefficiencies. As an example of a process measure applicable to project management, respondents were asked if they recorded how much time they spent on different aspects of project management. The responses are shown in Table 8.¹

Table 8: Organizational Maturity vs. PM Time Monitoring

             PM time monitoring
             No    Broad   Detailed   Total
Informal      6                          6
Managed       7             1            8
Defined       8      5      2           15
Optimizing                  1            1
Total        21      5      4           30

¹ There are only 30 data points in this table because one organization gave no response.

A Pearson Chi-square test shows no correlation between Organizational size and PM Time Monitoring (asymptotic significance 0.452) but shows a correlation between Organizational Maturity and PM Time Monitoring (asymptotic significance 0.040).

Optimizing process

SPICE level 5 is measured by the degree to which an organization anticipates process improvements due to changes in techniques and technology appropriate to the process and deploys selected improvements throughout the organization. As an indication of this, respondents were asked how they became better at project management, e.g. was this achieved mainly through training or some other means such as a formal process improvement initiative. The results are shown in Table 9 and Table 10.²

Table 9: Organizational size vs. PM Process Improvement

               PM process improvement
               Nothing    Training   Other   Total
               special
Small             7                    5       12
Medium            1          1         2        4
Large             2          1                  3
Multinational     2          3         5       10
Total            12          5        12       29

Table 10: Organizational Maturity vs. PM Process Improvement

             PM process improvement
             Nothing    Training   Other   Total
             special
Informal        3                    3        6
Managed         5                    3        8
Defined         3          5         6       14
Optimizing      1                             1
Total          12          5        12       29

Neither organizational size nor organizational maturity was correlated with Project Management Process Improvement (Pearson Chi-square asymptotic significance of 0.408 and 0.216 respectively).

² There are only 29 data points in these two tables because two organizations gave no response.


3.5 Process based capability model

As can be seen from the statistical analysis of the data shown in the tables in Section 3.4, there is very little evidence that project management activities conform to a process based capability maturity model such as SPICE or CMMI.

4 Systems theory capability model

As a consequence of the observed lack of support for a process based capability model, systems theory was reviewed and a capability model was developed based on systems theory. The interview data were re-examined for evidence of constraint, feedback and project-directing activities. It is acknowledged that such post-interview hypothesizing has weak validity and any results must be treated as indications of a relationship requiring more rigorous data before any support for such a relationship can be claimed.

Systems theory is founded on two pairs of ideas: those of emergence and hierarchy, and communication and control [2]. Systems may be decomposed into a hierarchy of sub-systems, each more complex than the one below it in the hierarchy. Each level of the hierarchy is characterized by emergent properties that do not exist at the lower levels. Leveson [7] gives the example of the shape of an apple: “although eventually explainable in terms of the cells of an apple, [the shape] has no meaning at that lower level of description.” The operation of the processes at the lower level, that of the biology of the apple, “result in a higher level of complexity – that of the whole apple itself – that has emergent properties, one of which is the apple’s shape.”

Software development, viewed from the perspective of systems theory, places the project manager between the organization’s executive and the development team in the organizational hierarchy. The project manager places constraints on the development activities and on the developed project, controls the development activities and receives feedback about the development as well as emergent phenomena of, among other things, coordination between the various development activities. Determining whether a project is coordinated cannot be done by examining one of the project’s activities. It requires examining the relationships between the activities within the constraints imposed by the project management level before being able to determine the degree to which the project is coordinated.

Budget, available personnel and their expertise, delivery schedules, available tools and technologies, and quality requirements are all examples of development constraints. Some would be set by the level in the hierarchy above that of the project manager, i.e., the organization’s executive; other constraints would emerge as part of requirements elicitation. Still others, such as the organizational culture or the development team’s professional culture, are typically an assumed part of the work environment. Obviously, there is the possibility that some constraints contradict each other, and the project manager needs to decide how to resolve such contradictions. In other cases, the constraint will be a soft constraint, such as the increased error rate in developed work as daily work hours increase, rather than a hard constraint such as the available budget or the development team’s expertise. The project manager, among other things, directs development by assigning personnel, scheduling the work and communicating information to where it is needed.

Feedback would be sought from a range of sources for a range of purposes. Obviously, the project manager needs to know how the project is progressing but also needs feedback on whether the development team’s expertise is sufficient to complete the development and whether the customer perceives their concerns are being heeded, as well as on a whole range of similar issues.

The project manager’s actions are unlikely to be uniform in response to the project characteristics and are, instead, more likely to exhibit some form of capability scale. Some of the capability is likely to be attributable to the demands of the project and some to the project manager’s individual skill. There is a range of alternative capability scales. One is that the capabilities will conform to the CMMI and SPICE scale, based on increasing management control and process repeatability. Another possible capability scale is that key activities will be performed with increasing complexity or increasing rigour, as demonstrated in Ibbs and Kwak’s project management capability model [5].

Within the context of systems theory, we believe activities are likely to be grouped into those concerning constraints, those concerning feedback and those concerning directing the work.

If the systems model were adopted, then project management would improve, or become more capable, as a result of the project manager’s endeavours as he/she:

• Established and incorporated the project’s constraints into the project strategy and subsequent plan during project planning.
• Modified constraints during project planning to better achieve the project outcomes.
• Established sources of feedback.
• Established the type of feedback: subjective, objective, political, etc.
• Monitored the feedback during the project.
• Monitored the constraints during the project.
• Modified constraints during the project.
• Established and practised control action.

4.1 Principles of capability

A general principle was proposed that greater capability would be associated with greater awareness and greater complexity. A project manager who actively sought out and tested a number of project constraints would be judged more capable than a project manager who was unaware of the project’s constraints. Seeking multiple sources and different types of feedback would demonstrate greater capability than seeking a minimal number of feedback sources. Using such principles, a scale of capability emerged from the data analysis. An example scale for one project management activity is given in Table 11. Similar capability scales were developed for each of the attributes.

Table 11: Example of capability attribute – Modify constraints during planning

Determine if some constraints can or must be changed:
• Product related: requirements, budget, schedule
• Personnel related: expertise, teamwork, social issues
• Infrastructure: tools, logistics

Scale   Indications
1       No interest. Accept the given constraints.
2       Some investigation into product related constraints.
3       Product related and some personnel related. Actively investigate constraints in limited areas.
4       Limited investigation into constraints other than product and personnel.
5       Knowledgeable investigation of all constraints: product, personnel, political, infrastructure, technology.

4.2 Construct validity

The interview data were re-examined to assign a capability level to each project manager for each of the project management activities described above.

All items in the capability model appear to be highly correlated.

4.3 Correlations

For the purposes of analysis, activities were separated into those which would be performed during project planning and those which would be performed while the project was in progress. The measures of each activity were then correlated against organization size, process maturity and project size.

Both organization size and process maturity were highly correlated with all capability attributes but project size was only weakly correlated with capability attributes, as shown in Table 12.

Table 12: Kendall’s tau-b correlations

                        Org size   Process    Project
                                   maturity   size
Establish constraints    .614**     .624**     .339*
Alter constraints        .563**     .541**     .409**
Sources of feedback      .507**     .488**     .277
Type of feedback         .567**     .559**     .240
Monitor feedback         .462**     .546**     .358*
Monitor constraints      .623**     .689**     .477**
Modify constraints       .525**     .528**     .315
Controlling actions      .682**     .670**     .409**

* Correlation is significant at the .05 level (2-tailed).
** Correlation is significant at the .01 level (2-tailed).

It is readily evident that there is a strong correlation between organization size and systems theory based project management capability, and between organizational process maturity and systems theory based project management capability, but weaker evidence of a correlation between project size and such capability.
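The tau-b statistic reported in Table 12 is Kendall's tau with the standard correction for tied ranks, which are inevitable when both variables are coarse ordinal scales such as capability levels. A self-contained sketch of the computation follows, run on synthetic data; the figures below are illustrative only, not the study's data:

```python
import math
from collections import Counter

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation, with the standard tie correction."""
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (x[i] > x[j]) - (x[i] < x[j])  # sign of the x difference
            dy = (y[i] > y[j]) - (y[i] < y[j])  # sign of the y difference
            s = dx * dy
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    n0 = n * (n - 1) // 2                                     # all pairs
    n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())   # tied pairs in x
    n2 = sum(t * (t - 1) // 2 for t in Counter(y).values())   # tied pairs in y
    return (concordant - discordant) / math.sqrt((n0 - n1) * (n0 - n2))

# Illustrative data only: capability levels (1-5) paired with an ordinal
# organization-size code for five hypothetical organizations.
capability = [1, 2, 2, 4, 5]
org_size   = [1, 1, 2, 3, 3]
print(round(kendall_tau_b(capability, org_size), 3))  # prints 0.825
```

Without the tie correction (plain tau-a), the tied pairs would deflate the coefficient; tau-b rescales the denominator so that a perfect monotone relationship with ties can still approach 1.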

5 Threats to validity

Small sample size. The sample was relatively small (n = 29), and many statistical tests suffered from insufficient cell counts, usually fewer than 10.

Non random sample. The participating organizations were those listed in the Sydney, Australia, Yellow Pages who agreed to be interviewed when approached by telephone. Such accidental sampling is considered to have very weak external validity and is likely to be biased [11].

Weak external validity. Organizations with low maturity, chaotic project management processes are less likely to be willing to reveal to a researcher just how they manage projects – or, rather, don't manage them. Consequently, the findings of this research are likely to be biased toward the more mature organizations. However, given the conclusions, the weak external validity is of less importance.

Localized sample. The research sample was taken from organizations in Sydney, Australia. While there were a significant number of multinational organizations in the sample, it is possible that the research findings are similarly localized. The study would need to be replicated in another country to test this.

Post analysis hypothesis. The hypothesis that Systems Theory may be applicable as the basis for a model of project management capability was formulated after the interview data had been gathered and analysed. Any conclusion from the subsequent analysis must be regarded as a possible indication of some relationship rather than proof of such a relationship. It could form the basis of a future study to explore the finding in greater depth and rigour.

6 Conclusion and further research

Capability models have been used very successfully to guide process improvement and to provide indications of an organization's maturity, but their use of the same set of higher capability level activities for all processes has not previously been tested.

While a process based project management model showed no correlation with the SPICE model, the systems theory based capability model appears to correlate well with the kinds of activities performed by software development project managers.

The research strongly suggests that a Systems Theory view of project management is more likely to accurately reflect what software development project managers actually do to monitor and control a project, and to provide a strong capability scale for those activities. Indeed, a capability scale based on increasing scope and increasing complexity of the performed activities appears to apply to the activities actually carried out by project managers. Such a scale, based on increasing scope and complexity of Systems Theory based activities, is also highly correlated with both organization size and organizational process maturity.

Since these conclusions suffer from weak validity, both internal and external, they need to be explored and validated by further research specifically directed at the correlation between a systems theory model of project management capability and the practices actually performed by project managers.

7 Acknowledgments

This is Contribution number 04/11 of the Centre for Object Technology Applications and Research (COTAR).

8 References

[1] P. Checkland, "Systems Thinking and Management Thinking," American Behavioral Scientist, vol. 38, pp. 75(17), 1994.

[2] P. Checkland, Systems Thinking, Systems Practice. Chichester: John Wiley & Sons, 1981.

[3] P. B. Crosby, Quality is Free: The Art of Making Quality Certain. McGraw-Hill, 1979.

[4] W. R. Duncan, "A Guide to the Project Management Body of Knowledge," Project Management Institute, 1996.

[5] C. W. Ibbs and Y. H. Kwak, "Assessing Project Management Maturity," Project Management Journal, vol. 31, pp. 32-43, 2000.

[6] ISO 15504:1998 - Information Technology - Software Process Assessment.

[7] N. Leveson, "A New Accident Model for Engineering Safer Systems," presented at the MIT Engineering Systems Division Internal Symposium, Boston, MIT, 2002.

[8] SEI, "Capability Maturity Model for Software (Version 1.1)," Software Engineering Institute, Pittsburgh, CMU/SEI-93-TR-024, 1993.

[9] SEI, "CMMI for Systems Engineering/Software Engineering, Version 1.02," Carnegie Mellon University/Software Engineering Institute, Pittsburgh, CMU/SEI-2000-TR-019, 2000.

[10] L. Skyttner, General Systems Theory: Ideas & Applications. Singapore: World Scientific, 2001.

[11] W. M. K. Trochim, The Research Methods Knowledge Base. Cincinnati: Atomic Dog Publishing, 2001.

[12] G. M. Weinberg, Introduction to Systems Thinking, 3rd ed. New York: Dorset House, 2001.

Proceedings of the 11th Asia-Pacific Software Engineering Conference (APSEC’04) 1530-1362/04 $ 20.00 IEEE
