Design principles for the development of measurement systems for research and development processes

Inge C. Kerssens-van Drongelen¹ and Andrew Cook²

¹ Faculty of Technology and Management, University of Twente, The Netherlands
² CSC Consulting and Systems Integration, Preston, PR1 1RE, UK

R&D Management 27, 4, 1997. © Blackwell Publishers Ltd, 1997. Published by Blackwell Publishers Ltd, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.

———
¹ In this paper, we will use the abbreviation R&D as a general term to indicate activities varying from basic research to product development and introduction. If at any point basic research or any other specific R&D category is intended, then it will be clearly distinguished in that context.

Based on a comprehensive literature review and the activities of numerous case study companies, it is argued in this paper that performance measurement in R&D is a fundamental aspect of quality in R&D and of overall business performance. However, it is apparent from the case companies that many companies still struggle with the issue of R&D performance measurement. Excuses for not measuring are easily found, but there are also empirical examples and literature available with suggestions how it can be done. In this article this literature is reviewed and placed within the context of general performance control and contingency theory. Furthermore, the main measurement system design parameters are discussed and some basic system requirements are described, as well as several design principles that can be useful for those who accept the challenge of establishing a meaningful measurement system.

Introduction

Research and Development (R&D)¹ was once considered to be a unique, creative and unstructured process that was difficult, if not impossible, to manage and control. The standard management and control techniques used in other parts of the organization were therefore considered inappropriate for R&D (Roussel et al., 1991). However, recent changes in the business environment — intensified competition, splintered mass markets, shortened product life cycles, and advanced technology and automation, etc. — have focused senior managers' attention on R&D's contribution to competitive advantage (Kumpe and Bolwijn, 1994; Wheelwright and Clark, 1992). Today, R&D is no longer considered to have a merely supportive role to the primary business processes, but to be a vital part of them (de Weerd-Nederhof et al., 1994). These changes challenge companies to improve their R&D processes in terms of efficiency, internal and external customer focus, time to market and innovativeness (Kumpe and Bolwijn, 1994; Wheelwright and Clark, 1992). Although managers still acknowledge that R&D processes have several characteristics that make them different from other business processes, they no longer accept that this should mean they are unmanageable. Several companies have now adopted management techniques traditionally used in other parts of the company, like Total Quality Management and activity control, in R&D (Miller, 1995). One of the key aspects of quality management is the need for performance measurement (Miller, 1995; Schumann et al., 1995). Measurement drives behaviour and, even more importantly, behaviour change. Furthermore, it supports the prioritization of actions and enables comparing and tracking of performance changes and differences (Schumann et al., 1995).


If TQM is to be used effectively in R&D, a measurement system needs to be in place to give accurate and timely data on R&D performance. Furthermore, feeding back this performance information should motivate the researchers to excel and improve. This approach to measuring R&D performance contrasts with the traditional approach, which is often limited to annual budgetary and informal professional control (Roussel et al., 1991). Thus, there is a clear need to develop new measurement systems that meet the requirements of modern R&D management and accommodate the peculiarities of R&D processes. This article focuses on the issues involved in the development and content of such measurement systems.

The article starts with an introduction of the research method, followed by a brief overview of the theory of performance control and performance measurement. We continue with a discussion of the practical problems encountered when trying to apply these general control principles to R&D processes. Taking these problems into account, two critical issues that need to be considered to reduce the risk of failure when designing a measurement system for R&D — the purpose of the measurement and contingency factors — are detailed. Although we argue that the specifications of the performance measurement system should be developed by the users themselves to meet their particular needs, we will give an overview of measurement system requirements and design principles that can be helpful to this development process.

Research method

This article is based on a review of the R&D performance measurement literature. It appeared that many authors have focused on one aspect of R&D performance measurement systems, mostly the metrics or measurement methods that were or should be used to measure R&D performance. Often, the positioning of the metrics in a more general framework of control theory and a classification of metrics and methods based on application areas was lacking. To fill this gap, we have placed R&D performance measurement in the context of general control theory and will present the findings from the literature in such a way that they become usable for a manager who is responsible for developing an R&D measurement system.

The literature review is illustrated with empirical evidence and examples gathered in two separate research projects. One project was carried out in the Netherlands, whereas the other investigated British, Dutch, Swedish and Japanese companies. The Dutch project focused on different aspects of R&D performance measurement procedures (metrics, methods, norms, frequency and timing of measurement), the context in which they were applied (amongst others characterized by the type of R&D and the hierarchical level at which performance was measured), and the perceived effectiveness of the procedures. It consisted of a survey which was returned by respondents from 48 large and medium-sized companies, and nine in-depth interviews with R&D managers. A detailed discussion of the research method and the findings is presented elsewhere (Kerssens-van Drongelen and Bilderbeek, 1996). The other research project consisted of semi-structured interviews with managers and engineers involved in R&D in a variety of industries. Although the research addressed new product introduction performance in different operating scenarios, performance measurement issues were also investigated.

The role of performance measurement in performance control

In the broadest sense, control can be defined as 'any form of goal-directed influence' (de Leeuw, 1982). Thus, performance control of organizations and processes can be seen as ensuring that the combined efforts of the people involved, using multiple resources, are in line with company objectives and plans. In the literature, two basic approaches to performance control can be identified (Ansari, 1977; Bruggink, 1989). First, there is the structural or measurement approach adopted in the literature about cybernetics, accounting and management information systems. Here, control is seen as decision making on the basis of feedforward and feedback of (quantitative) information about actual performance and influential external factors. Thus, measurement is at the core of this approach. The other part of the control literature is dominated by the behavioural approach. Behaviourists see control as a set of formal and informal mechanisms for co-ordination, supervision and motivation of people that are applied to direct their behaviour towards organizational goals. At first glance, performance measurement does not seem to have a clear role in this approach. The decision about which control mechanism to apply in a specific situation could be based on general views, assumptions or personal preferences. However, it seems more reliable to base this decision on clear information about the current and expected status of organizational goal attainment and about factors that might influence goal attainment. In that case, the behavioural approach needs to be supplemented with measurement. Combining the behavioural and the measurement perspectives in this way, performance control can be defined as a process consisting of the following activities:

- the acquisition and analysis of information;
- the interpretation of this information to determine what to do and how to do it; and
- the application of the chosen measures to influence people so that their efforts are aligned to company objectives and plans.

In accordance with this definition, performance measurement can be interpreted as a specific part of the control process, namely: the acquisition and analysis of information about the actual attainment of company objectives and plans and about factors that may influence plan realization. Finally, a performance measurement system can be defined as a set of tools and procedures supporting the measurement process. It is the mechanism by which the performance information is gathered, recorded and processed (see Figure 1). In the next section, we will discuss the problems encountered when trying to set up such a system in R&D.

Figure 1. The measurement and control process.

Problems with performance control in R&D

Several problems with designing and implementing R&D measurement systems have been reported in the literature (Kerssens-van Drongelen, 1994). The most prominent of these problems are listed and some approaches to address them are described below.

The problems

Obviously, it is desirable to measure R&D performance in terms of the standard financial performance measures applied to other business functions — profit, return on investment, return on net assets, etc. However, when this is attempted two main structural problems are encountered. The first problem is the difficulty of isolating from other business activities the contribution of R&D to company performance; it is also problematic, with traditional accounting methods, to identify the contribution of R&D to the profits resulting from individual new products. The second problem is the time lag between R&D efforts and the potential financial rewards, which makes it difficult to use this information for timely decision making. This is especially applicable to basic research, but can also apply to applied research or development projects. For example, companies designing automotive components for vehicle manufacturers sometimes have to wait for up to two years for the production start and the first product sale. At Bell Labs there is a typical time lag of between 7 and 19 years for basic research projects (Pappas and Remer, 1985). In addition, it often cannot be foreseen in which products or processes basic and applied research output may be used. For example, several years ago a venture capital company that participated in our empirical research invested heavily in developing core pen computing technology that is still being incorporated in today's new products. Over a 5-year period they have launched more than 20 products based on the core technology and more are planned. Nobody could have reliably predicted this 5 years ago.

Related to the problems of selecting performance metrics is the determination of the correct norms for comparisons. Some interviewees in our empirical research indicated that they found this an even more difficult issue than the selection of metrics. By their very nature R&D projects are unique and involve non-repetitive processes. It is consequently difficult to compare and contrast two projects as they are always different, and researchers tend to object to targets based on past projects, or use project uniqueness as an excuse for not meeting those targets. However, it should be noted that project uniqueness is more applicable to (basic) research projects than to product development projects; usually, the activities of development projects can largely be defined and scheduled in advance. Setting norms for metrics at a more aggregated level (e.g. the performance of the R&D department in general) can be problematic as well. Many companies lack records of past performance from which trends and norms can be derived, and relevant public data on these subjects are also hard to obtain.

The last problem we wish to discuss here is the acceptance of performance measurement in R&D. Many engineers and scientists believe the design and implementation of such a system is counterproductive, since the very act of measurement is thought to discourage creativity and to reduce motivation among highly educated technical people (Pappas and Remer, 1985). Brown and Svenson (1988) give two possible explanations for this negative attitude towards measuring R&D. The first explanation is that engineers and scientists may fear such systems because they may highlight their inadequacies and lack of productivity. The second explanation, which they regard as more significant, is that they have had bad experiences with inappropriate measurement systems leading to improper decision making, and consequently believe that no measurement system works.

How to deal with the problems

However, the problems mentioned above should not be used as excuses for not measuring R&D performance at all. As the vice presidents responsible for R&D at General Electric and at Alcoa have argued: a credible R&D organization needs to prove that it is working on the right things in the right way, with arguments that are as quantifiable as possible (Bridenbaugh, 1992; Robb, 1991). Furthermore, performance measurement is at the heart of any (R&D) quality management system (Francis, 1992; Miller, 1995; Schumann et al., 1995). Thus, measuring R&D performance is necessary for both external and internal reasons. The challenge is to overcome the problems discussed and to find solutions that can cope with the peculiarities of R&D processes and the attitude of the people working there.

For example, the behavioural problems of R&D measurement acceptance might be reduced by involving the R&D employees in the design of the system (Brownell, 1985). In organizations with empowered (multifunctional) teams, employees might even themselves ask for an appropriate measurement system that supports them in their own decision making (Meyer, 1994).

The problem of norm setting should be addressed first of all by building up a structured project data system and getting started with measuring performance. Many companies we visited in our empirical research still lacked structured project data systems, making it difficult to learn from the past. Once measuring and recording have been started, records of past activities and performance will become available that can be used to forecast new project performance more accurately and to develop realistic norms (Patterson, 1983). To ensure that the norms are not merely inward-looking and uncompetitive in the marketplace, managers should also actively engage in benchmarking initiatives such as those currently offered by consultancy firms, or should themselves try to find benchmarking partners.
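To illustrate how such records might be turned into norms, here is a minimal sketch (our own, not taken from the cited sources): it derives a schedule-adherence norm for new projects from the spread of past project durations. The project records, the field layout and the 5% tightening factor are all invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical records of completed projects: (name, planned months, actual months).
# All names and figures are invented for illustration.
past_projects = [
    ("P1", 10, 12),
    ("P2", 8, 9),
    ("P3", 14, 18),
    ("P4", 6, 6),
]

# Schedule adherence per project: actual duration relative to plan.
adherence = [actual / planned for _, planned, actual in past_projects]

# A candidate norm for new projects: historical mean adherence, tightened
# by an assumed 5% improvement target so the norm is forward-looking.
norm = mean(adherence) * 0.95

print(f"mean adherence: {mean(adherence):.2f} (sd {stdev(adherence):.2f})")
print(f"schedule-adherence norm for new projects: {norm:.2f}")
```

Once external benchmarking data become available, the self-derived norm could be checked against them rather than remaining purely inward-looking.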

To overcome the success definition problem, many R&D managers and academic researchers have looked at alternatives for success measurement, e.g. process or direct output measurement (see for example Brown and Gobeli, 1992; Griffin and Page, 1993; Kerssens-van Drongelen and Bilderbeek, 1996; Moser, 1985; Packer, 1983; Pappas and Remer, 1985; Patterson, 1983). Unfortunately, this theoretical body of knowledge is rather fragmented. Different measures and measurement methods have been proposed, but a general framework that defines what concepts to use in what context (e.g. purpose of the measurement, organizational level, type of research) is lacking. Developers of company-specific R&D measurement systems should be wary of adopting one of these approaches without in-depth situation analysis, as they may run a serious risk of failure. Instead, they should first address two crucial issues: what is the specific purpose of the measurement procedure (i.e. incentive or diagnostic purposes) and what are the contingencies? Only after these issues have been addressed properly should system developers turn to metrics, measurement techniques, frequency, etc. that match their specific situation (Kerssens-van Drongelen, 1994). If different purposes or contingencies are at stake at the same time, the last step should be to align the different measurement procedures with one another, resulting in an efficient performance measurement system. In the next section of the paper we will discuss the crucial issues of measurement purpose and contingency factors in more detail.

Purposes of measurement

There are effectively two clusters of purposes for performance measurement, each requiring its own approach to measuring. First, performance measurement can serve the purpose of motivating people. The assumption here is that by feeding back information about their performance, possibly coupled with incentives, people will be motivated to change their behaviour. The second group of purposes is associated with diagnosing activities (e.g. projects) and organizational units. For example, a diagnostic approach can be used to assess whether problems can be expected, or whether organizational changes have had an effect. Measurement systems for these purposes include, for example, project progress-monitoring systems, post-project evaluations, and organizational audits. A major difference between motivational and diagnostic purposes is that the former needs to include only those factors that can be controlled by the person(s) or group subjected to the measurement, whereas the latter should preferably assess the combined effects of personnel, technology and environment (Pritchard, 1990). Furthermore, if for motivational purposes an incentive or reward programme is coupled with the measurement (e.g. bonuses, promotions), one should ensure that all the important influential aspects of work are measured. This is because there is a human tendency to direct more energy towards tasks that are being measured, giving less attention to tasks which are not. The diagnostic purposes can frequently be satisfied with incomplete performance measures (Pritchard, 1990).

The purpose or purposes of the measurement system determine the 'how, what, when and where' of the data collection, analysis and reporting. Thus, the purpose must be defined before any attempt is made to design the measurement procedure. Clearly, the data collection, analysis and reporting methods ideal for one purpose will not necessarily suit the other. At Philips Central Research, researchers involved in the development of a self-audit tool for project teams were very suspicious about what would be done with their audit data. They said they would not use the tool if the data would also be used by higher-level management without any additional consultation. They felt that, although the data would suit their own purpose very well, it could easily lead to misunderstandings and improper decision making by others (Deetman, 1994). But whenever possible, one should obviously try to couple the data collection step of different measurement procedures into one measurement system to minimize duplication of effort and reduce the administrative burden. The collected data can then be analysed and reported according to the different individuals' needs. Spreadsheet packages or relational database management systems seem useful tools for this.
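As a minimal sketch of this idea, one shared data collection step feeding differently filtered reports, the following uses an in-memory SQLite table. The table layout, the example metrics and the role-specific queries are our own illustrative assumptions, not a system described in the paper.

```python
import sqlite3

# One shared store for raw measurement data (illustrative schema).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measurements
               (project TEXT, metric TEXT, value REAL, period TEXT)""")
con.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?, ?)",
    [("Alpha", "milestones_met_pct", 80.0, "1997Q1"),
     ("Alpha", "cost_overrun_pct",   12.0, "1997Q1"),
     ("Beta",  "milestones_met_pct", 95.0, "1997Q1"),
     ("Beta",  "cost_overrun_pct",   -3.0, "1997Q1")])

# The project team's view: everything about its own project.
team_view = con.execute(
    "SELECT metric, value FROM measurements WHERE project = ?",
    ("Alpha",)).fetchall()

# Higher management's view: one aggregate per metric across projects.
mgmt_view = con.execute(
    "SELECT metric, AVG(value) FROM measurements GROUP BY metric").fetchall()

print("team report:", team_view)
print("management report:", mgmt_view)
```

A spreadsheet with pivot tables over the same raw sheet would serve the same separation of views; the point is that data are collected once and reported per audience.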

Contingency factors for measurement procedures

The contingency model of organizations suggests that organization design should be in harmony with the contingency factors in the environment. As argued by the contingency theory of management accounting, these external factors, as well as specific internal factors, also have an impact on the design of measurement systems (Emmanuel et al., 1990). External influential factors include, for example, market predictability, the nature of competition and the number of different product-markets faced. Relevant internal features include the size of the organization, organization structure and the interdependence between units, the degree of decentralization, the organization's strategy, the control policy and the nature of the primary business process (Emmanuel et al., 1990; Snellenberg, 1995). For a specific measurement procedure, which could be part of a larger measurement system, the aggregation level at which performance is measured can also be regarded as an influential factor.

In the R&D performance measurement literature, as well as in our own empirical research, the influence of several of the general contingency factors on R&D measurement system design has been confirmed. For example, Loch et al. (1996) found evidence that the nature of competition, especially the degree to which technical competence is a competitive factor, has an impact on the choice of R&D process performance measures. Furthermore, Griffin and Page (1996) have confirmed the influence of strategy on R&D metrics selection. They found that the most appropriate programme-level and project-level success measures depend on the company's innovation strategy and the project strategy respectively.

A specific instance of the contingency factor 'nature of the primary business process' in an R&D environment could be the type of R&D. R&D activities can be classified in a number of ways. The most widely used classification is the one by the OECD, which identifies three categories: basic research, applied research and (product and process) development. Product development efforts can be further subdivided into runners, repeaters and strangers (Cook et al., 1996). A runner is a relatively trivial product change that involves minimal effort. A repeater is a product that the organization is familiar with, and a stranger is a radically different product for the organization, which consequently requires more effort and is more risky. Wheelwright and Clark (1992) use a similar classification, but label them breakthrough, platform and derivative projects. Several authors have suggested that the type of R&D influences one or more measurement system design parameters such as the metrics, measurement techniques, norms and frequency of measurement (Brown and Gobeli, 1992; Pappas and Remer, 1985; Quinn, 1960). In basic research projects it is often not known when or for which products or processes the outputs will eventually be used, if at all. In such a situation, criteria concerning market opportunities, economic benefits, etc. are usually not obtainable or relevant. The measurement technique proposed for this type of research is 'qualitative evaluation' (e.g. peer reviews). Conversely, for development projects economic and market outcome measures are relevant and important, and these can and should be measured quantitatively. Figure 2 shows a classification of measurement techniques that Pappas and Remer (1985) advocate. Although these propositions seem evident, they have not been proven with substantial empirical data. While the differences between the two extremes, basic research and product development/improvement, may instinctively be accepted, it becomes more problematic with the R&D types in between. In our Dutch empirical research, for instance, we found no difference between the measurement procedures for applied research and for product development/improvement. The interviewees explained that they did not have different departments for these types of R&D. The same researchers worked on both types of R&D, and in some organizations these types of R&D were also combined within one project. Thus, it was not feasible to have separate measurement procedures for applied research and development.

Figure 2. The relevance of different measurement techniques (based on Pappas and Remer, 1985).

In our Dutch empirical research project, the contingency factor 'aggregation level at which performance is measured' appeared to have a profound influence on the design of measurement procedures. The metrics, norm-setting methods, frequency and timing of measurement, and measurement techniques were all found to be quite different in kind and/or in frequency of occurrence, depending on the level at which performance was measured. For example, we found that at the individual level subjective assessment by superiors was the most applied measurement technique, measuring metrics such as working speed, accuracy, ability to work in teams and achievement of milestones. On the other hand, at the organizational level the most used measures were: number of projects completed, customer satisfaction, percentage of sales due to new products and time to market. The measurement techniques most frequently mentioned for this level were quantitative measurement and satisfaction questionnaires/verbal feedback from customers (Kerssens-van Drongelen and Bilderbeek, 1996).

The performance measurement system

In this last section of the paper, we will deal with the actual design of a performance measurement system. First, the basic system requirements will be presented. Then the major system design parameters will be discussed.

Measurement system requirements

Any measurement system must satisfy some basic criteria. It should allow the right information, at the right time, to be collected reliably and economically. Many measurement systems we have seen do not do this. Some systems have developed in an uncontrolled fashion over a number of years. The result is an unwieldy measurement system that does not fit the business structure or activities. In other cases the processes and the organization have changed significantly, but the measurement system has remained the same. In both situations the system creates an unnecessary overhead burden and collects information that may be irrelevant to business needs.

A measurement system should be designed using a holistic approach. All relevant factors should be taken into consideration — the cost of running the system, the timing of the reporting, both the normal and ad hoc information requirements of the decision makers, available benchmarking criteria, etc. Information technology can play a useful role in reducing the data collection and analysis overhead, increasing the frequency of reporting and expanding the set of possible metrics, thus giving the ability to produce ad hoc reports as and when required.

Other writers support these views. Packer (1983) identified five criteria that a measurement system should satisfy. The information must be understandable and easy to interpret, it must be relevant, and it must be reliable (that is, representative, unbiased and verifiable by others). Furthermore, the system must be acceptable and cost effective.

The sophistication and motivation of the stakeholders involved (the people who are subject to the evaluation, the evaluators and the users of the outcomes) will also impose requirements; in most cases any one group of people might have more than one role. If, for example, the people subject to evaluation have a positive attitude towards performance measurement, they will be more inclined to co-operate in a quantitative assessment, but if they are resistant to evaluation, qualitative assessment might be the only option. The requirements of the users will differ with the organizational level. The higher the level, the more aggregated and the more quantitative (financial) the metrics tend to be.

Major design parameters

Measurement system design parameters can be defined as the topics that need to be addressed when developing a measurement system. The main parameters are: metrics (performance measures), measurement system structure, standards against which to measure performance, measurement techniques, reporting formats, and the frequency and timing of measurement and reporting.

Metrics. As stated previously, the measures of performance (metrics) must align with the purpose of the measurement and the contingency factors. Furthermore, they must reflect the objectives for, and responsibilities of, the person(s) or activities that are being measured. It is widely recognised that the measures of performance used have a strong influence on business activities and results (Kaplan and Norton, 1992). The old adage 'you get what you measure' also holds true in R&D. It would be fundamentally wrong to measure laboratories whose objectives include identifying future technologies and undertaking basic research that may first be incorporated in products in 20 years' time against traditional short-term oriented criteria such as payback or return on investment. If a basic research laboratory is measured on such criteria, it would undoubtedly change over time and turn towards work, such as incremental product development, that has an immediate return.

Many different metrics are used in practice. In fact, there is an unlimited number of criteria that can be used as measures of performance. All the metrics that have been identified in the literature and in the case study companies can be categorized as operationalizations of one or more of five top-level measures — cost, quality, time, innovativeness and contribution to profit. As argued by Bolwijn and Kumpe (1994), efficiency (cost), quality, flexibility and innovativeness reflect the critical responses of a firm to the increasing demands of its customers. ('Flexibility' in R&D is translated into time issues like throughput time and timeliness.) Performance on these four aspects should eventually result in performance on the fifth metric: contribution to profit. The five top-level metrics also align with the four perspectives mentioned by Kaplan and Norton in their 'balanced scorecard' (1992, 1993, 1996): 'quality' corresponds with the 'customer perspective'; 'efficiency' and 'timeliness' with the 'internal business process perspective'; 'innovativeness' with 'innovation and learning'; 'contribution to profit' with the 'financial perspective'. With measures from the first three perspectives, R&D managers (or empowered teams) can diagnose whether individual researchers, teams, departments, etc. are currently focusing and performing well on those aspects of their work that are assumed to be critical to business success. In addition, measures from the financial perspective help to analyse — albeit with a time lag — whether the R&D strategy itself was right.
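The correspondence described above can be stated compactly. The sketch below is our own restatement of that mapping; the example metrics listed under each measure are illustrative assumptions, not a catalogue from the paper.

```python
# Mapping of the paper's five top-level measures onto the four
# balanced-scorecard perspectives, as described in the text.
PERSPECTIVE = {
    "quality":                "customer",
    "cost (efficiency)":      "internal business process",
    "time (timeliness)":      "internal business process",
    "innovativeness":         "innovation and learning",
    "contribution to profit": "financial",
}

# Illustrative (assumed) metrics categorized by top-level measure.
example_metrics = {
    "time (timeliness)":      ["time to market", "milestone adherence"],
    "innovativeness":         ["number of patents"],
    "contribution to profit": ["% of sales due to new products"],
}

for measure, metrics in example_metrics.items():
    print(f"{measure} -> {PERSPECTIVE[measure]} perspective: {', '.join(metrics)}")
```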

Although applications of all five top-level measures have been found in practice, most companies use only a limited sub-set. In a study of 124 R&D managers, Moser (1985) found that from a given list of metrics, the ones mentioned in Table 1 were used most frequently and consistently in practice. In Table 1 we have indicated to which top-level measures we think the metrics reported by Moser are linked. Innovativeness measures (e.g. number of patents, number of professional awards) were also used, but less frequently. Contribution to profit measures were not included in the survey, and no distinction was made between the different types of R&D or organization levels. It can be observed that 'soft' measures are the ones most commonly used, even though they are difficult to calculate accurately and reliably in practice. This finding was confirmed in our empirical research.

Table 1. Measures of performance used in practice (adapted from Moser, 1985).

Actual measure (as defined by Moser), in order of descending use    Quality      Cost         Time
Quality of output or performance                                    ✓            –            –
Unit's degree of goal attainment                                    sometimes    sometimes    sometimes
Amount of work done on time                                         sometimes    –            ✓
Unit's level of efficiency                                          –            ✓            –
Percentage of project completions                                   sometimes    –            sometimes
Percentage of results adopted by the company                        ✓            –            –

Griffin and Page (1993) identified a somewhat different list of metrics for diagnosing product development success and failure. They made a distinction between firm-level measures, programme-level measures and project-level measures (subdivided into development aspects (cost/time), financial aspects (contribution-to-profit indicators) and customer acceptance (quality)). The ones most often mentioned are presented in Table 2. Again, the innovativeness measures were lacking. The contribution to profit measures were found to be used quite often. In fact, they found that managers use on average about four criteria, mostly from the quality and contribution to profit categories.

Table 2. Five categories of metrics for product development (adapted from Griffin and Page, 1993).

Firm-level: % of sales by new products; impact of new product programme on corporate performance.
Programme-level: strategic fit with business; programme met new product objectives.
Project-level, development aspects: development cost; launched on time; product performance; met quality targets; speed to market.
Project-level, financial aspects: break-even time; attain market goals; attain profitability goals; IRR/ROI.
Project-level, customer acceptance: customer acceptance; customer satisfaction; met revenue goals; revenue growth; met market share goals; met unit sales goals.

Measurement system structure. As more than one metric often is, and should be, used, the question arises whether and how these measures should be systemized in some kind of framework. This systemization should help to check whether all the important aspects of performance are measured, and to elucidate compensation effects between measures. In our case study companies, we rarely observed such systematic approaches.

In business economics, a pyramid structure is usually preferred for a metrics framework. Such a framework has one aggregated primary measure on top (e.g. return on R&D investment), from which several elucidating and supporting variables are derived (Bruggink, 1989). The lower-level metrics could serve performance measurement and decision making at the lower levels of the organization, whereas the aggregated primary measure is used by top management. Although attempts to apply this approach in R&D have been reported (Foster et al., 1985), this mathematical deduction method is not widely used in R&D because of the problems mentioned earlier (acceptance, time lag, etc.).

Another systemization concept, which we think could be more suitable for R&D performance measurement, is the balanced scorecard already mentioned in the previous section (Kaplan and Norton, 1992, 1993, 1996). In this approach, strategic objectives are formulated from four perspectives (the financial, customer, internal process, and innovation and learning perspectives). In Figure 3, an example is given of a balanced scorecard for R&D. For each objective, one or more metrics can be formulated. Some of the objectives can easily be translated into a metric; e.g. 'current development time' divided by 'reference development time' reflects performance on 'time to market improvement'. But for other objectives, a construct has to be chosen which is assumed to reflect the performance on the objective (e.g. ranking by key accounts as a metric for customer satisfaction). In fact, the objectives themselves are also assumed to have a causal relationship with better financial performance. This is also recognized by Kaplan and Norton (1996), who present the balanced scorecard as an instrument of strategic learning about the validity of the strategy and the quality of its execution. For motivational purposes, the top-level balanced scorecard might be translated into unit or even individual scorecards. This could be done by translating the strategic objectives into objectives that can be influenced by the people at that organizational level and formulating corresponding performance measures.

Figure 3. Example of a balanced scorecard for an R&D organization. (Adapted from R.S. Kaplan and D.P. Norton, Using the balanced scorecard as a strategic management system, HBR January–February 1996.)
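To make the example metric concrete, here is a minimal sketch with invented figures: the ratio of current to reference development time, where a value below 1 signals improvement. Both durations are assumptions for illustration.

```python
# Time-to-market improvement, computed as the example in the text suggests:
# current development time divided by a reference development time.
reference_months = 18.0  # assumed historical baseline for a comparable project
current_months = 15.0    # assumed duration of the project being measured

ratio = current_months / reference_months
print(f"time-to-market ratio: {ratio:.2f}")        # < 1.0 indicates improvement
print(f"improvement vs reference: {(1 - ratio):.0%}")
```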

Standards to measure performance against. Performance measures are meaningless unless they are compared against generally accepted reference points (Kerssens-van Drongelen, 1994). Industry standards, self-established targets (for example, budget or schedule adherence) or externally generated benchmarks from competitors or companies outside the industry can all be used. In our empirical research among Dutch companies involved in R&D, we found that the procedures for norm setting depended largely on the organizational level at which performance was measured. Further, it must be noted that if performance is measured against self-defined goals, then these goals must be meaningful. For example, the owner-manager of an automotive company that participated in our empirical research in the UK imposed time schedules that the staff found to be unrealistic. This consequently created a fire-fighting style when projects invariably slipped from target.

Measurement techniques. A metric may be measured qualitatively, quantitatively, or by a hybrid of the two methods (Pappas and Remer, 1985). Qualitative methods inherently result in subjective and intuitive measures. This method is often used for individual performance appraisal and stage-gate project assessment. An expert, normally a line manager, evaluates the performance with criteria such as pass or fail, very good, good, poor or very poor. We frequently encountered this practice in our field companies as well (Kerssens-van Drongelen and Bilderbeek, 1996). The better companies use an unbiased staff manager to assess project performance in this situation. It is difficult to compare one qualitative appraisal with another, as the answers are often arbitrarily awarded and never scored or ranked. Semi-quantitative methods convert a qualitative answer to a 'number' by applying a rank or a score. The accuracy and the reliability of the method may be improved by averaging the scores for a metric from a number of experts. Personal preference and bias are a major problem with non-quantitative methods and should consequently be avoided if possible. To prevent a company bias, it is also recommended to involve the actual customer or a customer test group in the assessment when they can be identified. Some of our case companies used such an approach. In general, they were very satisfied with this measurement technique.
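A minimal sketch of such a semi-quantitative procedure, assuming an illustrative five-point scale and invented expert ratings: qualitative verdicts are mapped to scores and averaged across experts to damp individual bias.

```python
from statistics import mean

# Assumed mapping from qualitative verdicts to a five-point scale.
SCALE = {"very poor": 1, "poor": 2, "fair": 3, "good": 4, "very good": 5}

# Invented ratings of one project by three experts (illustration only).
ratings = {
    "expert A": "good",
    "expert B": "very good",
    "expert C": "fair",
}

# Convert each verdict to a score, then average across the experts.
scores = [SCALE[verdict] for verdict in ratings.values()]
print(f"semi-quantitative score: {mean(scores):.2f} on a 1-5 scale")
```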

Quantitative methods use pre-defined algorithms to generate numeric measures that can be compared with reference points. A further distinction can be made between 'bean-counting' bibliometric methods and economic methods. Examples of metrics used in bibliometric analysis are the number of patents approved per employee and the number of new products per dollar spent on R&D. The main problem with these bibliometric methods is that they assume that there is a direct causal relationship between the metric and overall business success. Several people have argued against this (Brown and Svenson, 1988; Packer, 1983). For example, it is often assumed that having more patents from a given level of input is good for a company. However, a company may have more 'ideas' than it can turn into new revenue-earning products. In such a situation R&D resources are wasted. This example also illustrates the 'cause and effect' aspect of metrics and the old adage 'you get what you measure'. In order to look good, management may direct resources to maximise the number of patent approvals and not to maximise R&D's contribution to business performance. Thus, such metrics should not be used as the single performance measure. In a balanced combination with other metrics, they might be useful.

Economic methods use more sophisticated algorithms than the bibliometric methods; they assume that both inputs and outputs can be translated into monetary costs and benefits. Due to the problems already given, economic methods are used by only a small number of companies. For example, Alcoa uses the 'present value of an accomplishment' as a performance measure (Patterson, 1983) and Hitachi monitors R&D performance with the ratio 'R&D's contribution to profit / R&D costs' (Kuwahara and Takeda, 1990).
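As a toy illustration of such an economic ratio in the spirit of the Hitachi example, with all figures invented: note that attributing profit to R&D is itself the difficult step discussed earlier in the paper.

```python
# Hitachi-style economic metric: R&D's contribution to profit / R&D costs.
# All figures are invented for illustration; isolating the profit that is
# attributable to R&D is exactly the structural problem described earlier.
rd_costs = 4.0e6                 # assumed annual R&D spend
profit_attributed_to_rd = 6.5e6  # assumed profit contribution of R&D output

ratio = profit_attributed_to_rd / rd_costs
print(f"R&D contribution-to-profit ratio: {ratio:.2f}")  # > 1 suggests R&D pays off
```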

Frequency of measurement and reporting. It is generally accepted that, unlike some production activities, continuous measurement of R&D performance is not appropriate. Performance should be measured at key decision points, for example project milestones, annual budgets and periodic progress reviews. The milestones should be set so that prompt corrective action can be taken, but not so frequently that they become a burden on the project. This applies especially to basic research, where project deliverables usually occur sporadically. If the frequency of measurement is too great, staff may be demotivated if 'no progress' is repeatedly reported, or conclusions will be drawn too soon. Interesting in this respect is the finding by Bean (1995) that corporate labs in highly productive firms had substantially longer planning and review horizons (4–6 and 2–3 years respectively) than less productive firms, which operated on an annual cycle. Of course, this does not mean that project progress, project efficiency, etc. should not be measured between the formal reviews.

Discussion

In this paper we have explored the state of the art in R&D performance measurement. Starting from general control theory, we have identified the most dominant problems with performance measurement in research and development processes. In our field work we have sometimes encountered the attitude that, as it is so difficult to measure, one is better off not measuring at all. In some other companies, people were aware that their measurement system was not accurate, but they still used it because the metrics were easy to measure. However, we strongly disagree with both opinions, as there are both external and internal demands for accurate performance measurement. External demands come from top management (or other (internal) customers if a market control mechanism is used for R&D funding), to which the R&D organization has to qualify for resources by proving that it is working on the right things in the right way. Within the R&D organization, the need for performance measurement will also increase if quality management programmes are to be implemented effectively. Without an effective, efficient, appropriate and meaningful performance measurement system, R&D may be isolated from the true business needs and its role in the business may be questioned.

To support developers of R&D performance measurement systems, we have listed some basic system requirements, the main design parameters and several design principles for the development of meaningful measurement systems. We have illustrated these with empirical evidence found in the literature and in our own empirical research projects. However, we have also argued that managers should not unthinkingly copy the concepts proposed by others, but design their own tailor-made system, suiting the purpose(s) of measurement and the peculiarities of their R&D setting (type of R&D, organizational size, etc.).

From the literature overview it has also become apparent that although quite a lot has already been written about R&D performance measurement, there are still many areas that need further research. For example, research on the influence of contingency factors on R&D measurement system design has only just started and so far has mainly focused on metrics. However, from our Dutch empirical research we learned that contingency factors also influence the other system design parameters (Kerssens-van Drongelen and Bilderbeek, 1996). This seems a promising area for further research. Furthermore, we would like to mention the relationship between measurement purposes and R&D measurement system design as a future research topic. As has been summarized in this article, general performance control theory suggests that different measurement purposes require different measurement procedures. However, in the R&D management literature this topic has received little attention so far. Finally, we reiterate the need for a general framework that would give a clear overview of what measurement concept to use for what purpose and in which context. Our empirical research has taught us that such a practical tool would be warmly welcomed by those assigned to develop a measurement system for their R&D process.

References

Ansari, S.L. (1977) An integrated approach to control system design. Accounting, Organizations and Society, 2, 2.

Bean, A.S. (1995) Why some R&D organizations are more productive than others. Research • Technology Management, January–February.

Bridenbaugh, P.R. (1992) Credibility between CEO and CTO – a CTO's perspective. Research • Technology Management, November–December.

Brown, M.G. and Svenson, R. (1988) Measuring R&D productivity. Research • Technology Management, July–August.

Brown, W.B. and Gobeli, D. (1992) Observations on the measurement of R&D productivity: a case study. IEEE Transactions on Engineering Management, 39, 4.

Brownell, P. (1985) Budgetary systems and the control of functionally differentiated organizational activities. Journal of Accounting Research, 23, 2.

Bruggink, A. (1989) Performance Control in Banking: Theory and Application. Thesis, University of Twente, Enschede.

Cook, A. et al. (1996) The CSC Manufacturing Industry Handbook. Solihull: CSC.

Deetman, G. (1994) Project evaluation within Philips Research. Course Book 'Managing the R&D Process' – Part II. Enschede/Milan, pp. 277–288.

Emmanuel, C., Otley, D. and Merchant, K. (1990) Accounting for Management Control. London: Chapman & Hall, pp. 57–67.

Foster, R.N., Linden, L.H., Whiteley, R.L. and Kantrow, A.M. (1985) Improving the return on R&D — I. Research Management, January–February.

Francis, P.H. (1992) Putting quality into the R&D process. Research • Technology Management, July–August.

Griffin, A. and Page, A.L. (1993) An interim report on measuring product development success and failure. Journal of Product Innovation Management, 10, 291–308.

Griffin, A. and Page, A.L. (1996) PDMA success measurement project: recommended measures for product development success and failure. Journal of Product Innovation Management, 13, 478–496.

Kaplan, R.S. and Norton, D.P. (1992) The balanced scorecard – measures that drive performance. Harvard Business Review, January–February.

Kaplan, R.S. and Norton, D.P. (1993) Putting the balanced scorecard to work. Harvard Business Review, September–October.

Kaplan, R.S. and Norton, D.P. (1996) Using the balanced scorecard as a strategic management system. Harvard Business Review, January–February.

Kerssens-van Drongelen, I.C. (1994) Design of a measurement system for R&D: an overview of the issues. Course Book 'Managing the R&D Process' – Part II. Enschede/Milan, pp. 261–276.

Kerssens-van Drongelen, I.C. and Bilderbeek, J. (1996) R&D performance measurement in large and medium-sized Dutch companies. Paper presented at the 3rd International Product Development Conference, Fontainebleau, 15–16 April 1996.

Kumpe, T. and Bolwijn, P.T. (1994) Towards the innovative firm – challenge for R&D management. Research • Technology Management, January–February, pp. 38–44.

Kuwahara, Y. and Takeda, Y. (1990) A managerial approach to research and development cost-effectiveness evaluation. IEEE Transactions on Engineering Management, 37, 2.

Leeuw, A.C.J. de (1982) Organisaties: management, analyse, ontwerp en verandering: een systeemvisie. Assen: Van Gorcum (in Dutch).

Loch, C., Stein, L. and Terwiesch, C. (1996) Measuring development performance in the electronics industry. Journal of Product Innovation Management, 13, 3–20.

Meyer, C. (1994) How the right measures help teams excel. Harvard Business Review, May–June.

Miller, R. (1995) Applying quality practices to R&D. Research • Technology Management, March–April, pp. 47–53.

Moser, M.R. (1985) Measuring performance in R&D settings. Research • Technology Management, 28, 5.

Packer, M.B. (1983) Analyzing productivity in R&D organizations. Research Management, January–February.

Pappas, R.A. and Remer, D.S. (1985) Measuring R&D productivity. Research • Technology Management, May–June.

Patterson, W.C. (1983) Evaluating R&D performance at the Alcoa Laboratories. Research Management, April–May.

Pritchard, R.D. (1990) Measuring and Improving Organizational Productivity: a Practical Guide. New York: Praeger.

Quinn, J.B. (1960) How to evaluate research output. Harvard Business Review, March–April.

Robb, W.L. (1991) How good is our research? Research • Technology Management, March–April.

Roussel, P.A., Saad, K.N. and Erickson, T.J. (1991) Third Generation R&D: Managing the Link to Corporate Strategy. Boston: Harvard Business School Press.

Schumann, P.A. Jr., Ransley, D.L. and Prestwood, D.C.L. (1995) Measuring R&D performance. Research • Technology Management, May–June.

Snellenberg, J.A.N.M. van (1995) Rapportage: ontwerpvariabelen voor de managementrapportage. In: Handboek Management Accounting, D1510, Samson Bedrijfsinformatie, The Netherlands (in Dutch).

Weerd-Nederhof, P.C. de, Kerssens-van Drongelen, I.C. and Verganti, R. (1994) Part I: R&D Management. Coursebook 'Managing the R&D Process', Twente Quality Centre, Enschede/Milan.

Wheelwright, S.C. and Clark, K.B. (1992) Revolutionizing Product Development. New York: Free Press.
