Transcript
Page 1: Productivity in the engineering disciplines

Productivity in the Engineering Disciplines

A technique for balancing staff and workload, engineering performance analysis can help organizations give their engineers more time to do engineering.

Keith A. Bolte

Rising costs, strengthening foreign competition, and increasing demands for profit improvement are forcing American businesses to examine a wide variety of new management techniques to enable them to produce more products and services at a higher level of quality for less cost. Implicit in this shift toward “scientific management” is the understanding that the real productivity gains of the future will be in the white-collar environment as opposed to the traditional blue-collar efforts. Further, the definition of white collar has been expanded from the rather narrow focus on the administrative/clerical realm to include such functions as data processing, engineering, sales, marketing, and materials. Essentially, white-collar employees are those employees who do not directly manufacture a product. Under this definition, over 65 percent of the employees in American business today are working in white-collar areas.

Productivity measurement in engineering

Within both the manufacturing and service industries, a significant percentage of the total staff are engaged in such engineering disciplines as mechanical, quality, test, design, process, electrical, software/hardware, and line maintenance. Historically, the most common form of productivity measurement for an engineering organization has been performance against schedule (PAS). But while PAS is important, it should not be considered as the primary measurement of the effectiveness and efficiency of an engineering organization, for these reasons:

1. PAS does not measure productivity. It simply tracks the completion of the milestones within a project against a schedule, and the schedule is only as good as the time estimates used for developing the tasks and milestones within the project.


Page 2: Productivity in the engineering disciplines

A high-payback area might be an engineering group spending 65 percent of its time on administrative functions.


2. PAS does not measure quality. That a project is completed on time does not ensure that the end product of that project has any value.

3. PAS cannot reflect the efficiency or inefficiency of the engineering staff involved in the project. What is needed is a method of developing performance indicators that will allow management to assess accurately the efficiency and the effectiveness of engineering and other technically oriented operations.

Engineering performance analysis

During this author’s four years as manager of productivity for Intel Corporation, he and his staff developed a technique for assessing the overall efficiency and effectiveness of an engineering organization and for establishing ongoing measures of performance. Since that time, this technique has been used successfully in other industries. The approach is called engineering performance analysis (EPA).

Essentially, EPA is a management-control system that ensures that jobs are performed by the appropriate skill levels in the most efficient manner possible, given the constraints of the working environment, and that staffing levels are balanced against the amount of work to be done. Once implemented, this control system can:

• Highlight problem areas in manpower utilization;
• Evaluate alternate methods and systems;
• Balance staff and workload;
• Measure group performance;
• Forecast personnel requirements; and
• Load and schedule work.

The six phases of EPA

The six phases of the EPA ensure its continuing value. In combination, these phases set EPA apart from traditional “work measurement” programs, which tend to deteriorate with time as measured jobs change. The six phases are:

1. Position content analysis;
2. Review of functions and methods;
3. Elimination/devolution of tasks;
4. Development of reasonable expectancy goals;
5. Establishment of task/function relationships; and
6. Implementation of continuous performance reporting.

Position content analysis (PCA)

Position content analysis is accomplished through a series of interviews with the management and supervision of each group within the organization. The overall mission(s) of each group is broken down into the basic work functions as each relates to specific outputs of the group, and the percentage of time expended on each function is estimated. Each function is then broken down into its respective tasks. This stratification allows the separation of tasks associated with one type of output from similarly named tasks associated with other outputs. Stratification also provides a rational basis for combining tasks within or across output lines when time values have been determined.
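As an illustration only, the sketch below shows one way a PCA breakdown might be recorded: a group's mission, its functions with estimated time percentages, and the tasks under each function. The group name and the 32 percent administrative share echo the case example later in the article; everything else (the task names, the 25 percent figure, and the data layout itself) is hypothetical, since the article does not prescribe any particular format.

# A hypothetical sketch of a position content analysis (PCA) breakdown.
# The article does not prescribe a data format; the task names and the 0.25
# figure below are invented for illustration.
pca = {
    "group": "Test Engineering",
    "mission": "Develop and conduct reliability tests for electrical components",
    "functions": [
        {
            "name": "Performing administrative work",
            "pct_of_time": 0.32,   # estimated share of the group's total time
            "tasks": ["Write status reports", "Attend meetings", "Photocopying"],
        },
        {
            "name": "Performing tests",
            "pct_of_time": 0.25,
            "tasks": ["Set up test fixtures", "Run reliability tests", "Log results"],
        },
        # ...the remaining functions, each tied to a specific output of the group
    ],
}

# The function-level percentages should account for (at most) all of the group's time.
assert sum(f["pct_of_time"] for f in pca["functions"]) <= 1.0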

Review of functions and methods

Once the functions for each group have been identified and the percentage of total time expended on each function has been established, it is possible to identify “high-payback” areas for productivity improvements. An example of a high-payback area might be a group that is spending 65 percent of its total time on administrative functions and only 35 percent on engineering-related activities. In such a group, every 1 percent of time that can be eliminated from administrative functions, or transferred to technical or clerical people, buys more engineering resources at no additional cost.


Page 3: Productivity in the engineering disciplines


To put this concept in perspective, imagine an engineering group of thirty-five full-time engineers spending 55 percent of its time on administrative functions and 45 percent on engineering-related functions. If, through productivity improvements, the administrative load could be reduced by 10 percent, in effect, the labor hours equivalent to 3.5 engineers would be released for work on engineering-related activities.
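The arithmetic behind this example is simple enough to state directly. A minimal sketch follows, treating the article's 10 percent reduction as 10 percentage points of total group time (the reading that yields the 3.5-engineer figure); the function name is mine.

def engineers_released(headcount: float, admin_reduction: float) -> float:
    """Labor-hour equivalent of engineers freed when the administrative share
    of the group's time drops by `admin_reduction` (a fraction of total time)."""
    return headcount * admin_reduction

# The article's example: 35 engineers, administrative load cut by 10 percent of total time.
print(engineers_released(35, 0.10))  # 3.5 engineer-equivalents released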

Once several high-payback areas have been identified, small “blue-ribbon” task forces are formed and charged with the responsibility for correcting the major problems identified. The task forces are given immediate training on whatever technique might be the most effective in solving the problems, for example, work simplification, value analysis, or cause-and-effect analysis.

Each blue-ribbon task force is then given a charter and a time frame in which to solve a problem. The solutions are divided into two categories: short term and long term. The short-term category includes corrective actions that can be accomplished within ninety days. The long-term category generally includes those solutions that will require assistance from other departments, such as data processing.

Elimination/devolution of tasks

While the various task forces are engaged in solving the major operational problems identified, in our capacity as consultant we work with the managers and supervisors to evaluate the tasks being performed. The two questions continually asked during this process are: (1) Should this task be done at all? and (2) If it must be done, is it being done by the right skill level? The overall objectives of this phase are to eliminate unnecessary work and to devolve nontechnical work from engineers to technicians or clerical staff.

Thus, there are two major objectives for phases 1, 2, and 3 of the EPA: first, identify and preliminarily quantify the actual work performed by a group; and, second, strip out the “bureaucracy” and get the organization back to doing what it was chartered to do.

Phases 4, 5, and 6, discussed next, are designed to prevent the problems of unnecessary or inappropriate work from recurring in the future and to establish reliable measures of performance.

Development of reasonable expectancy goals

Traditional work-measurement programs will not be effective in most analytical disciplines, such as engineering. Staff in these operations do not do the same thing hundreds of times each hour. Their work involves asking questions, solving problems, doing research, and making decisions. Therefore, attempting to set work standards in this type of group is like trying to teach a pig how to sing. It wastes time and annoys the pig. However, it is reasonable to set expectancy goals for how long, on the average, it should take to perform a task or function. Without this type of measurement it is virtually impossible to determine whether or not available human resources are being utilized in the most efficient manner.

Expectancy goals for each task are determined by the manager or supervisor, with input from his or her staff. When establishing these time goals, the manager or supervisor should review each task in light of three criteria. What is the least amount of time it could take to do this task? What is the maximum amount of time it could take to do this task? What is the realistic average time it takes to do this task? Industrial engineers find this methodology most distasteful because of its lack of precision. However, one must bear in mind that finite measurements are not the objective. The objective is the development of reliable performance measures that apply reasonable standards, rather than standards carried out to four decimal places.
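As a concrete illustration, the sketch below records the three answers for a single task and treats the realistic average as the expectancy goal, with the minimum and maximum serving only as a sanity check. That treatment is an assumption consistent with the article's intent; the task name and the minimum/maximum values are invented, and the article leaves the final judgment to the manager or supervisor.

from dataclasses import dataclass

@dataclass
class ExpectancyGoal:
    """One task's time-expectancy estimate, built from the three review
    questions: least time, maximum time, and realistic average time."""
    task: str
    minimum_minutes: float
    maximum_minutes: float
    realistic_minutes: float

    def goal(self) -> float:
        # Use the realistic average as the expectancy goal; the minimum and
        # maximum simply bound the estimate as a sanity check.
        assert self.minimum_minutes <= self.realistic_minutes <= self.maximum_minutes
        return self.realistic_minutes

# Hypothetical example (the 23-minute average echoes the board-call figure in
# Table 2; the minimum and maximum are invented):
board_call = ExpectancyGoal("Board call", minimum_minutes=10.0,
                            maximum_minutes=60.0, realistic_minutes=23.0)
print(board_call.goal())  # 23.0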

Having the managers and supervisors determine the time values overcomes the biggest obstacle to any measurement program, ownership, since they are working with their own numbers.

Establishment of task/function relationships

Once time-expectancy goals have been established for each task, it is possible to group the tasks by function and establish overall expectancy goals at the function level.


Page 4: Productivity in the engineering disciplines

Time-expectancy goals have limited value unless management is periodically informed of the group’s performance against these goals.

Combining tasks simplifies the reporting and moves the measurement activity from individuals to groups.

Implementation of continuous performance reporting

Expectancy goals, in and of themselves, have limited value for a company unless management is periodically informed of the group’s performance against these goals. EPA goals are always tied into some type of reporting system. The reporting period and method of expressing performance are determined by the length of the work cycle and the specific needs of individual managers.

A case example of EPA

Now that the reader has a basic familiarity with EPA, the following real-world case example of its implementation should help in understanding how EPA works. First, here is a description of the organization:

Type of organization: test engineering.
Mission: develop and conduct reliability tests for electrical components.
Staffing: 1 manager, 7 supervisors, 46 engineers, and 2 clerks.

This organization requested the assistance of a productivity consultant to help it in several areas. First, although the organization was able to stay current on tests that needed to be performed on stable products, it was continually operating in a backlog situation in the development of tests to support new products. Second, the organization could not stay current on the release of engineering change orders (ECOs). Third, the organization felt that it spent too much time troubleshooting failed tests.

During phases 1, 2, and 3 we were able to determine that the organization performed the following six major functions:

1. Designing tests;
2. Performing tests;
3. Writing ECOs;
4. Providing document control;
5. Troubleshooting; and
6. Performing administrative work.

The top graph in Figure 1 reflects the time allocated to each function. Thirty-two percent, or fifteen engineers out of a total of forty-six, were involved in administrative activities, such as writing reports, attending meetings, photocopying, etc. None of these administrative activities were in direct support of engineering activities; that is, meetings, reports, or other types of administrative work done in support of a project were not included under administrative time. The high-payback opportunity for this organization was now quite clear: reduce the percentage of engineering time spent on administrative activities and reallocate that time to designing and performing tests.

To accomplish this objective, four task forces, of four individuals each, were created to review four specific administrative areas. The areas selected, and some of the issues addressed, were:

1. Meetings
   • What type of meetings are held each month, and what is the frequency of these meetings?
   • Who attends?
   • Who should or should not attend?
   • Are there agendas?
   • What are the tangible benefits of attending?
2. Reports
   • How many and what type of reports are written or otherwise generated each month?
   • What problem or requirement initiated the generation of each report? Does the problem or requirement still exist?
   • Does anybody read the report? If so, who?
   • If the report is necessary, must it be done by engineering? Could a technical or clerical person generate the report?
3. Writing ECOs
   • What type of action generates an engineering change order? Should all of these actions generate ECOs?
   • How many of the ECOs really must be written by an engineer?
4. Troubleshooting
   • What are the type and frequency of the calls?
   • How many of the problems could be solved on the line with better training and documentation?
   • Could a technical person handle most of the calls?


Page 5: Productivity in the engineering disciplines

Figure 1
Time Allocated to Each Function in Test Engineering, before and after EPA
[Two pie charts, Before EPA and After EPA, showing the share of the group’s time spent on each function: Administrative, Perform tests, Design tests, Write ECOs, Document control, and Troubleshooting. The Administrative slice is labeled 32 percent before EPA and 16 percent after.]


Page 6: Productivity in the engineering disciplines

Over four weeks, each of the four blue-ribbon task forces met to make individual assignments and review progress.


Over a four-week period, each of the four blue-ribbon task forces met for an average of two hours a week to make individual assignments and review progress. At the end of the four weeks, a staff meeting was called and each task force presented its findings and recommendations to the total staff. The overall achievements of the groups are shown in the bottom graph in Figure 1. These results were achieved through a combination of the following changes:

• Specific criteria were developed for determining who would attend which meetings;
• Specific tasks that could be performed by properly trained technicians and clerks were identified and quantified, and five technicians and two additional clerks were hired. The tasks were identified as follows:
   - 31 percent of the current reports being generated were eliminated; 41 percent of the remaining reports could be generated automatically by a technical person, and 18 percent could be generated routinely by a clerk;
   - 53 percent of the ECOs could be written by technicians;
   - troublecalls could be reduced by 42 percent through better test documentation, and 67 percent of the remaining troublecalls could be handled by technicians;
• Four additional fully cost-benefited personal computers were purchased; and
• A telephone answering machine was purchased, and “no-hassle” times were implemented. Except in emergencies, engineers were not to be disturbed between 8:00 and 9:00 A.M. This allowed them time to catch up on paperwork, to plan the day, and then to work on the plan.

Even with the additional staff and equipment, the end result was highly cost effective, as shown by the computations in Table 1.

Additional savings

Excluded from this cost-benefit analysis are the real savings to the corporation made possible by the additional engineering time spent on the design and conduct of the reliability tests for the corporation’s products. Another result that cannot be measured is the improved morale of the engineering staff. During phase 4, the supervisors and several lead engineers developed time-expectancy goals for each of the ninety-one tasks that had been identified as being performed by this organization. The tasks were then rolled up against their respective functions. As an example, the roll-up for the overall troubleshooting function is shown in Table 2. (To preserve confidentiality, the actual task names and data are not shown.)

The data in Table 2 is used to perform ratioing, the purpose of which is to simplify reporting requirements by establishing mathematical relationships between tasks and functions. First, all of the tasks associated with the overall function of performing troublecalls were identified. It was then determined how many times each of these tasks occurred on a weekly basis and how long it took, on the average, to complete one call in each task category, which establishes the expectancy times for the tasks.

Table 1
Annualized Labor Savings

1. Engineering time gained: 29%
2. Number of equivalent engineers saved: 29% × 46 = 13
3. Gross dollars saved (in annual salary + fringe): 13 × $43,850 = $570,050
4. Net dollars saved:
      $570,050  gross savings
   -  $144,600  additional staff
   -  $ 20,000  personal computers
   -  $    168  answering machine
   =  $405,282  annualized labor savings
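Table 1's arithmetic can be reproduced in a few lines. The sketch below uses only figures from the article; the variable names are mine, and rounding 29 percent of 46 engineers down to 13 matches the table.

# Reproducing Table 1's annualized labor savings computation.
engineers               = 46
engineering_time_gained = 0.29      # 29% of engineering time recovered
salary_plus_fringe      = 43_850    # annual salary + fringe per engineer

equivalent_engineers = int(engineering_time_gained * engineers)   # 13, as the table rounds down
gross_savings = equivalent_engineers * salary_plus_fringe         # $570,050

additional_staff   = 144_600   # five technicians and two additional clerks
personal_computers = 20_000
answering_machine  = 168

net_savings = gross_savings - additional_staff - personal_computers - answering_machine
print(gross_savings, net_savings)  # 570050 405282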


Page 7: Productivity in the engineering disciplines

Table 2
Relationship between the Troubleshooting Function and Its Component Tasks

Tasks                      Weekly Volume   Expectancy Time/Call (minutes)   % of Whole Call Volume   Adjusted Minutes
Line calls                      21                    42                            18%                    7.5
Board calls                     68                    23                            58%                   13.3
Integrated circuit calls        12                    67                            10%                    6.8
Refit failure calls             17                    81                            14%                   11.7
Troublecalls (total)           118                                                 100%                   39.3


The overall function “Troublecalls” in Table 2 is nothing more than the aggregate value of all tasks together. Therefore, the total number of troublecalls performed each week averaged 118. By dividing this number into the individual task volumes, one can determine what percentage each individual task is of the whole; e.g., the 21 line calls are 18 percent of all troublecalls. Multiplying each task’s percentage by its respective expectancy time, one can determine an adjusted time based on its percentage share. Adding up the values for all of the tasks produces one time value that can be applied to all troublecalls regardless of their type: in this case, 39.3 minutes.

Table 3
Test Engineering Group: Monthly Management Report
Productivity Development Systems Staff Requirements Planning System
Group: Test Engineering

Management information           Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec   Annual average
1 Actual hours worked:          7104  7360  7728  6768  7360  7344  6560  6752  7680  7536  6944  7456   7216
2 Earned hours:                 5784  6268  5712  5600  5784  6268  5712  5600  5252  5120  6184  5308   5716
3 Effective utilization:         81%   85%   74%   83%   79%   85%   87%   83%   68%   68%   89%   71%    79%
4 Payroll headcount:              46    46    46    46    46    46    46    46    46    46    46    46     46
5 Available headcount:          44.4    46  48.3  42.3    46  45.9    41  42.2    48  47.1  43.4  46.6   45.1
6 Required headcount:           36.2  39.2  35.7  35.0  36.2  39.2  35.7  35.0  32.8  32.0  38.7  33.2   35.7
7 Over/under staffed:            8.3   6.8  12.6   7.3   9.9   6.7   5.3   7.2  15.2  15.1   4.8  13.4    9.4
8 Staff requirements forecast:    37    37    36    37    37    37    35    33    34    36

Report annotations:
1 Actual hours worked as opposed to hours paid
2 Amount of time expended on measured/approved work
3 Percent of time expended on productive work
4 Number of staff actually on the payroll
5 Number of staff actually available to work
6 Amount of staff required according to the staffing algorithm
7 Variance between staff available and staff required
8 Self-explanatory


Page 8: Productivity in the engineering disciplines

By using a ratioing technique, the organization no longer had to monitor ninety-one tasks to determine its effectiveness.

This value makes it unnecessary to track troublecalls by type, since it is based on the average time per call for each task type and the percentage of total troublecalls represented by each task type. A troublecall is a troublecall.
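The ratioing computation is simply a volume-weighted average of the per-task expectancy times. A short sketch follows, using Table 2's own figures; the variable names are mine.

# Ratioing: collapse per-task expectancy times into one average time per troublecall.
# Weekly volumes and expectancy times are taken from Table 2.
tasks = {
    "Line calls":               (21, 42),   # (weekly volume, minutes per call)
    "Board calls":              (68, 23),
    "Integrated circuit calls": (12, 67),
    "Refit failure calls":      (17, 81),
}

total_calls = sum(volume for volume, _ in tasks.values())   # 118 troublecalls per week

for name, (volume, minutes) in tasks.items():
    share = volume / total_calls            # this task's percentage of all troublecalls
    print(f"{name}: {share:.0%} of calls, {share * minutes:.1f} adjusted minutes")

average_minutes = sum(volume * minutes for volume, minutes in tasks.values()) / total_calls
print(f"{average_minutes:.1f} minutes per troublecall")
# Prints 39.2; Table 2's 39.3 comes from summing the already-rounded per-task values.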

By using this type of ratioing technique, the organization no longer must monitor ninety-one tasks to determine its effectiveness. That can now be determined by monitoring the six major functions performed. After phases 4 and 5 were completed, the organization was able to determine that the engineers were effectively utilized for only 81 percent of the time. Management’s goal was 95 percent. Therefore, there was an additional 14 percent of engineering time available to commit to projects. In other words, this organization was overstaffed by slightly more than six engineers. Or, stated more appropriately, this organization could increase its output by 14 percent without adding any staff. And the cost avoidance savings? Another $280,640. This amount, coupled with the savings identified in phases 1, 2, and 3, raises the total annual savings to nearly $700,000 in an organization that employs only sixty-three people.
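The overstaffing and cost-avoidance figures quoted above follow from the same numbers. A sketch of that arithmetic is below; the article's $280,640 corresponds to rounding the surplus to 6.4 engineer-equivalents.

# Cost avoidance implied by the gap between actual and target utilization.
engineers          = 46
actual_utilization = 0.81      # effective utilization after phases 4 and 5
target_utilization = 0.95      # management's goal
salary_plus_fringe = 43_850    # same per-engineer figure used in Table 1

surplus_engineers = round((target_utilization - actual_utilization) * engineers, 1)  # 6.4
cost_avoidance    = surplus_engineers * salary_plus_fringe                           # 280,640

total_annual_savings = 405_282 + cost_avoidance   # adding Table 1's net savings
print(surplus_engineers, cost_avoidance, total_annual_savings)
# 6.4 280640.0 685922.0  ("nearly $700,000")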

The implementation of phase 6, ongoing performance reporting, can take many forms, depending on how creative one gets in phases 4 and 5. Table 3 is an actual monthly management summary for this organization. Note that this type of reporting system not only tracks how well staff are utilized on an ongoing basis but also builds in forecasting capability and the ability to keep staffing levels in line with work volumes.
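The derived rows of Table 3 appear to follow directly from the hours data. The sketch below reproduces January's figures on the assumption of a standard month of 160 working hours per person; the article does not state the constant its staffing algorithm uses, so that value is an inference from the published numbers.

# Reproducing Table 3's derived rows for January.
# STANDARD_MONTHLY_HOURS = 160 is an assumption; it matches the published figures
# (e.g., 7,104 actual hours / 160 = 44.4 available heads) but is not stated in the article.
STANDARD_MONTHLY_HOURS = 160

actual_hours = 7104    # row 1: actual hours worked
earned_hours = 5784    # row 2: time expended on measured/approved work

effective_utilization = earned_hours / actual_hours               # row 3
available_headcount   = actual_hours / STANDARD_MONTHLY_HOURS     # row 5
required_headcount    = earned_hours / STANDARD_MONTHLY_HOURS     # row 6
over_under_staffed    = available_headcount - required_headcount  # row 7

print(f"utilization {effective_utilization:.1%}, available {available_headcount:.1f}, "
      f"required {required_headcount:.2f}, over/under {over_under_staffed:.2f}")
# utilization 81.4%, available 44.4, required 36.15, over/under 8.25
# (the report shows 81%, 44.4, 36.2, and 8.3)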

Conclusion

The type of project described here usually takes from fourteen to eighteen weeks to complete, and another three to six months for full implementation. However, the payback is very high and very quick. The techniques and sequencing discussed here can be used effectively in any organization employing highly analytical staff.

Keith Bolte is senior vice-president of the Productivity Development Systems Group with URS Corporation in Seattle, Washington. The PDS operation specializes in the implementation of “knowledge worker” productivity-improvement programs. Before joining URS, Mr. Bolte was corporate manager of administrative productivity with Intel Corporation of Santa Clara, California.


