Measurement and confidence in OD

How do we know what works? Ilmo van der Löwe, Chief Science Officer, iOpener Institute for People and Performance


Posted on 19-Jun-2015


TRANSCRIPT

Page 1: Measurement and confidence in OD

How do we know what works?

Ilmo van der Löwe, CHIEF SCIENCE OFFICER

iOpener Institute for People and Performance

Page 2: Measurement and confidence in OD

Lord Kelvin, PHYSICIST

"To measure is to know."

Page 3: Measurement and confidence in OD

• OD interventions must be measured:
  – Did the intervention have an impact?
  – Were the effects positive or negative?
  – What were the success factors?

Page 4: Measurement and confidence in OD

A simple example

• Question:
  – Does training managers create more productive workers?

• Intervention:
  – Train 10 managers to be better leaders.

• Measurement plan:
  – Measure the productivity of the managers' direct reports before and after the training (a total of 400 people).

Page 5: Measurement and confidence in OD

Plan #1

[Timeline: pre-intervention measurement → training → training put into practice → post-intervention measurement]

• If direct reports are more productive at work in the end, does it mean that the training worked?

Page 6: Measurement and confidence in OD

Not necessarily...

• Increased scores could be caused by:
  – The economy getting better, the local team winning a championship, seasonal weather differences, a friendly new hire...

• Decreased scores could be caused by:
  – Fear of layoffs, the coffee machine being broken, serious injuries to team members, a recession...

Page 7: Measurement and confidence in OD

Change over time

• Outside factors other than the training can change scores.

• A mere change in scores is not evidence of efficacy.
  – Measurement must take outside factors into account.

Page 8: Measurement and confidence in OD

Control group

• Revised plan:
  – Include a control group that is similar to the experimental group in all aspects except the training.
    • Ideally the same location, same work hours, same work, same tenure, same seniority, etc.

• Rationale:
  – If outside factors influence scores, their effect should be the same for both groups, because both groups experienced them.
  – If the training influences iPPQ scores, the scores of the control group should differ from those of the experimental group.
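The deck doesn't spell out the arithmetic, but a minimal sketch of this logic is a difference-in-differences comparison: compute each group's pre-to-post change and subtract the control group's change from the experimental group's. Everything below (scores, variable names, Python as the language) is illustrative, not from the deck.

# Difference-in-differences: outside factors common to both groups cancel out
# because they show up in both groups' pre-to-post changes. Illustrative data only.

experimental_pre  = [5.1, 4.8, 5.3, 5.0]   # direct reports' scores before the training
experimental_post = [5.9, 5.6, 6.1, 5.8]   # the same people after the training
control_pre       = [5.0, 5.2, 4.9, 5.1]   # no training, same measurement period
control_post      = [5.2, 5.4, 5.1, 5.3]

def mean(xs):
    return sum(xs) / len(xs)

change_experimental = mean(experimental_post) - mean(experimental_pre)
change_control      = mean(control_post) - mean(control_pre)

# The control group's change estimates the effect of outside factors alone;
# what is left over estimates the effect of the training itself.
training_effect = change_experimental - change_control
print(f"experimental change:       {change_experimental:+.2f}")
print(f"control change:            {change_control:+.2f}")
print(f"estimated training effect: {training_effect:+.2f}")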

Page 9: Measurement and confidence in OD

Plan #2

[Timeline: pre-intervention measurement → post-intervention measurement.
 EXPERIMENTAL GROUP: training → training put into practice.
 CONTROL GROUP: business as usual.]

• If the group scores differ, how can we tell whether the difference is significant?

Page 10: Measurement and confidence in OD

Statistical significance

• Statistical significance expresses the confidence you can have in your results.

• Statistics put that confidence into precise terms:
  – "There's only one chance in a thousand that this could have happened by coincidence." (p < 0.001)
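As a sketch of how a statement like that could be produced, a two-sample t-test on the post-intervention scores of the two groups yields exactly this kind of p-value. SciPy, the choice of test, and the scores below are assumptions for illustration; the deck does not prescribe a particular tool.

# Two-sample t-test: is the gap between group means larger than chance alone
# would plausibly produce? (Made-up scores, not from the deck.)
from scipy import stats

experimental = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3]
control      = [5.2, 5.5, 5.1, 5.4, 5.3, 5.0, 5.6, 5.2]

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. p < 0.001) says: a difference this large would be very
# unlikely if the training had no effect at all.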

Page 11: Measurement and confidence in OD

confidence = (signal / noise) × sample size

  – Signal: how big a difference will the training create between groups?
  – Noise: what other factors can create differences between groups?
  – Sample size: how many people are in each group?

• To maximize confidence:
  – Increase intervention quality (boost the signal).
  – Minimize other differences between groups (reduce the noise).
  – Increase the sample size.
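The same levers show up numerically: a two-sample t-statistic grows with the signal (the mean difference between groups), shrinks with the noise (the spread within each group), and grows with the sample size. A minimal sketch with assumed values:

import math

def t_statistic(signal, noise, n_per_group):
    """Approximate two-sample t: (mean difference) / (within-group SD * sqrt(2 / n))."""
    return signal / (noise * math.sqrt(2 / n_per_group))

# Assumed values only: the same signal and noise at different sample sizes.
print(t_statistic(signal=0.5, noise=1.0, n_per_group=10))    # ~1.1 -> weak evidence
print(t_statistic(signal=0.5, noise=1.0, n_per_group=100))   # ~3.5 -> strong evidence
print(t_statistic(signal=0.5, noise=2.0, n_per_group=100))   # ~1.8 -> doubling the noise halves the t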

Page 12: Measurement and confidence in OD

confidence = (signal / noise) × sample size

• Is the sample size 10 or 400?
  – 10 managers get trained.
  – 400 employees get surveyed.

Page 13: Measurement and confidence in OD

Although the employees' productivity at work is being measured, it is the efficacy of the training intervention that matters.

Page 14: Measurement and confidence in OD

Each manager is different and will put the training into practice differently.

Page 15: Measurement and confidence in OD

Most managers will do an okay job.

Page 16: Measurement and confidence in OD

Some will be exceptionally good.

Page 17: Measurement and confidence in OD

Some will be exceptionally bad.

Page 18: Measurement and confidence in OD

Each manager creates variability in the data that cannot be controlled.

Page 19: Measurement and confidence in OD

Thus, the effective sample size is 10, although 400 people are measured.
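The deck gives no formula for this, but a quick simulation makes the point: when most of the variability comes from how each manager applies the training, the precision of the study is governed by the 10 managers rather than the 400 employees. All the standard deviations and group sizes below are assumptions for illustration.

import random

random.seed(1)

def simulated_study_mean(n_managers=10, reports_per_manager=40,
                         manager_sd=1.0, employee_sd=0.5):
    """Mean score of all direct reports when each manager adds their own shift to their team."""
    scores = []
    for _ in range(n_managers):
        manager_effect = random.gauss(0, manager_sd)   # how well this manager applies the training
        for _ in range(reports_per_manager):
            scores.append(manager_effect + random.gauss(0, employee_sd))
    return sum(scores) / len(scores)

# Spread of the overall mean across many repeated studies.
means = [simulated_study_mean() for _ in range(1000)]
grand = sum(means) / len(means)
sd_of_mean = (sum((m - grand) ** 2 for m in means) / len(means)) ** 0.5
print(f"SD of the study mean: {sd_of_mean:.3f}")
print(f"Expected if only the 10 managers mattered: {1.0 / 10 ** 0.5:.3f}")
# The two numbers are close: measuring 400 people buys little extra precision
# because the manager-level variability dominates.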

Page 20: Measurement and confidence in OD

Small samples are more likely to be biased. (In a sample of three, you may have two bad ones and a mediocre one, for example.)

Page 21: Measurement and confidence in OD

(Or the other way around.)

Page 22: Measurement and confidence in OD

• Results should not change depending on who happens to respond.

• The sample should be large enough to reduce unintended biases.

Page 23: Measurement and confidence in OD

Plan #3

[Timeline: pre-intervention measurement → post-intervention measurement.
 EXPERIMENTAL GROUP: training → training put into practice.
 CONTROL GROUP: business as usual.]

• To reduce the impact of manager variability, recruit a larger number of managers into both the experimental and control groups (a power-analysis sketch follows this page).
  – With large numbers of managers, the extremes cancel each other out.
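As a sketch of how "a larger number" could be planned rather than guessed, a standard power calculation estimates how many managers per group are needed to detect an assumed effect, treating each manager as the unit of analysis. The effect size, alpha, and power below are conventional assumptions, not figures from the deck; statsmodels is assumed to be available.

# How many managers per group to detect a medium effect (Cohen's d = 0.5)
# with 80% power at alpha = 0.05? All inputs are assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative='two-sided')
print(f"managers needed per group: {n_per_group:.0f}")   # roughly 64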

Page 24: Measurement and confidence in OD

Getting close, but...

• Even statistically significant differences between the experimental and control groups do not automatically demonstrate the efficacy of the training.
  – Placebo effect: belief in efficacy creates changes.
  – Hawthorne effect: the special situation and treatment that come with being measured create changes.

Page 25: Measurement and confidence in OD

Plan #4

[Timeline: pre-intervention measurement → post-intervention measurement.
 EXPERIMENTAL GROUP: training → training put into practice.
 PLACEBO GROUP: fake training → "training" put into practice.
 CONTROL GROUP: business as usual.]

Page 26: Measurement and confidence in OD

Three-way comparisons

– Experimental group
  • If significantly different from the control group, outside factors did not account for the effect.
  • If significantly different from the placebo group, the effects were unique to the training, not just to different treatment.

– Control group
  • If not different from the experimental group, the training had no effect at all.

– Placebo group
  • If not different from the experimental group, the training had no real effect beyond the special treatment given to the group.
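A minimal sketch of how these pairwise checks could be run, again assuming SciPy and made-up scores; a real analysis would also correct for making multiple comparisons.

# Pairwise comparisons between the three groups (illustrative data only).
from scipy import stats

experimental = [6.2, 5.9, 6.4, 6.1, 6.0, 6.3]
placebo      = [5.6, 5.8, 5.5, 5.7, 5.9, 5.6]
control      = [5.3, 5.1, 5.4, 5.2, 5.5, 5.0]

for name, group in [("control", control), ("placebo", placebo)]:
    t, p = stats.ttest_ind(experimental, group)
    print(f"experimental vs {name}: t = {t:.2f}, p = {p:.4f}")

# Interpretation mirrors the slide:
#   experimental vs control differs -> outside factors alone do not explain the change
#   experimental vs placebo differs -> the effect is specific to the training itself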

Page 27: Measurement and confidence in OD

Measurement in OD practice

• Measurement is important.

• Measurement must be carefully planned and executed.

• The bare minimum is a proper control group and a large enough sample size.