
National standards of customer satisfaction in facilities management

Matthew Tucker and Michael Pitt

School of the Built Environment, Liverpool John Moores University, Liverpool, UK

Facilities, Vol. 27 No. 13/14, 2009, pp. 497-514. © Emerald Group Publishing Limited, 0263-2772. DOI 10.1108/02632770910996342. Received March 2009; accepted June 2009.

    Abstract

Purpose – The purpose of this paper is to provide an overview of the first stage of primary research undertaken to establish generic customer satisfaction benchmarks for the facilities management (FM) industry, and to test whether the benchmarks can be strategically implemented into individual FM provider organisations to further enhance their existing performance measurement processes and subsequent service provision.

Design/methodology/approach – The study proposes the development of a conceptual framework, the Customer Performance Measurement System (CPMS). The CPMS consists of four stages and uses a mixed methodological strategy. This paper provides the findings from the first stage of the CPMS: establishing generic customer satisfaction benchmarks for the FM industry. This is undertaken through two annual customer satisfaction surveys in 2007 and 2008 across the UK and Ireland.

Findings – The paper establishes customer satisfaction benchmarks for individual FM services. The benchmarks identify trends between key variables of criticality, efficiency and service provision, including general variables regarding the performance of the FM team delivering the services and overall satisfaction with all services provided.

Research limitations/implications – The research presented forms part of a wider study testing the concept of the CPMS framework. The paper provides an overview of the wider study, while focusing on the completion of the first stage of primary research.

Originality/value – Information on the application of customer satisfaction indicators within the industry is limited. This paper provides an insight into how customers perceive individual FM services within the UK and Ireland.

Keywords – Benchmarking, Customer satisfaction, Performance management, Facilities, United Kingdom, Ireland

Paper type – Research paper

Introduction

Facilities management (FM) is the integration and alignment of the non-core services, including those relating to premises, required to operate and maintain a business to fully support the core objectives of the organisation (Tucker and Pitt, 2008). Although FM services are non-core in nature, if managed correctly they are of strategic importance in adding value to an organisation's core business delivery.

The standard at which an organisation believes it is delivering FM services can often be distinctly different from the perceptions of the customers/users receiving those services. This is reinforced by the GAPS model created by Parasuraman (2004), which illustrates that service quality fails when there is a gap between customer expectations and perceptions. FM therefore has a critical importance within the workplace, where the performance of service delivery needs to be measured to ensure added value is being achieved. Moreover, Tucker and Smith (2008) express that the perceptions of the user towards the initial input of the service delivery process are equally important, as they essentially determine the strategic and operational objectives of the organisation. The importance of gathering service delivery data from a customer perspective in FM is clear.

In 2004, the British Institute of Facilities Management (BIFM), part funded by the Department of Trade and Industry (DTI) (now the Department for Business, Innovation and Skills (BIS)), issued a report titled Rethinking Facilities Management: Accelerating Change through Best Practice (BIFM, 2004). Promoting customer satisfaction was regarded as one of the top five issues both today and in five to ten years' time for FM. Performance benchmarking was also noted as being important: although not among the top five current issues, it showed the largest change in importance for five to ten years' time across all FM functions. The report is now five years old, and based on these findings the promotion of customer performance measurement and the use of benchmarking within a strategic FM function should now be regarded as essential.

Studies on performance measurement within FM have been commonplace over the past two decades. They have generally tended to focus on how FM organisations can manage performance strategically to achieve added value and more efficient service delivery (e.g. Amaratunga and Baldry, 2000, 2002, 2003; Hinks and McNay, 1999; Kincaid, 1994; Varcoe, 1996a), and on the importance of benchmarking, which has arguably leaned towards more financially orientated factors (e.g. Massheder and Finch, 1998a, b; McDougall and Hinks, 2000; Wauters, 2005; Williams, 2000; Varcoe, 1996b). Specific studies focusing on the involvement of the customer in FM are also evident (e.g. Bandy, 2003; Amaratunga et al., 2004; Futcher et al., 2004; Shaw and Haynes, 2004; Walters, 1999; Loosemore and Hsin, 2001), but they are not as frequent as more generic performance measurement papers. Moreover, specific research on the application of customer satisfaction indicators as a strategic performance measurement process within FM is insufficiently documented.

According to Sarshar and Pitt (2009), FM suppliers need to go beyond their existing customer performance measurement systems, moving from customer satisfaction surveys towards 360-degree client perception management systems. By this, Sarshar and Pitt (2009) mean a review of client requirements drawing on both quantitative and qualitative data, since most surveys are quantitative in nature and consequently miss out on important issues.

This paper forms the first stage of a wider research study investigating how FM provider organisations can strategically use and apply customer performance measurement to innovate and improve FM service provision. This was achieved by developing a conceptual model, known as the Customer Performance Measurement System (CPMS) (Tucker and Pitt, 2009), which incorporates a robust process of quantitative and qualitative methods that are accessible and strategically applicable to FM organisations.

The following objectives were set to investigate operationally the development of the CPMS:

• establish generic industry benchmarks of customer satisfaction for specific FM services in the UK and Ireland;

• strategically apply generic industry benchmarks of customer satisfaction to an FM organisation through the quantitative measurement of internal client-base benchmarks and a qualitative assessment of the organisation's existing processes; and

• assess the extent to which the CPMS model can enhance an FM organisation's existing customer performance measurement processes and contribute to strategically innovating and improving FM service provision.

This paper provides an overview of the wider study, and then focuses on the primary research undertaken to complete the first objective identified above.

The CPMS model

According to Fellows and Liu (2003), modelling is the process of constructing a model: a representation of a designed or actual object, process or system, a representation of reality. To this end, a conceptual framework, termed the CPMS, was developed to effectively test the aims and objectives set out in the previous section.

The CPMS will create a continuous improvement process allowing both customers and FM providers to gain knowledge of, and access to, customer performance data, and will help FM provider organisations prioritise and strategically apply customer performance measurement information. The studies mentioned in the Introduction were influential in developing the CPMS framework. The overarching concept of applying benchmarking techniques, however, came from Camp (1989), who suggested that benchmarking is "the search for industry best practices that lead to superior performance". Hence, by establishing a national customer satisfaction benchmark of FM service delivery, the CPMS will help kick-start the potential to obtain superior performance for FM service providers. However, the fundamental influence in developing the model was the work of Amaratunga and Baldry (2002), who emphasised the importance of performance management: results in performance measurement indicate what happened, not why it happened or what to do about it. Hence the CPMS model uses external and internal benchmarks to explain what has happened, a gap analysis procedure to explain where it has happened, and, as the most important component, a strategic implementation process to explain why, and how, FM providers can go about improving customer performance. The CPMS framework consists of four key stages (Figure 1):

(Figure 1. CPMS)

• Stage 1: National benchmarking – The purpose of this initial stage is to determine an external customer opinion via generic benchmarks within the FM industry. Key customer satisfaction attributes will be agreed across individual FM services and will be targeted at organisations that receive and/or operate FM services in-house (i.e. not FM providers).

• Stage 2: Internal benchmarking – An internal benchmarking exercise will then take place within an individual FM provider organisation. This will require access to the FM provider's client base. The exercise will largely mimic the content of Stage 1, providing a direct comparison to evaluate how individual FM providers are performing against the perceived national standards. Data would also be collected from a qualitative assessment of key FM staff within the organisation, and a sample of their client base, to further understand the current organisational framework for capturing customer satisfaction.

• Stage 3: Gap analysis – The CPMS is now able to analyse the specific gaps/areas for improvement within the service delivery of the FM provider.

• Stage 4: Customer performance strategy – The FM provider organisation is now in the position of the informed client. Strategic decisions can be made to devise new processes to add value and further enhance the organisation's service quality.

Research structure

In order to test the effectiveness of the conceptual CPMS framework described, a mixed methodological approach is taken using both quantitative and qualitative methods. More specifically, a mixed sequential explanatory strategy (Creswell, 2009) is used, where quantitative data and qualitative data are collected in sequential phases. This also gives weight to the collection of quantitative data: initial quantitative data is collected in the first phase, followed by the collection of qualitative data in the second phase to help inform the initial quantitative findings (Creswell, 2009).

Creswell (2009) states that theory use in mixed methods studies may include theory deductively, as in quantitative theory testing and verification, or inductively, as in an emerging qualitative theory or pattern. Essentially this study adopts a deductive approach, where theory is systematically tested in order to unravel new findings and theory (Stages 1-3 of the CPMS). However, the final process is inductive (Stage 4 of the CPMS), where the researcher infers the implications of his or her findings for the theory that prompted the whole exercise (Bryman and Bell, 2007).

Stage 1 is conducted through a large-scale quantitative study in order to establish generic customer satisfaction benchmarks for the FM industry. Stage 2 then involves testing whether the benchmarks can be applied to an instrumental case study, through an individual FM provider organisation. This will be achieved through an internal quantitative study comparing the case study's client-base benchmarks (Stage 2a) to the generic benchmarks established in Stage 1. It will also be achieved through a qualitative study (Stage 2b) involving a series of semi-structured interviews with case study staff and clients to inform the researcher of the existing processes involved for assessing customer satisfaction. Stage 3 then amalgamates the data established from Stages 1 and 2, analysing the key gaps and areas for improvement within the case study. This information will then be disseminated to case study staff for consideration.

Stage 4 then finally infers the findings by providing suggestions for improvement to the existing processes. This will take place by conducting a series of post-CPMS interviews with case study staff to validate the effectiveness and applicability of the framework (Stage 4a), followed by a series of collective case studies, through a selection of external FM providers, to validate externally the effectiveness and applicability of the framework (Stage 4b).

CPMS Stage 1: methodology

Survey design

A focus group was arranged in 2007 to liaise with FM experts to find out exactly what would be the most effective benchmarks to provide within the FM industry. The focus group contained the following members:

• senior management representation from a leading FM provider in the UK;

• a specialist FM consultant with experience in client/provider relationships; and

• representatives from two leading FM journal magazines in the UK.

The focus group produced the following outcomes regarding the establishment of customer satisfaction benchmarks in FM:

• to establish customer feedback for individual FM services;

• to establish how important FM services are to the customer;

• to establish how efficiently the level of service is provided to the customer;

• to establish how each service is provided, and for how long (i.e. in-house or outsourced); and

• to establish general perceptions of overall satisfaction with all services provided, the level of innovation in service provision, and the FM team providing the services within the customer organisation.

The survey sought to obtain information on individual FM services, using ordinal questions to rate each FM service by efficiency, criticality and service provision. The FM services covered were:

• maintenance of the building fabric;

• mechanical and electrical (M&E) engineering;

• waste management;

• maintenance of grounds and gardens/internal planting;

• cleaning;

• catering;

• mailroom;

• security;

• health and safety;

• reception (including switchboard); and

• helpdesk.


The level of criticality was first rated using a four-point scale under the following criteria:

• Not critical – Although the service is important, overall business operations would not be affected by its temporary delivery failure.

• Moderately critical – The service would have some repercussions in the occurrence of failure, but these can quickly be overcome.

• Very critical – Delivery failure of the service would completely affect business operations and cause major problems continuing in its absence.

The level of efficiency was then rated using a five-point scale under the following criteria:

• Excellent – Standards almost always meet and frequently exceed expectations. Performance in key areas is uniformly high.

• Good – Standards almost always meet and often exceed expectations. Performance in key areas is uniformly good and in some cases excellent.

• Acceptable – Standards usually meet expectations, though there is room for improvement in some areas. No failings in key areas.

• Poor – Standards regularly fail to meet expectations and there is significant room for improvement. Some failings in key areas.

• Unacceptable – Standards fail to meet expectations in most areas and improvements are required urgently.

The level of service provision was then recorded using a nominal scale under the following criteria:

• outsourced for less than a year;

• outsourced for 1-3 years;

• outsourced for more than three years; and

• service provided in-house.

Customers also had the opportunity to comment on their satisfaction with more general themes relating to the delivery of FM services within their organisation. In particular, customers provided feedback on the following themes:

• aspects of the FM team, in terms of people involvement and cultural fit, training and competence, and general attitude;

• the level of innovation in service provision; and

• overall satisfaction with the services received.

These questions were rated using the same five-point scale as the efficiency variable.

Survey method

Specific constraints were identified when considering the best survey approach to adopt in order to collect generic customer benchmarking data successfully. These included:


• geographical spread – in order to establish generic benchmarks, a significant geographic spread was required;

• time – consideration had to be given to the time it would take to administer the survey;

• budget – a large-scale national survey has potentially high costs associated with it, so careful consideration of the most cost-effective survey method across a wide geographical spread was needed; and

• target characteristics – targeting business customers within a workplace environment can potentially have adverse effects if the wrong method is chosen.

Through identification of these issues, it was determined that the most cost-effective approach was an online web-based survey.

Target population and procedure

Due to the specific focus on FM, the logical audience through which to capture a wide-ranging viewpoint was the BIFM membership. The BIFM consists of over 12,000 members, and although it is not possible to determine the exact proportion of FM customers within this population, it provides a population diverse and variable enough to give a representative sample of organisations receiving FM services within the UK.

Initially the survey was undertaken in 2007 only, with a further annual survey undertaken in 2008 in order to test whether there were any significant differences in customer perceptions on an annual basis. The surveys targeted organisations that receive and/or operate in-house FM services (i.e. the customer as opposed to the provider). A simple random sample was implemented by conducting two identical online surveys with all BIFM members in the UK and Ireland in 2007 and 2008. This means that each unit of the population had an equal opportunity to complete the survey (Bryman and Bell, 2007).

Response rate

A response rate could not be calculated due to the unavailability of statistics determining the proportion of BIFM members receiving and/or operating in-house services. However, although response rates are useful, they do not show how accurately the survey data determines the views of the total population (i.e. including those who do not participate). This can be assessed using a 95 per cent confidence interval method, from which it is evident that the survey produced accurate findings.

With a sample size of 230 collected in 2007 from a total BIFM member population of 11,414 (BIFM, 2007), a 95 per cent confidence interval of ±6.4 per cent was calculated. This means that if 50 per cent of respondents answer "yes" to a yes/no question, we can be 95 per cent certain that the proportion of the total population answering "yes" (including those who did not participate in the survey) lies between 43.6 per cent and 56.4 per cent. In 2008, a sample size of 222 was collected from a total BIFM member population of 12,678 (BIFM, 2008), giving a 95 per cent confidence interval of ±6.5 per cent. These figures were deemed to provide a fairly accurate indication of customer satisfaction benchmarks[1].
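
For readers who wish to check these margins of error, the figures are reproducible with the standard formula for a proportion at 50 per cent combined with a finite population correction. The short Python sketch below is illustrative only; it assumes this standard formula, as the paper does not state the exact calculation used:

    import math

    def margin_of_error(n, N, p=0.5, z=1.96):
        # 95 per cent margin of error for a proportion, with a finite
        # population correction for sampling from a population of size N.
        se = math.sqrt(p * (1 - p) / n)        # standard error at proportion p
        fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
        return z * se * fpc

    print(round(100 * margin_of_error(230, 11414), 1))  # 2007 survey: 6.4
    print(round(100 * margin_of_error(222, 12678), 1))  # 2008 survey: 6.5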


CPMS Stage 1: analysis

A systematic analysis procedure was undertaken for each question asked within the 2007 and 2008 surveys:

(1) Kolmogorov-Smirnov one-sample tests (to identify whether the data was normally distributed);

(2) Mann-Whitney U tests (to identify any significant differences between the 2007 findings and the 2008 findings);

(3) frequency distribution tests (to identify trends in percentages between individual variables); and

(4) χ² tests (categorical analysis to identify any significant relationships between two individual variables).

Kolmogorov-Smirnov one-sample test

The one-sample Kolmogorov-Smirnov test compares the scores in a given sample to a normally distributed (theoretical) set of scores with the same mean and standard deviation (Field, 2009). Each variable produced a highly significant result (p < 0.005), meaning that the data was not normally distributed, and further inferential analysis tests to understand differences between variables therefore had to be non-parametric.
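
The study ran its tests in SPSS; purely as an illustration, the equivalent check can be sketched in Python with SciPy. The ratings below are synthetic stand-ins for one survey variable, not the study's data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic stand-in for one ordinal survey variable: efficiency ratings
    # coded 1 (excellent) to 5 (unacceptable) for 230 respondents.
    ratings = rng.choice([1, 2, 3, 4, 5], size=230,
                         p=[0.20, 0.39, 0.30, 0.08, 0.03])

    # One-sample Kolmogorov-Smirnov test against a normal distribution with
    # the sample's own mean and standard deviation, as described above.
    d, p = stats.kstest(ratings, "norm", args=(ratings.mean(), ratings.std()))
    print(f"D = {d:.3f}, p = {p:.5f}")  # a tiny p-value indicates non-normal data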

Mann-Whitney U test

The Mann-Whitney U test can be used on non-parametric data to see whether there are any differences between two variables where different participants have been used. This was the most appropriate test to use: although each survey was completed by the same target population (i.e. BIFM members), the actual participants within that target population may differ between the two surveys. In addition, because Mann-Whitney tests are generally better suited to large samples, a Monte Carlo method was adopted to increase the accuracy of the results, by creating a theoretical distribution similar to that found in the sample and then taking several samples (the default in SPSS being 10,000) (Field, 2009).
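
Again as an illustration of the procedure only (SPSS was the tool actually used), a Mann-Whitney U test with a Monte Carlo p-value can be sketched in Python, with synthetic ratings standing in for the two survey years:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for one service's efficiency ratings in each year.
    r2007 = rng.choice([1, 2, 3, 4, 5], size=230, p=[0.20, 0.40, 0.30, 0.07, 0.03])
    r2008 = rng.choice([1, 2, 3, 4, 5], size=222, p=[0.20, 0.41, 0.29, 0.07, 0.03])

    u, p_asymptotic = stats.mannwhitneyu(r2007, r2008, alternative="two-sided")

    # Monte Carlo p-value: shuffle the pooled ratings between the two groups
    # many times (10,000 is the SPSS default mentioned above) and count how
    # often a U statistic at least as extreme arises by chance.
    pooled = np.concatenate([r2007, r2008])
    n1, n2 = len(r2007), len(r2008)
    centre = n1 * n2 / 2  # expected U under the null hypothesis
    extreme = 0
    for _ in range(10_000):
        rng.shuffle(pooled)
        u_perm, _ = stats.mannwhitneyu(pooled[:n1], pooled[n1:],
                                       alternative="two-sided")
        if abs(u_perm - centre) >= abs(u - centre):
            extreme += 1
    print(f"U = {u:.0f}, asymptotic p = {p_asymptotic:.3f}, "
          f"Monte Carlo p = {extreme / 10_000:.3f}")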

The results found that no variable produced a significant result (p < 0.005), suggesting that customers' perception and satisfaction levels when rating characteristics of FM services did not dramatically differ in the space of one year, i.e. customers' perception and satisfaction levels of received FM services were no different in 2008 than in 2007.

Frequency distribution analysis

Because the Mann-Whitney tests identified very little difference between the 2007 and 2008 customer satisfaction benchmarks, this section reports the frequency distribution trends from the most recent survey (2008). It does, however, reinforce the findings by illustrating central tendency trends for both 2007 and 2008.
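
As an aside, frequency distributions of this kind, including the five-point to three-point recoding used for Table II below, are straightforward to produce; the following sketch uses pandas with invented ratings and an illustrative column name:

    import pandas as pd

    # Invented respondent-level ratings; the column name is illustrative.
    df = pd.DataFrame({"cleaning_efficiency": [
        "Excellent", "Good", "Acceptable", "Good", "Poor",
        "Good", "Acceptable", "Excellent", "Good", "Unacceptable"]})

    # Recode the five-point efficiency scale to the three-point scale
    # (good/fair/poor) used when reporting the efficiency tables.
    recode = {"Excellent": "Good", "Good": "Good", "Acceptable": "Fair",
              "Poor": "Poor", "Unacceptable": "Poor"}
    three_point = df["cleaning_efficiency"].map(recode)

    # Frequency distribution as percentages of respondents.
    print(three_point.value_counts(normalize=True).mul(100).round(0))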

Criticality. In terms of customers' perception of the criticality of FM services to overall business operations, the majority of services were perceived to be very critical. This included traditional hard services such as M&E engineering, services with high legislative demands such as health and safety, and front-line services such as the reception and helpdesk services. In summary, Table I shows the frequency distribution trends (based on the most frequent category that customers selected in the 2008 survey):

Table I. Criticality of FM services from customer satisfaction survey 2008

Very critical services: M&E engineering (81 per cent); security (72 per cent); health and safety (66 per cent); helpdesk (52 per cent); mailroom (52 per cent); reception (51 per cent)

Moderately critical services: cleaning (52 per cent); building fabric (49 per cent); waste management (47 per cent); catering (45 per cent)

Not critical services: grounds and gardens (58 per cent)

In addition, Figure 2 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. Here it is clear to see which services are considered more critical than others.

(Figure 2. Criticality of FM services using a mean score from the customer satisfaction surveys in 2007 and 2008)

Efficiency. In terms of customers' satisfaction with the efficiency of service delivery of individual FM services, the majority of customers rated all services as good or excellent. The highest-rated services included health and safety, reception, and the mailroom. The lowest-rated services included harder services such as waste management, building fabric, and M&E engineering. In summary, Table II shows the frequency distribution trends from the 2008 survey (recoded to a three-point scale).

Table II. Efficiency of FM services from customer satisfaction survey 2008 (figures given are percentages)

Service | Good | Fair | Poor
Reception | 81 | 17 | 2
Health and safety | 80 | 16 | 4
Mailroom | 78 | 21 | 1
Catering | 69 | 27 | 4
Helpdesk | 67 | 25 | 8
Security | 66 | 28 | 6
Grounds and gardens | 63 | 32 | 5
Cleaning | 61 | 29 | 10
Waste management | 58 | 36 | 6
M&E engineering | 57 | 34 | 9
Building fabric | 52 | 39 | 9

In addition, Figure 3 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. Here the graph shows that there is not a great difference in the efficiency of each service, as the services are generally rated between 2 (good) and 3 (acceptable).

(Figure 3. Efficiency of FM services using a mean score from the customer satisfaction surveys in 2007 and 2008)

Provision. In terms of customers identifying the provision of services delivered at their location, the majority of customers outsource their services, and have done so for more than three years. However, the majority of customers provided the health and safety service in-house, along with front-line services such as reception and helpdesk, and also the mailroom. In summary, Table III shows the frequency distribution trends (based on the most frequent category that customers selected in the 2008 survey):

Table III. Provision of services from customer satisfaction survey 2008

Outsourced services: waste management (88 per cent); M&E engineering (82 per cent); grounds and gardens (82 per cent); cleaning (81 per cent); catering (80 per cent); building fabric (70 per cent); security (66 per cent)

In-house services: health and safety (88 per cent); mailroom (80 per cent); reception (76 per cent); helpdesk (67 per cent)

In addition, Figure 4 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. Here it is clear to see the difference between services generally outsourced for either 1-3 years or more than three years (mean score between 2 and 3) and services provided in-house (mean score close to 4).

(Figure 4. Provision of FM services using a mean score from the customer satisfaction surveys in 2007 and 2008)

General. Customers also rated general variables of service delivery, including aspects of the FM team delivering the services on-site, satisfaction with the level of innovation in service provision, and overall satisfaction with the delivery of all services on-site. Generally, customers rated aspects of the FM team positively, with the vast majority rating all aspects as good or excellent. A summary is provided below (based on the percentage rating good or excellent in the 2008 survey):

• people involvement and cultural fit (81 per cent);

• training and competence (67 per cent); and

• general attitude (78 per cent).


However, customers had a more varied opinion regarding the level of innovation in service provision: 40 per cent rated it good or excellent, just over a third (36 per cent) rated it acceptable, and a sizeable proportion (24 per cent) rated it poor or unacceptable.

Finally, customers were generally satisfied overall with all services provided at their location, with 59 per cent rating good or excellent, around a third (35 per cent) rating acceptable, and only 6 per cent rating poor or unacceptable.

Chi-square tests

Although the central tendency and frequency distribution analysis identified potential relationships within the data, inferential analysis was undertaken in the form of χ² tests (or Fisher's exact tests, where more than 20 per cent of cells produced expected counts of less than 5) in order to establish any relationships between variables that were statistically significant. In addition, if a significant relationship was found, the strength of association was then tested using Cramér's V test[2]; a short worked sketch of this procedure follows the hypotheses below.

Criticality/efficiency/provision variables. The following hypotheses were explored for each service in both the 2007 and 2008 surveys:


• the level of criticality people associate with a service relates to how efficient people perceive that service to be;

• the level of criticality people associate with a service relates to the way the service is provided; and

• the level of efficiency people perceive a service to have relates to the way the service is provided.

Only the catering and mailroom services showed a significant association between the efficiency and criticality of services in both the 2007 and 2008 surveys (Table IV).

Table IV. FM services with a significant association between criticality and efficiency variables

Catering – 2007: significance 0.005 (Fisher's exact), strength slight (Cramér's V = 0.199); 2008: significance 0.0013, strength moderate (0.202). General cross-tabulation trend: customers perceive the criticality of the service as moderately critical to business operations and the efficiency of service delivery as good.

Mailroom – 2007: significance 0.000, strength moderate (0.230); 2008: significance 0.001, strength slight (0.194). General cross-tabulation trend: customers perceive the criticality of the service as very critical to business operations and the efficiency of service delivery as good.

The maintenance of the grounds and gardens was the only service to produce a significant association between the criticality and provision of services in both the 2007 and 2008 surveys. This service was generally rated least critical and predominantly provided in-house (Table V).

Table V. FM services with a significant association between criticality and provision variables

Grounds and gardens – 2007: significance 0.003 (χ²), strength moderate (Cramér's V = 0.250); 2008: significance 0.011, strength moderate (0.215).

There were no significant associations found between the efficiency and provision of services in either the 2007 or the 2008 survey.

FM team variables. The following hypothesis was explored for the level of criticality/efficiency/provision of each service in relation to aspects of the FM team delivering the services on-site, in both the 2007 and 2008 surveys:


• The level of criticality/efficiency/provision associated with a given service relates to the level of perception associated with the different aspects of the FM team.

The only variable to show significant associations with aspects of the FM team in both the 2007 and 2008 surveys was the efficiency of service delivery.

In terms of people involvement and cultural fit of the FM team, there were significant associations with all but four services, the exceptions being the waste management, grounds and gardens, cleaning and catering services (Table VI). Generally, customers who rated the efficiency of service delivery as good also rated the people involvement and cultural fit of the FM team as good.

Table VI. FM services with a significant association between efficiency and people involvement of FM team variables (significance from Fisher's exact test; strength from Cramér's V)

Service | 2007: significance, strength | 2008: significance, strength
Building fabric | 0.000, moderate (0.284) | 0.000, moderate (0.319)
M&E engineering | 0.001, moderate (0.201) | 0.000, moderate (0.356)
Security | 0.003, moderate (0.224) | 0.003, slight (0.194)
Health and safety | 0.001, moderate (0.227) | 0.000, moderate (0.279)
Reception | 0.000, moderate (0.290) | 0.000, moderate (0.262)
Mailroom | 0.002, moderate (0.216) | 0.000, moderate (0.336)
Helpdesk | 0.009, moderate (0.215) | 0.000, moderate (0.259)

In terms of training and competence of the FM team, again there were significant associations with the majority of services, the exceptions being the grounds and gardens, mailroom, cleaning and catering services (Table VII). Generally, customers who rated the efficiency of service delivery as good also rated the training and competence of the FM team as good.

Table VII. FM services with a significant association between efficiency and training and competence of FM team variables (significance from Fisher's exact test; strength from Cramér's V)

Service | 2007: significance, strength | 2008: significance, strength
Building fabric | 0.000, moderate (0.359) | 0.000, moderate (0.232)
M&E engineering | 0.001, moderate (0.217) | 0.000, moderate (0.249)
Waste management | 0.019, slight (0.155) | 0.000, moderate (0.238)
Security | 0.006, moderate (0.245) | 0.000, moderate (0.265)
Health and safety | 0.008, slight (0.172) | 0.000, moderate (0.261)
Reception | 0.004, moderate (0.249) | 0.000, moderate (0.239)
Helpdesk | 0.002, moderate (0.210) | 0.008, moderate (0.227)

In terms of general attitude of the FM team, there were significant associations with six services: building fabric, M&E engineering, health and safety, reception, mailroom, and helpdesk (Table VIII). Generally, customers who rated the efficiency of service delivery as good also rated the general attitude of the FM team as good.

Table VIII. FM services with a significant association between efficiency and general attitude of FM team variables (significance from Fisher's exact test; strength from Cramér's V)

Service | 2007: significance, strength | 2008: significance, strength
Building fabric | 0.000, moderate (0.298) | 0.000, moderate (0.252)
M&E engineering | 0.001, moderate (0.237) | 0.000, moderate (0.285)
Health and safety | 0.003, moderate (0.215) | 0.010, slight (0.166)
Reception | 0.005, slight (0.198) | 0.003, moderate (0.200)
Mailroom | 0.001, moderate (0.227) | 0.000, moderate (0.302)
Helpdesk | 0.001, moderate (0.269) | 0.036, slight (0.159)


Overall variables. The following hypotheses were explored for the level of criticality/efficiency/provision of each service in relation to general variables, in both the 2007 and 2008 surveys:

• The level of criticality/efficiency/provision associated with a given service relates to the level of perception associated with the amount of innovation in the provision of the services delivered.

• The level of overall satisfaction with all services relates to the level of perception associated with the amount of innovation in the provision of the services delivered.

The only variable to show significant associations with the level of innovation in service provision in both the 2007 and 2008 surveys was again the efficiency of service delivery, for which all but two services showed a significant association (Table IX). The only two services not to were the catering and grounds and gardens services, which were significant only in the 2008 survey.

Table IX. FM services with a significant association between efficiency and level of innovation in service provision (strength from Cramér's V; * denotes Fisher's exact test)

Service | 2007: significance, strength | 2008: significance, strength
Building fabric | 0.000, moderate (0.246) | 0.000, moderate (0.284)
M&E engineering | 0.000, moderate (0.214) | 0.000, moderate (0.290)
Waste management | 0.000, moderate (0.250) | 0.000, moderate (0.218)
Cleaning | 0.029, slight (0.156) | 0.012, slight (0.172)
Security | 0.003*, slight (0.186) | 0.000*, moderate (0.219)
Health and safety | 0.000*, moderate (0.208) | 0.000*, moderate (0.246)
Reception | 0.014*, slight (0.158) | 0.033*, slight (0.154)
Mailroom | 0.004*, slight (0.185) | 0.000*, moderate (0.228)
Helpdesk | 0.001*, moderate (0.225) | 0.001*, moderate (0.224)

Note: * Fisher's exact test

Finally, the overall satisfaction with all services was tested for any significant relationship with the level of innovation in service provision. Both the 2007 and 2008 surveys produced highly significant results (0.000 in both). The strength of association in both surveys was also very strong (0.470 in 2007 and 0.490 in 2008) using Cramér's V test (Table X). Through cross-tabulating the two variables, it can generally be assumed that where customers' overall satisfaction with all services is good, their satisfaction with the level of innovation in service provision is also good.

Table X. Overall satisfaction with all FM services: significant association with the level of innovation in service provision (Fisher's exact test; strength from Cramér's V)

2007: significance 0.000, strength strong (0.470) | 2008: significance 0.000, strength strong (0.490)

CPMS Stage 1: discussion and implications

This paper has provided an overview of the first stage of primary research undertaken within a wider study focusing on how customer performance measurement information can be strategically implemented into FM provider organisations. This first stage of primary research aimed to establish generic customer satisfaction benchmarks of service delivery for the FM industry.

Following the Mann-Whitney tests, it was established that there was no significant difference between the 2007 and 2008 survey findings. This suggests that if generic customer satisfaction benchmarks are to be permanently implemented across the UK and Ireland, the survey would not need to be conducted every year, but could instead be updated every two or three years.

The frequency distribution analysis identified some interesting findings in the relationships between the criticality, efficiency, and provision of certain services. Generally, it could be assumed that the health and safety, mailroom, reception, and helpdesk services were all considered very critical, highly efficient, and predominantly provided in-house. One could contend that because health and safety services face strict legislative demands, a specialist provider might be better suited to deliver the service. However, this does not appear to be the case across the UK and Ireland, with customers considering the service to be highly efficient when delivered in-house. The reception and helpdesk findings are also interesting: these are generally considered to be front-line services and are therefore naturally perceived to be very critical, yet customers again seem to entrust them to in-house provision, with highly efficient results. Conversely, the M&E engineering, waste management and building fabric services were considered very or moderately critical, but were rated lowest in terms of efficiency, and were predominantly outsourced. These services are often considered more traditional hard functions of FM compared with those rated more positively above, but are perceived distinctly more negatively in terms of service delivery.

Through the χ² and Fisher's exact tests, however, only a small proportion of services (mailroom, catering, grounds and gardens) actually produced significant relationships between the level of criticality, efficiency, and provision of services. This suggests that although the trends identified through the frequency distribution analysis are important to consider, they do not necessarily prove that this is the norm for all customers across the country.


The χ² and Fisher's exact tests did, however, produce the most frequent significant relationships between the efficiency of services and the aspects of the FM team delivering the services. This suggests that customers are conscious of the actual people delivering the services on-site, implying that the choice of the right personnel for particular services by FM providers is essential in achieving higher levels of customer satisfaction regarding the efficiency of service delivery.

Equally, there were many significant relationships evident between the efficiency of services and the level of innovation in service provision. This was also notable from customers' overall satisfaction with all services provided on-site, which produced a strong association with the level of innovation in service provision. This suggests that customers are conscious of how innovative FM providers are in delivering certain services, implying that there is an expectation for FM providers to be proactive in their approach to service delivery, which can have a direct impact on customer perception of the efficiency of service delivery.

In summary, the customer satisfaction benchmarks established within this research have produced some significant findings that could potentially further enhance FM provider organisations' existing performance measurement strategies. The authors are now working closely with a case study FM provider organisation to test the further stages of the conceptual CPMS framework, which they hope to publish in the near future.

Notes

1. The confidence intervals would actually be more accurate than the figures calculated above, as the BIFM member population figures include FM provider organisations, which were not eligible to take part in the survey. This would mean that the total population targeted was smaller, giving a stronger confidence interval. However, because it was not possible to identify the split between providers and customers, this cannot be proved, so the above confidence interval figures are quoted.

2. The χ² test findings within this section report only on significant relationships found in both 2007 and 2008. However, other significant relationships between variables were evident for individual years.

References

Amaratunga, D. and Baldry, D. (2000), "Assessment of facilities management performance in higher education properties", Facilities, Vol. 18 Nos 7/8, pp. 293-301.

Amaratunga, D. and Baldry, D. (2002), "Moving from performance measurement to performance management", Facilities, Vol. 20 Nos 5/6, pp. 217-23.

Amaratunga, D. and Baldry, D. (2003), "A conceptual framework to measure facilities management performance", Property Management, Vol. 21 No. 2, pp. 171-89.

Amaratunga, D., Baldry, D. and Haigh, R. (2004), "Customer-related facilities management process and its measurement: understanding the needs of the customer", Proceedings of the CIB W70 Facilities Management & Maintenance Hong Kong 2004 Symposium, Hong Kong Polytechnic University, Hong Kong.

Bandy, N.M. (2003), "Setting service standards: a structured approach to delivering outstanding customer service for the facility manager", Journal of Facilities Management, Vol. 1 No. 4, pp. 322-36.

BIFM (2004), "Rethinking facilities management: accelerating change through best practice", available at: www.bifm.org.uk (accessed 3 August 2009).

BIFM (2007), "BIFM Annual Review 2007", available at: www.bifm.org.uk (accessed 4 March 2008).

BIFM (2008), "BIFM Annual Review 2008", available at: www.bifm.org.uk (accessed 1 February 2009).

Bryman, A. and Bell, E. (2007), Business Research Methods, 2nd ed., Oxford University Press, Oxford.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, New York, NY.

Creswell, J.W. (2009), Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed., Sage Publications, London.

Fellows, R. and Liu, A. (2003), Research Methods for Construction, Blackwell, Oxford.

Field, A. (2009), Discovering Statistics Using SPSS, 3rd ed., Sage Publications, London.

Futcher, K., So, M. and Wong, B. (2004), "Stakeholder value in facilities management", Proceedings of the CIB W70 Facilities Management and Maintenance Hong Kong 2004 Symposium, Hong Kong Polytechnic University, Hong Kong.

Hinks, J. and McNay, P. (1999), "The creation of a management-by-variance tool for facilities management performance assessment", Facilities, Vol. 17 Nos 1/2, pp. 31-53.

Kincaid, D.G. (1994), "Measuring performance in facilities management", Facilities, Vol. 12 No. 6, pp. 17-20.

Loosemore, M. and Hsin, Y.Y. (2001), "Customer-focused benchmarking for facilities management", Facilities, Vol. 19 Nos 13/14, pp. 464-75.

McDougall, G. and Hinks, J. (2000), "Identifying priority issues in facilities management benchmarking", Facilities, Vol. 18 Nos 10-12, pp. 427-34.

Massheder, K. and Finch, E. (1998a), "Benchmarking methodologies applied to UK facilities management", Facilities, Vol. 16 Nos 3/4, pp. 99-106.

Massheder, K. and Finch, E. (1998b), "Benchmarking metrics used in UK facilities management", Facilities, Vol. 16 Nos 5/6, pp. 123-7.

Parasuraman, A. (2004), "Assessing and improving service performance for maximum impact: insights from a two-decade-long research journey", Performance Measurement & Metrics, Vol. 5 No. 2, pp. 45-52.

Sarshar, M. and Pitt, M. (2009), "Adding value to clients: learning from four case studies", Facilities, Vol. 27 Nos 9/10, pp. 399-412.

Shaw, D. and Haynes, B. (2004), "An evaluation of customer perception of FM service delivery", Facilities, Vol. 22 Nos 7/8, pp. 170-7.

Tucker, M. and Pitt, M. (2008), "Performance measurement in facilities management: driving innovation?", Property Management, Vol. 26 No. 4, pp. 241-54.

Tucker, M. and Pitt, M. (2009), "Customer performance measurement in facilities management: a strategic approach", International Journal of Productivity and Performance Management, Vol. 58 Nos 5/6, pp. 407-22.

Tucker, M. and Smith, A. (2008), "User perceptions in workplace productivity and strategic FM delivery", Facilities, Vol. 26 Nos 5/6, pp. 196-212.

Varcoe, B.J. (1996a), "Facilities performance measurement", Facilities, Vol. 14 Nos 10/11, pp. 46-51.

Varcoe, B.J. (1996b), "Business-driven facilities benchmarking", Facilities, Vol. 14 Nos 3/4, pp. 42-8.

Walters, M. (1999), "Performance measurement systems – a study of customer satisfaction", Facilities, Vol. 17 Nos 3/4, pp. 97-104.

Wauters, B. (2005), "The added value of facilities management: benchmarking work processes", Facilities, Vol. 23 Nos 3/4, pp. 142-51.

Williams, B. (2000), An Introduction to Benchmarking Facilities and Justifying the Investment in Facilities, Building Economics Bureau, London.

Corresponding author

Matthew Tucker can be contacted at: [email protected]
