
WHITE PAPER

Ashok Goudar

December 2008

SOA MEASUREMENTS AND REPORTING

Table of Contents

Introduction
Service Realization Efficiency Metrics
Business Service Creation Efficiency Metrics
Business Service Change Efficiency Metrics
Shared Service Enablement Efficiency Metrics
Shared Service Modification Efficiency Metrics
Business Service Reusability Savings Metrics
Shared Service Reusability Savings Metrics
Service Bus Measurements
Transformation Metrics
Routing Metrics
Aggregation (Choreographed Services) Metrics
File Based ESB Mediation Metrics
SMTP ESB Mediation Metrics
Data Service ESB Mediation Metrics
Adapter Service ESB Mediation Metrics
ESB Fault Density Metrics
Average ESB Service Time
Rules Driven Service Mediation
Service Bus Reliability Metrics
Service Performance Metrics
Service Response Metric
Service Load (Throughput) Metrics for Messages
Service Load (Throughput) Metrics for Files
Service Load (Throughput) Metrics for Documents
Quality of Service Metrics
Service Faults Metrics
Business Service Faults Metrics
Service Security Violations Metrics
Service Interoperability Fault Metrics
Service Availability Metrics
Service End Point Availability Metrics
Service End Point Downtime Metrics
Service Bus Availability Metrics
Service Registry Availability (For Dynamic Binding) Metrics
Service Usability Metrics
Client Wise Service Consumption
Protocol Wise Service Consumption
Average Long Lived Service Life Metrics
Service Delivery Metrics
Service Availability Delivery Metrics
Service Performance Delivery Metrics
Service Governance Policy Change Management
External Service Usage Metrics
Service Alert Metrics and Reports
Service Repository Management
Service Group (Classification) Density/Distribution
Service Change Frequency Metrics
Service Change Impact Density
Service Change Spread
Service Policy Density
Service Version Density
Dynamic Service Discovery Index
Service Maturity (CMM Levels Mapping) and Readiness Measurements
Conclusion Summary


1. Introduction

Quantitative service management plays a critical part in the adoption, effective realization and governance of service oriented architecture (SOA) based systems. The key parameters and indicators are required to be collected and measured at various stages of the service life cycle. In this context, this paper evaluates the various metrics that need to be collected to measure the effectiveness of SOA based systems within an organization. The metrics indicated in this document will help to devise and adopt appropriate correction and improvement measures, so as to have an effective, efficient and optimized SOA based infrastructure within the organization. The metrics given in the document are indicative in nature; organizations and business units can devise additional metrics to be collected, based on their specific measurement requirements. These service measurements, and the subsequent service improvements and optimization, can help to improve SOA maturity within the enterprise.


2. Service Realization Efficiency Metrics

These metrics provide visibility on the efficiency of realization of SOA based services and systems.

Business Service Creation Efficiency Metrics

As a part of the SOA realization, it becomes very important to understand the efficiency with which a business service is created and put to use for consumption. This metric measures the efficiency with which new business services are created and aggregated using the existing shared services.

Business Service Creation Efficiency = (Number of new business services created * Cost per business service creation¹) / Total Service Realization Cost Incurred²

Where
¹ Average cost incurred to create a business service and deploy it for consumption (pre-collected cost figures)
² Total cost associated with the entire service realization programme
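
As an illustration of how such an efficiency ratio can be computed from pre-collected cost figures, here is a minimal Python sketch; the function name and the sample numbers are hypothetical and not part of the original paper.

```python
def business_service_creation_efficiency(services_created: int,
                                         avg_cost_per_service: float,
                                         total_realization_cost: float) -> float:
    """Ratio of the cost attributable to newly created business services
    to the total cost of the service realization programme."""
    if total_realization_cost <= 0:
        raise ValueError("total realization cost must be positive")
    return (services_created * avg_cost_per_service) / total_realization_cost

# Hypothetical figures: 12 new business services at an average creation cost
# of 8,000 each, against a 120,000 service realization programme.
print(business_service_creation_efficiency(12, 8_000.0, 120_000.0))  # 0.8
```

A ratio close to 1.0 indicates that most of the realization spend went directly into creating consumable business services.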

Business Service Change Efficiency Metrics

Business change management usually triggers changes to the deployed business services. In order to meet changing business requirements, business service interfaces and service implementations need to be modified. In this context, this metric measures the efficiency with which business service change management is handled.

Business Service Modification Efficiency = (Number of business services modified * Cost per business service change¹) / Total Service Change Management Cost Incurred² within the measurement window³

Where
¹ Average cost incurred to change a business service interface (implementation) and deploy it for consumption (pre-collected cost figures)
² Total cost associated with the entire service change management programme (pre-collected data)
³ Measurement window needs to be set as applicable (in terms of months, weeks or quarters)

Shared Service Enablement Efficiency Metrics

Shared services enable service reusability within the organization. As part of the SOA implementation programme, shared services are required to be modeled, implemented and made available in the shared service repository. In this context, this metric measures the efficiency with which shared services are modeled and implemented as part of the SOA realization.

Shared Service Enablement Efficiency = (Number of new shared services created * Cost per shared service creation¹) / Total Shared Service Realization Cost Incurred²

Where
¹ Average cost incurred to create a shared service and deploy it for consumption (pre-collected cost figures)
² Total cost associated with the entire shared service realization programme

Shared Service Modification Efficiency Metrics

Shared services which have been consumed by multiple consumers might undergo changes in terms of interfaces or underlying implementation. This metric measures the efficiency with which shared service change management is handled within the organization or the business unit.

Shared Service Modification Efficiency = (Number of shared service change implementations * Cost per shared service change¹) / Total Shared Service Change Management Cost Incurred² within the measurement window³

Where
¹ Average cost incurred to implement a shared service change and deploy it for consumption (pre-collected cost figures)
² Total cost associated with the entire shared service change management programme
³ Measurement window needs to be set as applicable (in terms of months, weeks or quarters)

Business Service Reusability Savings Metrics

SOA helps in achieving and improving reusability in the enterprise. The challenge, however, is to measure the degree of reusability and the cost savings resulting from it. This metric quantifies the savings achieved due to the deployment of business services.

Business Service Reusability Savings Efficiency (%) = ((Cost per business service¹ * Number of consumers²) - Cost per business service) / (Cost per business service * Number of consumers) * 100

Where
¹ Average cost incurred to create a standard business service
² Number of service consumers

Shared Service Reusability Savings Metrics

Shared services are the key services which help in achieving and improving reusability in the enterprise. This metric quantifies the savings achieved due to the deployment of reusable shared services.

Shared Service Reusability Savings Efficiency (%) = ((Cost per shared service¹ * Number of consumers²) - Cost per shared service) / (Cost per shared service * Number of consumers) * 100

Where
¹ Average cost incurred to create a standard shared service
² Number of service consumers
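
The reusability savings formula compares the one-time cost of the reusable service with the cost of every consumer building an equivalent service of its own. A minimal sketch, with hypothetical cost and consumer figures:

```python
def reusability_savings_pct(cost_per_service: float, num_consumers: int) -> float:
    """Savings (%) from reuse, relative to each consumer building its own copy."""
    if cost_per_service <= 0 or num_consumers <= 0:
        raise ValueError("cost and consumer count must be positive")
    cost_without_reuse = cost_per_service * num_consumers  # one build per consumer
    savings = cost_without_reuse - cost_per_service        # actual cost is one build
    return savings / cost_without_reuse * 100

# A shared service costing 10,000 with 8 consumers saves 87.5% compared with
# each consumer building an equivalent service.
print(reusability_savings_pct(10_000, 8))
```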

3. Service Bus Measurements

The metrics in this section measure various parameters which affect the performance of the service bus deployed as part of the SOA based systems.

A typical service bus implementation would support the major integration patterns, including the ones mentioned below:

Synchronous service mediation
Asynchronous service mediation
Publish-Subscribe

The measurements are carried out as applied to the above mentioned patterns.

Transformation Metrics

These metrics measure the transformation capacity (throughput) of the service bus while mediating and transforming messages between consumers and producers. The measurements address the message transformation load (throughput) [XML, XSLT, EDI, TDS etc.] between the consumer request and the provider service end points. The measurements are further organized according to the message formats involved.

Transformation Throughput (Load) Metrics = Number of transformation transactions completed¹ / Operation² / Service³ [Within the measurement window⁴] [Per message format⁵]

Where
¹ Transformations completed between the consumer request message format and the provider service message format
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)
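
As a sketch of how such throughput counters might be aggregated from mediation records, assuming a hypothetical log layout (the field names below are not defined in the paper):

```python
from collections import Counter
from typing import Iterable, Mapping

def transformation_throughput(records: Iterable[Mapping[str, str]]) -> Counter:
    """Completed transformation transactions per (service, operation, message
    format) within a single measurement window."""
    counts: Counter = Counter()
    for rec in records:                       # one record per mediated transaction
        if rec.get("status") == "COMPLETED":  # count only completed transformations
            counts[(rec["service"], rec["operation"], rec["format"])] += 1
    return counts

# Hypothetical log extract for one measurement window.
log = [
    {"service": "OrderService", "operation": "create", "format": "XML", "status": "COMPLETED"},
    {"service": "OrderService", "operation": "create", "format": "EDI", "status": "COMPLETED"},
    {"service": "OrderService", "operation": "create", "format": "XML", "status": "FAILED"},
]
print(transformation_throughput(log))
```

The same counting pattern applies to the routing, aggregation and mediation throughput metrics that follow.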

Routing Metrics

These metrics measure the routing throughput (number of messages routed) between consumer requests and the appropriate provider endpoints (content based routing between the consumer request and the provider end point). Content based routing is an important functionality of a service bus; it helps to achieve service virtualization and continued service availability. The consumer need not know the location of the provider end points: the ESB ensures correct routing of the consumer requests to the provider end points, which could be multiple in nature, providing the same functionality and available on multiple protocols. Routing is also part of the quality of service of the deployed services, meaning that, based on the pre-agreed SLAs, consumer requests could be routed to different service endpoints providing the same functionality.

Routing Metrics = Number of content based routing transactions completed¹ / Operation² / Service³ [Within the measurement window⁴] [Per message format⁵]

Where
¹ Routing transactions completed between the consumer requests and the appropriate provider service endpoints
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Aggregation (Choreographed Services) Metrics

This measures the aggregation load (transactions) handled by the service bus. Aggregated services usually consume constituent services internally, which could be utility services, technical services, security services, application services or other business services. The composite business services are created by choreographing the underlying service endpoints. The measures in this section provide visibility on the capacity of the service bus to handle aggregated services. The data collection for these measurements may not be straightforward; however, governance tools can provide the details needed to collect this data.

Aggregation Metrics = Number of message aggregations completed¹ / Operation² / Service³ [Within the measurement window⁴] [Per message format⁵]

Where
¹ Number of message aggregations completed between consumer request and provider response
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

File Based ESB Mediation Metrics

This measures the mediation throughput for file based protocol interactions between the requestors and the providers of the services.

FTP: FTP based service brokering measurements. In this case the service bus acts as the broker between the consumers and the service providers using FTP protocols.

File Based ESB Mediation = Number of FTP based transactions completed¹ / Operation² / Service³ [Within the measurement window⁴]

Where
¹ Number of file transfer transactions completed with the FTP protocol
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

SMTP ESB Mediation Metrics

This measures the mediation throughput for SMTP/e-mail based protocol interactions between the requestors and the providers of the services.

E-mail Based ESB Mediation = Number of SMTP based transactions completed¹ / Operation² / Service³ [Within the measurement window⁴]

Where
¹ Number of transactions completed with SMTP/POP3 protocols
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Data Service ESB Mediation Metrics

This measures the mediation throughput for data/SQL based transactions between the requestors and the providers of the services. This is the case where data is exposed as data services and the service endpoints interact through SQL. Consumers initiate the requests (through SQL), and the response is provided back by the corresponding data services. Enterprise wide data sources are encapsulated as data services.

Data Service Mediation Metrics = Number of SQL data transactions completed¹ / Operation² / Service³ [Within the measurement window⁴]

Where
¹ Number of SQL transactions mediated
² Operation wise measurement
³ Service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Adapter Service ESB Mediation Metrics

Adapter services (application and technology adapters exposed as services) integrate with the service bus. Many enterprise applications are service enabled and integrated with the service bus using application and technology adapters. These metrics measure the ESB mediation between the consumers and the adapter hosted service providers.

Application Adapter⁰ Mediation Metrics = Number of Application Adapter transactions completed¹ / Operation² / Service³ [Within the measurement window⁴]

Where
⁰ Application adapters such as Siebel, Oracle Apps, SAP etc.
¹ Number of application adapter service mediation transactions handled
² Operation wise measurement
³ Application adapter service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Technology Adapter⁰ Mediation Metrics = Number of Technology Adapter transactions completed¹ / Operation² / Service³ [Within the measurement window⁴]

Where
⁰ Technology adapters such as JDBC, XML and File adapters
¹ Number of technology adapter service mediation transactions handled
² Operation wise measurement
³ Technology adapter service wise measurement gathering
⁴ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

ESB Fault Density Metrics

This measure helps to ascertain the density of the service faults occurring at the service bus. The measurements in this section provide visibility on the effectiveness of the service bus in terms of quality of service. Higher densities indicate that more service faults are occurring, perhaps due to faulty configuration of the service bus.

ESB Fault Density = Number of service faults generated in the ESB¹ / Total number of service endpoints deployed at the ESB [Within the measurement window²]

Where
¹ Number of service faults or errors generated in the ESB
² Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Average ESB Service Time

This measure gives an indication of the average service processing time (mediation, transformation, routing, logging etc.) spent in the service bus.

Average Service Bus Time = Total Service Transaction Time in the ESB¹ / Total number of Service Transactions mediated through the ESB² [Within the measurement window³]

Where
¹ Total time during which the ESB was active in handling service mediation tasks
² Total cumulative number of transactions carried through the ESB during the measurement window
³ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)
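
A minimal sketch of the two ESB level ratios defined above, using assumed figures for one measurement window:

```python
def esb_fault_density(faults: int, deployed_endpoints: int) -> float:
    """Service faults generated in the ESB per deployed service endpoint."""
    return faults / deployed_endpoints

def average_esb_service_time(total_mediation_time_ms: float,
                             mediated_transactions: int) -> float:
    """Average time (in milliseconds) the ESB spends per mediated transaction."""
    return total_mediation_time_ms / mediated_transactions

# Hypothetical window: 42 faults across 120 endpoints, and 9,000,000 ms of
# mediation work spread over 250,000 transactions.
print(esb_fault_density(42, 120))                    # 0.35 faults per endpoint
print(average_esb_service_time(9_000_000, 250_000))  # 36.0 ms per transaction
```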

Rules Driven Service Mediation

In the case of rules driven service mediation, the actual mediation is governed by configurable mediation rules. Such mediation helps to change the mediation paths at runtime, without redeploying code or configurations. A standard ESB will provide the features needed to support rules driven service mediation. This metric measures the effectiveness of rule based service mediation and gives the rule based service density in comparison with hardwired service mediations (where the consumer and provider link is hardwired).

Rules Based Service Mediation Density = Number of successful end to end mediations¹ / Total number of service mediation transactions across the ESB² [Within the measurement window³]

Where
¹ The number of successful service transactions
² The total number of service transactions carried through the ESB (both successful and failed)
³ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

Service Bus Reliability Metrics

This measures the reliability (effectiveness) of the service bus while mediating service transactions. It is a good indicator of how the service bus is performing.

Service Bus Reliability Measure = Total service mediations transacted with mediation rule tables¹ / Total number of service transactions mediated through the ESB² [Within the measurement window³]

Where
¹ Total number of rule based service mediations conducted
² Total cumulative number of transactions carried through the ESB during the measurement window
³ Measurement window needs to be set as per the measuring frequency (in terms of days, weeks or quarters)

4. Service Performance Metrics

The metrics in this section provide details on the performance related measurements of the services. Performance of the service bus is of paramount importance in meeting the pre-agreed quality of service (QoS) with the consumers.

Service Response Metric

This metric measures the response times of the services. Response time becomes even more important in synchronous service invocations. In this particular case, service response time is measured in terms of Highest Response Time, Lowest Response Time and Average Response Time. These measurements give a good indication of the spread of the service response times. They are taken operation wise and service wise across all the deployed services.

Highest Response Time = Highest response time of a service within a time span (per day) / per operation / per service

Lowest Response Time = Lowest response time of a service within a time span (per day) / per operation / per service

Average Response Time = Summation (Service response times¹) / Number of Service Transactions

Where
¹ Unit of measure for time (milliseconds, seconds or minutes, as required)
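
A small sketch of how the three response time figures might be derived from per transaction samples; the record layout and sample values are assumptions made for illustration:

```python
from collections import defaultdict
from statistics import mean

def response_time_summary(samples):
    """Highest, lowest and average response time per (service, operation) over
    one measurement window. `samples` is an iterable of
    (service, operation, response_time_ms) tuples."""
    grouped = defaultdict(list)
    for service, operation, rt_ms in samples:
        grouped[(service, operation)].append(rt_ms)
    return {
        key: {"highest": max(times), "lowest": min(times), "average": mean(times)}
        for key, times in grouped.items()
    }

# Hypothetical samples collected for one day.
samples = [("OrderService", "create", 120),
           ("OrderService", "create", 340),
           ("OrderService", "create", 95)]
print(response_time_summary(samples))
```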

Service Load (Throughput) Metrics for Messages

Throughput measurement gives a good indication of the overall load that the service bus can handle without degrading performance. These metrics measure the service message handling throughput of the service bus.

Average Load = [Average] Number of messages processed / per day / per operation / per service

Peak Load = [Peak Business Load] Total number of messages processed within peak business hours / per day / per operation / per service

Off Peak Load = [Off Peak Non Business Hour Load] Total number of messages processed within non business hours / per day / per operation / per service

Service Load (Throughput) Metrics for Files

This measures the throughput levels of the service bus while handling file based asynchronous service transactions. The measurement is gathered on an Average, Peak and Off Peak basis, as indicated below.

Average Load = [Average] Number of files processed / per day / per operation / per service

Peak Load = [Peak Business Load] Total number of files processed within peak business hours / per day / per operation / per service

Off Peak Load = [Off Peak Non Business Hour Load] Total number of files processed within non business hours / per day / per operation / per service

Service Load (Throughput) Metrics for Documents

This measures the throughput levels of the service bus while handling document based synchronous and asynchronous service transactions. The measurements are gathered on an Average, Peak and Off Peak basis, as indicated below. Documents in the service scenario can impact the service performance (documents embedded as attachments in the service calls).

Average Load = [Average] Number of documents transacted / per day / per operation / per service

Peak Load = [Peak Business Load] Total number of documents processed within peak business hours / per day / per operation / per service

Off Peak Load = [Off Peak Non Business Hour Load] Total number of documents processed within non business hours / per day / per operation / per service
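
As an illustration of the peak/off peak split, a sketch that buckets per message timestamps into business hour and non business hour loads; the 09:00-17:00 business window is an assumption, not something the paper prescribes:

```python
from datetime import datetime

BUSINESS_HOURS = range(9, 17)  # assumed 09:00-17:00 peak business window

def daily_load_split(timestamps):
    """Peak, off peak and total daily message counts for one
    service/operation stream on one day."""
    peak = sum(1 for ts in timestamps if ts.hour in BUSINESS_HOURS)
    off_peak = len(timestamps) - peak
    return {"peak_load": peak, "off_peak_load": off_peak,
            "daily_load": peak + off_peak}

# Hypothetical timestamps for one operation on one day.
stamps = [datetime(2008, 12, 1, 10, 5),
          datetime(2008, 12, 1, 14, 30),
          datetime(2008, 12, 1, 22, 45)]
print(daily_load_split(stamps))  # {'peak_load': 2, 'off_peak_load': 1, 'daily_load': 3}
```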


5. Quality of Service Metrics

Quality of service (QoS) represents the quality of the services that are deployed and being used by various consumers. QoS could differ for different consumers, based on the context of the usage. It is usually further grouped in terms of transport level quality, message level quality and service level quality. The metrics below measure various key quality of service parameters of the services.

Service Faults Metrics

This metric measures the service faults generated by the deployed services. Service fault metrics give a good insight into the quality of service of the deployed services. The faults measured on an Average, Peak hour and Off Peak hour basis are calculated as:

Average Service Faults = Average number of service faults generated / per operation / per service

Peak Hour Service Faults = Peak hour service faults / per operation / per service

Off Peak Hour Service Faults = Off peak hour service faults / per operation / per service

Business Service Faults Metrics

This measures the business service exceptions generated by the deployed services. Here the focus is on measuring the exceptions at the business service level, not at the individual constituent services. It gives a good indication of how the deployed business services are performing as a whole. In this particular case, the business service faults are measured on an Average, Peak hour and Off Peak hour basis.

Average Business Service Faults = Average number of business service faults generated / per business operation / per business service

Peak Hour Business Service Faults = Peak hour business service faults / per business operation / per business service

Off Peak Hour Business Service Faults = Off peak hour business service faults / per business operation / per business service

Service Security Violations Metrics

Service security is also a key part of the quality of the services offered to consumers. These metrics measure the security violations that might have occurred during the course of service usage. The security violations are measured in terms of authentication failures and authorization failures, as indicated below.

Average Service Authentication Failures = Average number of authentication failures / per operation / per service

Average Authorization Failures = Average number of authorization failures / per operation / per service

Service Interoperability Fault Metrics

This measures the interoperability issues that are inherent in the deployed services. Due to multiple standards and technologies, the integrating service components may face problems related to incompatible standards and implementations. In order to enhance the service interoperability between heterogeneous service oriented systems, the WS-I profile has been defined, which promotes service interoperability among the various producers and consumers in SOA based systems. The interoperability fault metrics provide a good insight into how the systems fare as far as service interoperability is concerned. The following are the associated measures of this metric.

Average Number of Service Interoperability Faults = Average number of interoperability faults / per operation / per service

Average* Number of System Faults = Average number of system faults / per operation / per service / per binding

Where
* Averages are calculated over a predefined time interval, such as per day, per week or per quarter

6. Service Availability Metrics

The metrics in this section measure the availability of the services at run time in different environments, mainly production environments. The availability of the service infrastructure directly determines the quality of service offered to the consumers.


Service End Point Availability Metrics

These aim to measure the downtime of the actual services which are hosted and linked to the ESB.

Average Service End Point Availability (Overall) = ((Measurement Window Time - Overall Service Downtime) / Measurement Window Time) * 100

Average Service End Point Availability (Per Service) = ((Measurement Window Time - (Overall Service Downtime / Number of deployed services)) / Measurement Window Time) * 100

Service Wise End Point Availability = ((Measurement Window Time - Service Specific Downtime) / Measurement Window Time) * 100
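
A minimal sketch of the end point availability computation, assuming the measurement window and the recorded downtime are expressed in hours:

```python
def endpoint_availability_pct(window_hours: float, downtime_hours: float) -> float:
    """Availability (%) of a service end point over one measurement window."""
    if downtime_hours > window_hours:
        raise ValueError("downtime cannot exceed the measurement window")
    return (window_hours - downtime_hours) / window_hours * 100

# Hypothetical month: a 720 hour window with 3.5 hours of recorded downtime.
print(round(endpoint_availability_pct(720, 3.5), 3))  # 99.514
```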

Service End Point Downtime Metrics

These measurements provide statistics on the downtime of the deployed services, in terms of both the overall average downtime and the service specific downtime.

Average Service End Point Downtime (Overall) = ((Measurement Window Time - Overall Service Uptime) / Measurement Window Time) * 100

Average Service End Point Downtime (Per Service) = ((Measurement Window Time - (Overall Service Uptime / Number of deployed services)) / Measurement Window Time) * 100

Service Wise End Point Downtime = ((Measurement Window Time - Service Specific Uptime) / Measurement Window Time) * 100

Service End Point Invocations Metric

This tries to measure the success rate of service consumption across the deployed services.

Service End Point Invocations = ((Number of service invocations - Number of service failures due to non availability) / Total number of service invocations) * 100

Service Bus Availability Metrics

The availability of the service bus (wherever implemented) in SOA systems has a direct influence on the overall availability of the SOA systems. As all mediations (transformation, mapping, routing) happen through the service bus (wherever the service bus integration pattern is implemented), any downtime of the service bus itself directly affects all consumer requests passing through the service bus. Service bus availability needs to be addressed both at the logical component level (using logical clusters) and at the physical deployment level, using highly available physical topologies. The availability figures are computed as mentioned below.

Service Bus Availability = ((Measurement Window Time - Service Bus Downtime) / Measurement Window Time) * 100

Service Registry Availability (For Dynamic Binding) Metrics

This measures the availability of the service registry which is deployed as part of the overall SOA system. Being an integral part of SOA deployments, the availability of the registry will impact the overall quality of service of the deployed services. Service aggregations which use dynamic binding depend heavily on the registry during run time execution. In order to provide resilient (fault tolerant and highly available) registry services, the registry needs to be deployed on logical and physical clustering topologies. The registry availability figures are computed as below:

Service Registry Availability = ((Measurement Window Time - Service Registry Downtime) / Measurement Window Time) * 100

7. Service Usability Metrics

The metrics below measure the usability of the services by various consumers across multiple channels (bindings). These measures give an indication of how the service infrastructure is being used by various consumers, both within and external to the organization. In cases where service consumption is billed per use, these measures form the basis for calculating the service billing.

Client Wise Service Consumption

This measures the consumer wise service consumption data, which could be used for various purposes such as service metering and service billing.

Client Wise Service Consumption = (Number of service requests / Measurement Window) / Per Client (User wise)

Client Wise Service Consumption = (Number of service requests / Measurement Window) / Per Machine (IP wise)

Protocol Wise Service Consumption

This gives a further breakup of the service usage with respect to the underlying binding protocols. The binding wise service usage data can be used for metering, billing and service provisioning purposes.

Protocol (Binding) Wise Service Consumption = (Number of service consumptions / Measurement Window) / Per Client / Per Protocol (Binding) (User/Machine wise)

Average Long Lived Service Life Metrics

These metrics measure the average life time of long lived transactions (for example, asynchronous process services). The average life data of such service transactions can help in capacity planning and sizing of the underlying physical infrastructure (database persistence etc.). The computations of these measures are indicated below.

Average Long Lived Service Life = Summation (Long Lived Transaction Life Cycle Times) / Number of Long Lived Service Transactions [Over a Measurement Window]

Minimum Long Lived Service Life = Minimum (Long Lived Transaction Life Cycle Times) [Over a Measurement Window]

Maximum Long Lived Service Life = Maximum (Long Lived Transaction Life Cycle Times) [Over a Measurement Window]
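
A minimal sketch of the long lived service life summary, assuming the life cycle durations (here in hours) have already been extracted from the process engine or persistence store:

```python
def long_lived_service_life(durations_hours):
    """Average, minimum and maximum life cycle time of long lived service
    transactions over one measurement window."""
    if not durations_hours:
        raise ValueError("no long lived transactions in the measurement window")
    return {
        "average": sum(durations_hours) / len(durations_hours),
        "minimum": min(durations_hours),
        "maximum": max(durations_hours),
    }

# Hypothetical life cycle times (hours) for asynchronous process instances.
print(long_lived_service_life([2.5, 18.0, 72.0, 6.25]))
```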

8. Service Delivery Metrics

The metrics below include measurements on the delivery of the services to the consumers and to the business. These measurements help to ascertain how the services are delivered (service infrastructure) to the consumers, with respect to the pre-agreed SLAs (Service Level Agreements).

Service Availability Delivery Metrics

This measures the actual availability of the services to the consumers, as against the pre-published service availability SLA. It helps to ascertain the level of compliance with the pre-published SLAs.

Service Availability Delivery = ((Actual Service Availability / Published Service Availability) / Per Operation / Per Service) * 100

Service Performance Delivery Metrics

This measures the actually delivered service performance as against the pre-published service performance assurance levels (SLAs). These measures indicate how well the service performance SLAs are being met.

Service Performance Delivery = ((Actual Service Performance / Published Service Performance) / Per Operation / Per Service) * 100
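
A minimal sketch of the delivery compliance ratio, with hypothetical availability figures; for response time style targets, where lower is better, the ratio would need to be inverted:

```python
def sla_delivery_pct(actual: float, published: float) -> float:
    """Delivered level as a percentage of the published SLA target, per
    operation or per service."""
    return actual / published * 100

# Hypothetical: 99.2% measured availability against a published 99.5% SLA.
print(round(sla_delivery_pct(99.2, 99.5), 2))  # 99.7
```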

Service Governance Policy Change Management

This measures the change management effectiveness of the service governance policies.

Service Governance Policy Changes = Number of Changes to Governance Policies / Measurement Window

External Service Usage Metrics

Within an organization, application services can consume both services which are deployed internally and services available from external sources. External service consumption (like some of the internal services) might be subject to metering and billing. For billing validation purposes, it might be required to measure the consumption of the billable external services. Such external service usage may be computed as indicated below.

External Service Usage (Call wise) = Number of External Service Requests Made / Per Operation / Per Service [Within the measurement window]

External Service Usage (Duration wise) = Total duration of the external service requests [between the request and the response] / Per Operation / Per Service [Within the measurement window]

Service Alert Metrics and Reports

These measure the effectiveness of SOA governance, monitoring and alert management. They also provide insight into the health of the deployed services. The collected figures can also help to manage the maintenance and support of the SOA based systems. The deployed SOA monitoring systems will generate appropriate alerts, based on faults or on reaching preset threshold conditions (e.g. capacity related, volume related or memory related thresholds), and will route these alerts (events) to the appropriate users through predefined channels such as e-mail, SMS, PDA and monitoring consoles.

Service Alert Metrics (Level 1) = Number of Level 1 Service Alerts raised / per service operation / per service / per threshold

Service Alert Metrics (Level 2) = Number of Level 2 Service Alerts raised / per service operation / per service / per threshold

Service Alert Metrics (Level 3) = Number of Level 3 Service Alerts raised / per service operation / per service / per threshold

9. Service Repository Management

The metrics in this section provide indicative measurements on the service registry and discovery. These measurements help in understanding the overall service portfolio deployed in the organization. They can also help to understand how effectively the services are grouped, categorized, classified and published into the service registry.

Service Group (Classification) Density/Distribution

This measures the average number of services grouped (created and published) into a service group.

Service Group (Classification) Average = Average number of published service end points / Total number of service groups [classes]

Service Group (Classification) Minimum = Minimum number of published service end points across all service groups [classes]

Service Group (Classification) Maximum = Maximum number of published service end points across all service groups [classes]

Service Change Frequency Metrics

This measures the effectiveness and extent of service change management in the organization. These measures are computed as indicated below.

Average Service Changes = (Total Number of Service End Point Changes Published / Total Number of Service End Points) [Measurement Window]

Service Change Impact Density

This measures the extent of the impact caused by changes to the deployed services. Each service end point change could have an impact on multiple known as well as unknown consumers. Tools are available to perform impact analysis of the service changes. More importantly, these measurements can help to decide whether or not to go ahead with a change, based on the level of impact that the change brings. In order to mitigate the change impact, multiple versions of the services might need to be deployed. These measurements are computed as indicated below.

Service Change Impacted Consumers = Total Number of Impacted Consumers / Per Operation / Per Service / Per Service Change [Measurement Window]

Overall Change Impact Index = Total Number of Impacted Consumers / Total Number of Service Changes [Measurement Window]

Service Change Spread

These measures give an indication of how changes to a particular service are being handled. They can indicate whether a particular service is undergoing frequent changes and having a significant impact on the associated consumers. Frequently changing services need to be analyzed and redesigned to bring out more stable versions of the service. These measurements are gathered with the following computations:

Service Change Spread = Maximum Time between Service Changes / Per Measurement Window / Per Service
= Minimum Time between Service Changes / Per Measurement Window / Per Service
= Average Time between Service Changes / Per Measurement Window / Per Service
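
As a sketch, the change spread figures can be derived from the published change dates of a single service; the helper and the dates below are hypothetical:

```python
from datetime import date

def change_spread_days(change_dates):
    """Minimum, maximum and average time (in days) between successive
    published changes of one service within a measurement window."""
    ordered = sorted(change_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ordered, ordered[1:])]
    if not gaps:
        raise ValueError("need at least two changes to compute a spread")
    return {"minimum": min(gaps), "maximum": max(gaps),
            "average": sum(gaps) / len(gaps)}

# Hypothetical change history for one service.
history = [date(2008, 1, 15), date(2008, 3, 2), date(2008, 3, 20), date(2008, 9, 1)]
print(change_spread_days(history))  # {'minimum': 18, 'maximum': 165, 'average': 76.66...}
```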

Service Policy Density

Service policies determine how the quality of service is offered to the consumers of the deployed services. These policies determine various aspects of the quality of service, such as security, the service availability window, preferred service channels and alternative service availability. Service policies are key elements of the overall service governance adopted in the organization. These measures are computed as indicated below.

Service Policy Density = Total number of service policies deployed / Total number of service end points published [Measurement Window]

Service Version Density

In order to manage service change effectively, it may be required to deploy and run multiple versions of services at a given point in time. These measures indicate how well the change management is negotiated and help determine how the multiple versions of the deployed services are managed.

Service Version Density = Total number of service end point versions published / Total number of service end points published [Measurement Window]

Dynamic Service Discovery Index

The run time determination (binding) of service endpoints can help to enhance service virtualization and to achieve true decoupling between service consumers and providers. Dynamic binding can also help to achieve protocol independence between the service consumers and providers, and can enable enhanced service availability, as the end points are discovered and bound at run time rather than at design time. However, dynamic binding could impact the response times involved in service consumption, as the discovery is done at run time. This particular metric measures the extent of usage of dynamically bound services in an SOA based system. The index is computed as indicated below:

Dynamic Service Discovery Index = Total number of dynamic service end point requests / Total number of end points published [Measurement Window]
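
The three registry level ratios above share the same shape; a minimal sketch under assumed counts:

```python
def registry_ratio(numerator: int, published_endpoints: int) -> float:
    """Generic registry density ratio: policies, versions or dynamic discovery
    requests per published service end point."""
    return numerator / published_endpoints

# Hypothetical window: 180 policies, 95 versions and 40,000 dynamic discovery
# requests across 120 published service end points.
print(registry_ratio(180, 120))     # service policy density
print(registry_ratio(95, 120))      # service version density
print(registry_ratio(40_000, 120))  # dynamic service discovery index
```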

9. Service Maturity (CMM Levels Mapping) and Readiness Measurements

The following table includes indicative measurements which could be used to assess the SOA readiness/maturity of a business unit in a large enterprise. The measurements defined here are designed to assess the SOA readiness as per the CMM capability models. Although the CMM model may not map directly to services (as it is more process oriented), it still serves as a good model to assess the readiness.

Service Maturity (Capability Metrics) and corresponding Measurements:

Business Service Enablement Metrics (Measure Basic Service Readiness):
(Total number of live business services operational / Total number of business functions [capabilities]) * 100 [per business unit].
The higher the percentage, the better the compliance with the Basic readiness level of SOA maturity.
Define additional customized measurements to ascertain the degree of use of the services within the business unit.

Business Service Reusability Index (Measure Repeatable Service Usage Readiness):
Average (Number of consumers / per operation / per service) / Total number of services / per business unit.
The higher the reusability, the greater the compliance with the Repeatable readiness level of SOA maturity.
Define additional customized measurements to measure repeated consumption of the services (repeated use of the services) within the organization.

Business Service Definition and Realization Measurements:
The metrics in this area need to measure the practice of service modeling, service realization and service governance.
One may use multiple measurements/metrics to ascertain the practice of SOA architectures (highly defined, well modeled business services, with a defined service life cycle).
Define metrics to measure the modeling phase of the services.
Define metrics to measure the realization phase of the services.
Define metrics to measure the governance phase of the services.
A combination of these measurements can be used to assess the Defined/Reutilization readiness of the services within a business unit.

Quantitative Measurements of Services to Assess the Measured Services:
Assess the various metrics collected in the business units to measure the various parameters of the services during the different phases of the life cycle.
The metrics defined in the earlier sections of this document are some of the examples which business units/organizations can use to measure the services.
The higher the degree of service measurement adopted in the business unit, the better the compliance of the business unit with Measured Services readiness.

Optimized and Institutionalized Services State Measurements:
Design assessment techniques and procedures to understand the mechanisms that have been deployed to continuously measure and improve the services at all phases of the life cycle.
Measure how the services are improved from their previous versions/states so as to meet the business objectives in the most optimum way.
Measure how service orientation is institutionalized across the departments, business units, organizations, partners and suppliers of the organization.
The higher the degree of service improvements and optimization (resulting from the previous measurements), the better the compliance of the business unit with Optimized Services readiness.
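
As an illustration of the readiness percentages in the table above, a small sketch with hypothetical figures for one business unit:

```python
def business_service_enablement_pct(live_services: int, business_functions: int) -> float:
    """Share (%) of business functions (capabilities) exposed as live
    business services, per business unit."""
    return live_services / business_functions * 100

# Hypothetical business unit: 34 live business services covering 80 capabilities.
print(business_service_enablement_pct(34, 80))  # 42.5
```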

10. Conclusion Summary

The metrics mentioned in the earlier sections of this document help to measure the various parameters of the systems deployed as part of an SOA initiative. Based on the specific measurement needs, the measurement data needs to be collected, analysed and quantified. The collected metrics play a key role in continuous service improvement in SOA based systems, and thus help organizations to achieve their business objectives and improve their SOA maturity to an optimized/service institutionalised level.

About MphasiS

MphasiS, an EDS company, delivers Applications Services, Remote Infrastructure Services, BPO and KPO services through a combination of technology know-how, domain and process expertise. We service clients in the Manufacturing, Financial Services, Healthcare, Communications, Energy, Transportation, Consumer & Retail industries and to Governments around the world. We are certified with ISO 9001:2000, ISO/IEC 27001:2005 (formerly known as BS 7799), assessed at CMMI v 1.2 Level 5 and are undergoing SAS 70 certification. We also provide SEI CMMI, ISO and Six Sigma related services support.

MphasiS is a performance based company, dedicated to outstanding customer service. We offer capabilities to provide innovative solutions with sustainable cost savings and improved business performance through flexible engagement models. Customer centricity, transparency in operations, result-oriented activity and flexibility are the values on which we build long-term relationships with our clients.

Contact us

USA
MphasiS
460 Park Avenue South
Suite # 1101, New York
NY 10016, U.S.A.
Tel: +1 212 686 6655
Fax: +1 212 686 2422

UK
MphasiS
100 Borough High Street
London SE1 1LB
Tel: +44 20 30 057 660
Fax: +44 20 30 311 348

MphasiS
Edinburgh House
43-51 Windsor Road
Slough SL1 2EE, UK
Tel: +44 0 1753 217 700
Fax: +44 0 1753 217 701

INDIA
MphasiS
Bagmane Technology Park
Byrasandra, C.V. Raman Nagar
Bangalore 560 093, India
Ph.: +91 80 4004 0404
Fax: +91 80 4004 9999

MphasiS and the MphasiS logo are registered trademarks of MphasiS Corporation. All other brand or product names are trademarks or registered marks of their respective owners. MphasiS is an equal opportunity employer and values the diversity of its people. Copyright MphasiS Corporation. All rights reserved.
