Orchestration as Organisation: Using an organisational paradigm to
achieve adaptable business process modelling and
enactment in service compositions
by
Malinda Kaushalye Kapuruge
A thesis presented to
the Faculty of Information and Communication Technologies,
Swinburne University of Technology, Melbourne,
in fulfilment of the thesis requirement for the degree of
Doctor of Philosophy
2012
Abstract
Service Oriented Computing (SOC) has gained popularity as a computing paradigm due to its
ability to compose distributed services easily and in a loosely coupled manner. These distributed
services are orchestrated to support and enable business operations according to well-defined
business processes, a practice known as service orchestration. Many methodologies and
standards have been proposed to support and realise service orchestrations.
Business requirements are subject to change. New business opportunities emerge and
unforeseen exceptional situations need to be handled. The inability to respond to such changes
can threaten the survival of a business in competitive environments. Therefore, the defined
service orchestrations have to change accordingly. On the one hand, changing service
orchestrations defined according to rigid standards can incur substantial costs. On the other hand,
unchecked flexibility in changing service orchestration definitions can lead to violations of
the goals of the service aggregator, the service consumer and the service providers.
At the same time, the way SOC is used has evolved. Software-as-a-Service (SaaS) and service
brokering applications have emerged, profoundly advancing the basic SOC principles. These
developments demand changes to the ways in which services are composed and orchestrated in
conventional SOC, and have created additional requirements that service orchestration
methodologies should satisfy. In particular, the flexibility of a multi-tenanted SaaS
application needs to be afforded without compromising the specific goals of the individual tenants
that share the same application instance. The commonalities among multiple business processes
need to be captured and variations need to be allowed, but without compromising
maintainability. A service broker has to ensure that the goals of the underlying service providers are
protected while the consumers' demands are met. Moreover, service compositions need to
remain in continuous operation with minimal interruption while changes are being made. A
system restart is no longer a viable option.
This thesis introduces a novel approach to composing and orchestrating business services,
called Serendip, to address the above-mentioned requirements. In this work, we view a service
composition as an adaptive organisation in which the relationships between partner services are
explicitly captured and represented. Such a relationship-based organisational structure
provides the abstraction required to define and orchestrate multiple business processes that
share the partner services. It also provides the stability to realise continuous runtime evolution
of the business processes, achieving change flexibility without compromising the business
goals of the stakeholders.
Serendip makes three main research contributions. Firstly, Serendip provides an organisation-
based meta-model and language for modelling service orchestrations with controlled change
flexibility in mind. The meta-model extends the Role Oriented Adaptive Design (ROAD) approach for
adaptive software systems. Flexibility for change is achieved on the basis of the adaptability
and loose coupling of the organisational entities and the event-driven nature of the process
models. The structure and processes (behaviour) of the organisation can be changed at
runtime. Controllability of changes is achieved by explicitly capturing, and checking
the achievement and maintenance of, the goals (requirements and constraints) of the aggregator
and the collaborating service providers. Secondly, Serendip provides a novel process enactment runtime
infrastructure (engine) and a supporting framework. Together, the runtime engine and supporting
framework manage the orchestration of services in a flexible manner in a distributed
environment, addressing such issues as multi-process execution on shared resources,
management of control and message flows, and coordination of third-party services. Thirdly,
Serendip's adaptation management mechanism has been designed by clearly separating the
functional and management concerns of an adaptive service orchestration. It supports runtime
changes to both process definitions and process instances, addressing evolutionary as well as ad-hoc business
change requirements. The complete Serendip framework has been implemented by extending
the Apache Axis2 web services engine.
The Serendip meta-model, language and framework have been evaluated against common
change patterns and change support features for business processes. A case study has
been carried out to demonstrate the applicability of this research in highly dynamic service
orchestration environments. Furthermore, a performance evaluation has been conducted to
analyse the performance overhead of the runtime adaptation support in this work, as compared to
Apache ODE (a popular WS-BPEL based process orchestration engine).
Overall, this research provides an approach to orchestrating services in a flexible yet
controlled manner. It facilitates on-the-fly changes to business processes while considering the
business goals and constraints of all stakeholders, providing more direct and effective business
support.
Acknowledgements
I would like to offer my deepest gratitude to my supervisors, Prof. Jun Han and Dr. Alan
Colman, for their generous guidance. Thank you very much for the abundant hours spent
reviewing drafts and providing me with valuable advice during my candidature. I am also
indebted to Prof. Lars Grunske, Prof. Yun Yang, Dr. Adeel Talib, Dr. Minh Tran, Dr. Markus
Lumpe and Dr. Quoc Bao Vo for their advice and feedback. I would also like to thank the
thesis examiners, Prof. Schahram Dustdar and Prof. Boualem Benatallah, for taking their
valuable time to evaluate this thesis.
My thanks also go to my colleagues Justin King, Dr. Tan Phan, Tuan Nguyen, Tharindu
Patikirikorala, Dr. Cameron Hine, Ayman Amin, Ashad Kabir and Mahmoud Hussein for the
numerous discussions that positively influenced the outcome of this thesis. In addition, I
thankfully acknowledge the assistance received from Mark Coorey, Iman Avazpour, Indika
Kumara and Indika Meedeniya in proofreading this thesis.
All this would not have been possible without the Swinburne University Postgraduate
Research Award (SUPRA). I am thankful to the awarding committee and the staff for their
continuous assistance.
It is an honour for me to recall the encouragement and guidance I received from Dr. Sanjiva
Weerawarana, Dr. Alexandar Siemers and Dr. Iakov Nakhimovski to take up the challenge of
pursuing higher studies.
I am indebted to my parents, my sister and her husband for their never-ending compassion
and every word of encouragement. A very special expression of thanks goes to my loving wife,
Inoga, for her confidence, dedication and profuse support during this long journey. I should also
mention my loving son, Methvin, who had to sacrifice his time with dad for the sake of this
thesis.
I must also thank my friends Balendra, Manjula, Rukshana, Roshan, Prasanna and their
families for all the support provided during my stay in Melbourne.
Finally, I thank everyone who contributed to the successful completion of this thesis, with
an apology for not mentioning each of them by name.
Declaration
This is to certify that:
- this thesis contains no material which has been accepted for the award of any other
degree or diploma, except where due reference is made in the text of the examinable
outcome;
- to the best of my knowledge, it contains no material previously published or written by
another person except where due reference is made in the text of the thesis; and
- where the work is based on joint research or publications, it discloses the relative
contributions of the respective workers or authors.
Malinda Kaushalye Kapuruge
October, 2012
Melbourne, Australia.
Table of Contents
Chapter 1 Introduction .................................................................................................................. 1
1.1 Business Process Management ........................................................................................... 2
1.1.1 Business Process Management in Practice .................................................................. 3
1.1.2 Business Process Management in Service Oriented Systems ...................................... 4
1.2 Service Orchestration and Its Adaptation ........................................................................... 5
1.2.1 Novel Requirements for Service Orchestration ........................................................... 6
1.2.2 Runtime Adaptability in Service Orchestration ........................................................... 7
1.3 Research Goals .................................................................................................................. 10
1.4 Approach Overview .......................................................................................................... 14
1.5 Contributions..................................................................................................................... 16
1.6 Thesis Structure ................................................................................................................ 16
Chapter 2 Motivational Scenario ................................................................................................ 19
2.1 RoSAS Business Model .................................................................................................... 19
2.2 Support for Controlled Change ......................................................................................... 21
2.3 Support for Single-instance Multi-tenancy ....................................................................... 23
2.4 Requirements of Service Orchestration ............................................................................ 25
2.5 Summary ........................................................................................................................... 26
Chapter 3 Literature Review ....................................................................................................... 27
3.1 Business Process Management - an Overview.................................................................. 28
3.2 Business Process Management and Service Oriented Architecture .................................. 32
3.3 Adaptability in Business Process Management ................................................................ 34
3.4 Techniques to Improve Adaptability in Business Process Management .......................... 35
3.4.1 Proxy-based Adaptation ............................................................................................. 35
3.4.2 Dynamic Explicit Changes ......................................................................................... 38
3.4.3 Business Rules Integration ......................................................................................... 41
3.4.4 Aspect Orientation ..................................................................................................... 44
3.4.5 Template Customisation ............................................................................................ 47
3.4.6 Constraint Satisfaction ............................................................................................... 51
3.5 Summary and Observations .............................................................................................. 55
3.5.1 Summary and Evaluation ........................................................................................... 55
3.5.2 Observations and Lessons Learnt .............................................................................. 58
3.6 Towards an Adaptive Service Orchestration Framework ................................................. 61
3.7 Summary............................................................................................................................ 63
Chapter 4 Orchestration as Organisation ..................................................................................... 65
4.1 The Organisation ............................................................................................................... 66
4.1.1 Structure ..................................................................................................................... 68
4.1.2 Processes ..................................................................................................................... 74
4.2 Loosely-coupled Tasks ...................................................................................................... 76
4.2.1 Task Dependencies ..................................................................................................... 77
4.2.2 Events and Event Patterns .......................................................................................... 79
4.2.3 Support for Dynamic Modifications ........................................................................... 81
4.3 Behaviour-based Processes................................................................................................ 83
4.3.1 Organisational Behaviour ........................................................................................... 84
4.3.2 Process Definitions ..................................................................................................... 86
4.4 Two-tier Constraints .......................................................................................................... 88
4.4.1 The Boundary for a Safe Modification ....................................................................... 88
4.4.2 The Minimal Set of Constraints .................................................................................. 90
4.4.3 Benefits of Two-tier Constraints ................................................................................ 92
4.5 Behaviour Specialisation ................................................................................................... 93
4.5.1 Variations in Organisational Behaviour ..................................................................... 93
4.5.2 Specialisation Rules .................................................................................................... 97
4.5.3 Support for Unforeseen Variations ............................................................................. 98
4.6 Interaction Membranes ...................................................................................................... 98
4.6.1 Indirection of Processes and External Interactions ................................................... 100
4.6.2 Data Transformation ................................................................................................. 102
4.6.3 Benefits of Membranous Design .............................................................................. 104
4.7 Support for Adaptability .................................................................................................. 105
4.7.1 Adaptability in Layers of the Organisation .............................................................. 105
4.7.2 Separation of Control and Functional Process .......................................................... 107
4.8 Managing Complexity ..................................................................................................... 108
4.8.1 Hierarchical and Recursive Composition ................................................................. 109
4.8.2 Support for Heterogeneity of Task Execution .......................................................... 110
4.8.3 Explicit Service Relationships .................................................................................. 111
4.9 The Meta-model .............................................................................................................. 113
4.10 Summary........................................................................................................................ 114
Chapter 5 Serendip Runtime ..................................................................................................... 115
5.1 The Design of an Adaptive Service Orchestration Runtime ........................................... 116
5.1.1 Design Expectations ................................................................................................. 116
5.1.2 Core Components ..................................................................................................... 117
5.2 Process Life-cycle ........................................................................................................... 120
5.2.1 Stages of a Process Instance ..................................................................................... 120
5.2.2 Process Progression.................................................................................................. 121
5.3 Event Processing ............................................................................................................. 122
5.3.1 The Event Cloud ...................................................................................................... 123
5.3.2 Event Triggering and Business Rules Integration .................................................... 125
5.4 Data Synthesis of Tasks .................................................................................................. 128
5.4.1 The Role Design....................................................................................................... 129
5.4.2 The Transformation Process .................................................................................... 130
5.5 Dynamic Process Graphs ................................................................................................ 132
5.5.1 Atomic Graphs ......................................................................................................... 134
5.5.2 Patterns of Event Mapping and Construction of EPC Graphs ................................. 137
5.6 Summary ......................................................................................................................... 144
Chapter 6 Adaptation Management .......................................................................................... 145
6.1 Overview of Process Management and Adaptation ........................................................ 146
6.1.1 Process Modelling Lifecycles .................................................................................. 146
6.1.2 Adaptation Phases .................................................................................................... 148
6.2 Adaptation Management ................................................................................................. 150
6.2.1 Functional and Management Systems ...................................................................... 150
6.2.2 The Organiser ........................................................................................................... 151
6.2.3 The Adaptation Engine ............................................................................................ 152
6.2.4 Challenges ................................................................................................................ 153
6.3 Adaptations ..................................................................................................................... 154
6.3.1 Operation-based Adaptations ................................................................................... 154
6.3.2 Batch Mode Adaptations .......................................................................................... 155
6.3.3 Adaptation Scripts .................................................................................................... 157
6.3.4 Scheduling Adaptations ........................................................................................... 157
6.3.5 Adaptation Mechanism ............................................................................................ 159
6.4 Automated Process Validation ........................................................................................ 161
6.4.1 Validation Mechanism ............................................................................................. 162
6.4.2 Dependency Specification ........................................................................................ 162
6.4.3 Constraint Specification ........................................................................................... 163
6.5 State Checks .................................................................................................................... 164
6.6 Summary ......................................................................................................................... 166
Chapter 7 The Serendip Orchestration Framework .................................................................. 169
7.1 Serendip-Core .................................................................................................................. 171
7.1.1 The Event Cloud ....................................................................................................... 172
7.1.2 Interpreting Interactions ........................................................................................... 174
7.1.3 Message Synchronisation and Synthesis .................................................................. 177
7.1.4 The Validation Module ............................................................................................. 180
7.1.5 The Model Provider Factory ..................................................................................... 181
7.1.6 The Adaptation Engine ............................................................................................. 183
7.2 Deployment Environment................................................................................................ 185
7.2.1 Apache Axis2 ........................................................................................................... 185
7.2.2 ROAD4WS ............................................................................................................... 186
7.2.3 Dynamic Deployments ............................................................................................. 187
7.2.4 Orchestrating Web Services ..................................................................................... 189
7.2.5 Message Delivery Patterns ....................................................................................... 191
7.3 Tool Support .................................................................................................................... 194
7.3.1 Modelling Tools ....................................................................................................... 194
7.3.2 Monitoring Tools ...................................................................................................... 196
7.3.3 Adaptation Tools ...................................................................................................... 197
7.4 Summary.......................................................................................................................... 199
Chapter 8 Case Study ................................................................................................................ 200
8.1 Defining the Organisational Structure ............................................................................. 201
8.1.1 Identifying Roles, Contracts and Players .................................................................. 201
8.1.2 Defining Interactions of Contracts ........................................................................... 203
8.2 Defining the Processes .................................................................................................... 204
8.2.1 Defining Organisational Behaviour .......................................................................... 205
8.2.2 Defining Tasks .......................................................................................................... 208
8.2.3 Defining Processes ................................................................................................... 208
8.2.4 Specialising Behaviour Units ................................................................................... 211
8.3 Message Interpretations and Transformations ................................................................. 213
8.3.1 Specifying Contractual Rules ................................................................................... 213
8.3.2 Specifying Message Transformations....................................................................... 217
8.4 Adaptations ...................................................................................................................... 220
8.4.1 Operation-based Adaptations ................................................................................... 220
8.4.2 Batch Mode Adaptations .......................................................................................... 222
8.4.3 Controlled Adaptations ............................................................................................. 225
8.5 Summary.......................................................................................................................... 229
Chapter 9 Evaluation ................................................................................................................. 230
9.1 Support for Change Patterns ........................................................................................... 230
9.1.1 Adaptation Patterns .................................................................................................. 231
9.1.2 Patterns for Predefined Changes .............................................................................. 239
9.1.3 Change Support Features ......................................................................................... 242
9.1.4 Results and Analysis ................................................................................................ 245
9.2 Runtime Performance Overhead ..................................................................................... 246
9.2.1 Experimentation Set-up............................................................................................ 246
9.2.2 Results and Analysis ................................................................................................ 247
9.3 Comparative Assessment ................................................................................................ 249
9.3.1 Flexible Processes .................................................................................................... 249
9.3.2 Capturing Commonalities ........................................................................................ 251
9.3.3 Allowing Variations ................................................................................................. 252
9.3.4 Controlled Changes .................................................................................................. 252
9.3.5 Predictable Change Impacts ..................................................................................... 253
9.3.6 Managing the Complexity ........................................................................................ 254
9.3.7 Summary of Comparison ......................................................................................... 257
9.4 Summary ......................................................................................................................... 259
Chapter 10 Conclusion .............................................................................................................. 260
10.1 Contributions................................................................................................................. 260
10.2 Future Work .................................................................................................................. 263
Appendix A. SerendipLang Grammar ................................................................................ 266
Appendix B. RoSAS Description ....................................................................................... 268
Appendix C. Organiser Operations .................................................................................... 285
Appendix D. Adaptation Scripting Language..................................................................... 292
Appendix E. Schema Definitions ....................................................................................... 293
Bibliography ............................................................................................................................. 302
List of Figures
Figure 1-1. The Spectrum of Flexibility .............................................................................. 10
Figure 1-2. Processes vs. Business Requirement ................................................................ 10
Figure 1-3. Service Eco-System and the Amount of Flexibility ......................................... 13
Figure 1-4. The Research Goal ............................................................................................ 13
Figure 1-5. Traditional Approaches vs. Serendip ................................................................ 15
Figure 1-6. Adaptive Organisational Structure Providing Required Amount of Flexibility .. 15
Figure 2-1. The Collaborators of RoSAS Business Model .................................................. 20
Figure 3-1. The Relationship between BPM Theories, Standards and Systems, Cf. [95] ... 29
Figure 3-2. Categories of Process Modelling Approaches .................................................. 29
Figure 3-3. Complementary Existence of BPM and SOA ................................................... 32
Figure 3-4. The Classification Scheme for Flexibility, Cf. [59] .......................................... 34
Figure 3-5. Dominant Underpinning Adaptation Techniques ............................................. 35
Figure 3-6. The Generic Proxy Technique in TRAP/BPEL (Cf. [140]) .............................. 37
Figure 3-7. Composing Segments Using Plug-and-Play Technique (Cf. [169]) ................. 40
Figure 3-8. Decision Logic Extraction from Process to Rule (Cf. [181]) ........................... 43
Figure 3-9. Hybrid Composition of Business Rules and BPEL (Cf. [190]) ........................ 45
Figure 3-10. A Template Controller Generates an Executable Process (Cf. [203]) ............ 48
Figure 3-11. A Simple DecSerFlow Model (Cf. [71]) ......................................................... 54
Figure 3-12. An Illustration of Different Techniques Used to Improve the Adaptability .... 56
Figure 3-13. Towards an Adaptive Service Orchestration ................................................... 62
Figure 4-1. Process Structure and Organisational Structure ................................................ 69
Figure 4-2. Meta-model: Organisation, Contracts, Roles and Players ................................. 71
Figure 4-3. A Contract Captures the Relationship among Two Roles ................................. 72
Figure 4-4. Meta-model: Contracts, Facts and Rules ........................................................... 72
Figure 4-5. Roles and Contracts of the RoSAS Composite .................................................. 72
Figure 4-6. Layers of a Serendip Orchestration Design ....................................................... 76
Figure 4-7. Meta-model: Tasks, Roles and Players .............................................................. 77
Figure 4-8. The Loose-coupling Between Tasks .................................................................. 78
Figure 4-9. Complex Dependencies among Tasks ............................................................... 79
Figure 4-10. Events Independent of Where They Originated and Referenced ..................... 80
Figure 4-11. Meta-model: Events and Tasks ........................................................................ 80
Figure 4-12. Advantages of Loose-coupling ........................................................................ 82
Figure 4-13. Organisational Behaviours are Reused across Process Definitions ................. 85
Figure 4-14. Meta-model: Process Definitions, Behaviour Units and Tasks ....................... 87
Figure 4-15. Boundary for Safe Modification ...................................................................... 89
Figure 4-16. Meta-model: Types of Constraints ................................................................... 91
Figure 4-17. Process-collaboration Linkage in Constraint Specification ............................. 92
Figure 4-18. Variations in Separate Behaviour Units (a) bTowingSilv (b) bTowingPlat ..... 94
Figure 4-19. Meta-model: Behaviour Specialisation ............................................................ 94
Figure 4-20. Process Definitions Can Refer to Specialised Behaviour Units ...................... 95
Figure 4-21. Rules of Specialisation ..................................................................................... 97
Figure 4-22. Role Players Interact via the Organisation ....................................................... 99
Figure 4-23. The Interaction Membrane ............................................................................. 100
Figure 4-24. The Indirection of Indirections Assisted by Tasks ......................................... 101
Figure 4-25. Meta-model: Tasks, Events and Interactions .................................................. 102
Figure 4-26. Tasks Associate Internal and External Interactions ....................................... 104
Figure 4-27. Formation of the Interaction Membrane ......................................................... 104
Figure 4-28. Adaptability in the Organisation .................................................................... 106
Figure 4-29. Meta-model: The Organiser ........................................................................... 107
Figure 4-30. Organiser Player ............................................................................................. 108
Figure 4-31. Reducing a Complex Orchestration into Manageable Sub-Orchestrations .... 110
Figure 4-32. A Hierarchy of Service Compositions ........................................................... 110
Figure 4-33. Implementation of Task Execution is Unknown to Core Orchestration ......... 111
Figure 4-34. Service Relationships vs. SLA ........................................................................ 113
Figure 4-35. Contracts Support Explicit Representation of Service Relationships ............. 113
Figure 4-36. Serendip Meta-Model...................................................................................... 113
Figure 5-1. Core Components of Serendip Runtime ........................................................... 118
Figure 5-2. The Life Cycle of a Serendip Process Instance ................................................. 120
Figure 5-3. Process Execution Stage ................................................................................... 122
Figure 5-4. The Event Cloud ............................................................................................... 123
Figure 5-5. Message Interpretation ...................................................................................... 128
Figure 5-6. Data Synthesis and Synchronisation Design ..................................................... 130
Figure 5-7. Generating Process Graphs from Declarative Behaviour Units ........................ 133
Figure 5-8. An Atomic EPC Graph...................................................................................... 135
Figure 5-9. Mapping Identical Events ................................................................................. 138
Figure 5-10. Predecessor and Successor Processing for Merging Events ........................... 138
Figure 5-11. A Linked Graph .............................................................................................. 140
Figure 5-12. A Dynamically Constructed EPC Graph for pdSilv Process .......................... 143
Figure 6-1. (a) Process Lifecycle, Cf. [286]. (b) BPM Lifecycle, Cf. [7] ............................ 148
Figure 6-2. Process Adaptation Phases ................................................................................ 148
Figure 6-3. Separation of Management and Functional Systems ........................................ 152
Figure 6-4. Types of Operations in Organiser Interface ...................................................... 155
Figure 6-5. (a) Operations-based Adaptation (b) Batch Mode Adaptation .......................... 156
Figure 6-6. The Runtime Architecture of Adaptation Mechanism ...................................... 159
Figure 6-7. Change Validation Mechanism ......................................................................... 162
Figure 6-8. Rules of EPC to Petri-Net Mapping, Cf. [295] ................................................. 163
Figure 6-9. Preparing the Dependency Specification .......................................................... 163
Figure 6-10. Preparing the Corresponding Constraint Specification ................................... 164
Figure 6-11. States of (a) A Process Instance (b) A Task of a Process Instance ................. 165
Figure 7-1. Layers of Framework Implementation .............................................................. 170
Figure 7-2. Serendip Framework Implementation and Related Technologies .................... 170
Figure 7-3. Class Diagram: Event Cloud Implementation ................................................... 173
Figure 7-4. A Binary Tree to Evaluate Event Patterns ........................................................ 174
Figure 7-5. Example Event-Pattern Evaluation ................................................................... 174
Figure 7-6. Class Diagram: Contract Implementation ......................................................... 176
Figure 7-7. Class Diagram: Message Synchronisation Implementation .............................. 178
Figure 7-8. Class Diagram: Validation Module Implementation ........................................ 180
Figure 7-9. Class Diagram: MPF Implementation ............................................................... 182
Figure 7-10. Marshalling and Unmarshalling via JAXB 2.0 Bindings ................................ 183
Figure 7-11. Class Diagram: Adaptation Engine Implementation ....................................... 184
Figure 7-12. ROAD4WS - Extending Apache Axis2 .......................................................... 186
Figure 7-13. ROAD4WS Directory Structure ..................................................................... 188
Figure 7-14. Role Operations Exposed as Web service Operations .................................... 188
Figure 7-15. Orchestrating Partner Web services ................................................................ 190
Figure 7-16. Message Routing towards/from Service Composites ...................................... 190
Figure 7-17. Message Delivery Patterns .............................................................................. 192
Figure 7-18. Message Delivery Mechanism ........................................................................ 192
Figure 7-19. SerendipLang Editor – An Eclipse Plugin ...................................................... 195
Figure 7-20. Generating ROAD Artifacts ............................................................................ 195
Figure 7-21. Monitoring the Serendip Runtime ................................................................... 196
Figure 7-22. Serendip Monitoring Tool ............................................................................... 196
Figure 7-23. Remote and Local Organiser Players .............................................................. 198
Figure 7-24. The Adaptation Tool ....................................................................................... 198
Figure 7-25. Serendip Adaptation Script Editor – An Eclipse Plugin ................................. 198
Figure 8-1. Design of the RoSAS Composite Structure ...................................................... 202
Figure 8-2. Capturing Commonalities and Variations via Behaviour Specialisation .......... 213
Figure 8-3. An Evolutionary Adaptation via Operations ..................................................... 221
Figure 8-4. An Ad-Hoc Adaptation via Operations ............................................................. 222
Figure 8-5. An Evolutionary Adaptation via Adaptation Scripts ......................................... 223
Figure 8-6. An Ad-hoc Adaptation via Adaptation Scripts .................................................. 225
Figure 8-7. The Deviation Due to Introduction of New Task .............................................. 227
Figure 8-8. Alternative and Valid Solutions ......................................................................... 227
Figure 8-9. Changes to Constraints and Dependencies are Controlled by Each Other ........ 228
Figure 9-1. AP1: Insert Process Fragment ........................................................................... 232
Figure 9-2. AP2: Delete Process Fragment .......................................................................... 233
Figure 9-3. AP3: Move Process Fragment ........................................................................... 233
Figure 9-4. AP4: Replace Process Fragment ........................................................................ 234
Figure 9-5. AP5: Swap Process Fragment ............................................................................ 234
Figure 9-6. AP6: Extract Sub Process .................................................................................. 235
Figure 9-7. AP7: Inline Sub Process .................................................................................... 235
Figure 9-8. AP8: Embed Process Fragment in Loop ............................................................ 236
Figure 9-9. AP9: Parallelise Process Fragment .................................................................... 236
Figure 9-10. AP10: Embed Process Fragment in Conditional Branch ................................. 237
Figure 9-11. AP11: Add Control Dependency ..................................................................... 237
Figure 9-12. AP12: Remove Control Dependency ............................................................... 238
Figure 9-13. AP13: Update Condition.................................................................................. 238
Figure 9-14. PP1: Late Selection of Process Fragments ...................................................... 239
Figure 9-15. PP2: Late Modelling of Process Fragments ..................................................... 240
Figure 9-16. PP3: Late Composition of Process Fragments ................................................. 241
Figure 9-17. PP4: Multi-Instance Activity ........................................................................... 241
Figure 9-18. Performance Overhead Evaluation - Experiment Setup .................................. 247
Figure 9-19. Performance Comparison ................................................................................ 248
Figure 9-20. Performance Comparison - Neglecting the First Request................................ 248
Figure 9-21. Two-Dimensional Decoupling via Events ....................................................... 250
Figure 9-22. Late Modifications to Tasks of a Process Instance .......................................... 251
Figure 9-23. Capturing Commonalities in a Behaviour Hierarchy ...................................... 251
List of Tables
Table 3-1. Categories of Process Modelling Approaches ...................................................... 31
Table 3-2. The Summary of Evaluation of Adaptive Service Orchestration Approaches ..... 57
Table 5-1. Types of Event Publishers .................................................................................. 124
Table 5-2. Types of Event Subscribers ................................................................................ 124
Table 5-3. EPC Connectors ................................................................................................. 135
Table 5-4. The Mapping Patterns and the Actions .............................................................. 139
Table 5-5. The Truth Table for Pattern Pair Selection ......................................................... 141
Table 6-1. Adaptation Modes and Paths of Execution......................................................... 161
Table 6-2. State Checks for Different Adaptation Actions .................................................. 166
Table 7-1. Subscribed Event Patterns and Subsequent Actions ........................................... 173
Table 9-1. Evaluation Summary - The Support for Change Patterns .................................. 246
Table 9-2. Results of Runtime Performance Overhead Evaluation ..................................... 249
Table 9-3. Serendip Features to Satisfy Key Requirements – A Summary ......................... 257
Table 9-4. Results of the Comparative Assessment ............................................................. 258
List of Listings
Listing 4-1. High Level RoSAS Description in SerendipLang .............................................. 72
Listing 4-2. A Sample Contract .............................................................................................. 73
Listing 4-3. Task Definitions of Role Case-Officer ............................................................... 77
Listing 4-4. Task tTow of Role TT ........................................................................................ 80
Listing 4-5. Task tPayTT of Role CO .................................................................................... 80
Listing 4-6. A Sample Behaviour Description, bTowing ....................................................... 85
Listing 4-7. A Sample Process Definition, pdGold ................................................................ 86
Listing 4-8. A Sample Process Definition, pdPlat .................................................................. 87
Listing 4-9. Behaviour-level Constraints ............................................................................... 92
Listing 4-10. Process-level Constraints .................................................................................. 92
Listing 4-11. A Specialised Behaviour Unit Overriding a Property of a Task ....................... 95
Listing 4-12. A Specialised Behaviour Unit Introducing a New Task ................................... 95
Listing 4-13. An Abstract Behaviour Unit ............................................................................. 96
Listing 4-14. Task Definition Specifies the Source and Resulting Messages/Interactions .. 104
Listing 5-1. The Algorithm to Construct an Atomic Graph ................................................. 136
Listing 5-2. EBNF Grammar for Event Pattern .................................................................... 137
Listing 5-3. Algorithm to Link EPC Graphs ........................................................................ 142
Listing 6-1. Sample Adaptation Script ................................................................................. 157
Listing 7-1. Two Rules to Interpret Interaction orderTow Specified in CO_TT.drl ............ 177
Listing 7-2. A Sample Task Description .............................................................................. 179
Listing 7-3. XSLT File for Out-Transformation, tTow.xsl .................................................. 179
Listing 7-4. XSLT File for In-Transformation, CO_TT_orderTow_Res.xsl ....................... 179
Listing 7-5. Generated Romeo Compatible Petri-Net File ................................................... 181
Listing 7-6. Generated Romeo Compatible TCTL Expressions ........................................... 181
Listing 7-7. A Sample Adaptation Action ............................................................................ 185
Listing 8-1. Serendip Description of the RoSAS Composite Structure................................ 203
Listing 8-2. Player Bindings ................................................................................................. 203
Listing 8-3. The Contract CO_TT ........................................................................................ 204
Listing 8-4. Behaviour Units of RoSAS Organisation ......................................................... 207
Listing 8-5. Task Description ............................................................................................... 208
Listing 8-6. Process Definitions ........................................................................................... 211
Listing 8-7. Behaviour Specialisation .................................................................................. 213
Listing 8-8. pdPlat Uses The Specialised Behaviour ........................................................... 213
Listing 8-9. Drools Rule File (CO_TT.drl) of CO_TT Contract .......................................... 215
Listing 8-10. Contextual Decision Making via Rules .......................................................... 217
Listing 8-11. A Process Instance Specific Rule ................................................................... 217
Listing 8-12. XSLT File to Generate an Invocation Request ............................................... 219
Listing 8-13. XSLT File to Derive a Result Message from a Service Response .................. 219
Listing 8-14. An Adaptation Script for an Evolutionary Adaptation ................................... 223
Listing 8-15. An Adaptation Script for an Ad-hoc Adaptation ............................................ 225
Listing 8-16. A Behaviour Constraint of bTaxiProviding .................................................... 226
List of Publications
1. M. Kapuruge, J. Han and A. Colman. 2012. Representing Service-Relationships as First Class Entities in Service Orchestrations. 13th International Conference on Web Information System Engineering (WISE '12). Springer-Verlag, Berlin, Heidelberg, [to appear].
2. M. Kapuruge, J. Han and A. Colman. 2011. Controlled Flexibility in Business Processes Defined for Service Compositions. 8th IEEE International Conference on Services Computing (SCC '11). IEEE Computer Society, pp. 346-353.
3. M. Kapuruge, A. Colman and J. Han. 2011. Achieving Multi-tenanted Business Processes in SaaS Applications. 12th International Conference on Web Information System Engineering (WISE '11). Springer-Verlag, Berlin, Heidelberg, pp. 143-157.
4. M. Kapuruge, A. Colman and J. Han. 2011. Defining Customizable Business Processes without Compromising the Maintainability in Multi-tenant SaaS Applications. 4th IEEE International Conference on Cloud Computing (CLOUD '11). IEEE Computer Society, pp. 748-749.
5. M. Kapuruge, A. Colman and J. King. 2011. ROAD4WS -- Extending Apache Axis2 for Adaptive Service Compositions. 15th IEEE International Enterprise Distributed Object Computing Conference (EDOC '11). pp. 183-192.
6. M. Kapuruge, J. Han and A. Colman. 2010. Support for Business Process Flexibility in Service Compositions: An Evaluative Survey. 21st Australian Software Engineering Conference (ASWEC '10). IEEE Computer Society, pp. 97-106.
7. M. Kapuruge, J. Han and A. Colman. 2010. Towards On-demand Business Process Customization Framework for Software-as-a-Service Delivery Model. 17th Asia-Pacific Software Engineering Conference (APSEC '10) - Cloud Workshop, Sydney, Australia. pp. 21-26.
8. M. Talib, A. Colman, J. Han, J. King, M. Kapuruge. 2010. A Service Packaging Platform for Delivering Services, 7th IEEE International Conference on Services Computing (SCC '10). IEEE Computer Society, pp. 202-209.
9. T. Phan, J. Han, I. Mueler, M. Kapuruge, S. Versteeg. 2009. SOABSE: An Approach to Realising Business-Oriented Security Requirements with Web Service Security Policies. IEEE International Conference on Service-Oriented Computing and Applications (SOCA '09). pp. 1-10.
Part I
Chapter 1
Introduction
Business organisations have been increasingly adopting Business Process Management (BPM)
as part of their core business strategy due to benefits such as business-IT alignment and
optimised operational efficiency [1]. Service Oriented Architecture (SOA) has emerged as an
architectural style that provides a loosely coupled, re-usable and standards-based way to
integrate distributed systems [2]. SOA and BPM complement each other, as reflected in their
joint development within enterprise architecture [3, 4]. By combining the benefits of these
complementary developments, service orchestration approaches provide methodologies and
technologies to enable the creation of integrated IT systems that meet the needs of rapidly
changing enterprises and market-places.
Over the past few years, service orchestration methodologies have evolved and new business
and IT paradigms have emerged. Online marketplaces have become increasingly open, diverse
and complex as more computer-mediated services become available to consumers, and as
demand for those services increases [5]. The repurposing of systems to open up new revenue
channels is an emerging norm. In such service marketplaces, changes to systems become
unpredictable and frequent, while consumers expect zero downtime when these changes are
implemented.
These developments have challenged the existing service orchestration methodologies. This
thesis introduces a service orchestration methodology, called Serendip, to address these
challenges. In this chapter, we introduce the research context, the research goals and an overview of
the proposed approach. Sections 1.1 and 1.2 introduce the context of this research. In
Section 1.1 we provide an overview of BPM and how BPM is applied in Service Oriented
systems. In Section 1.2 we discuss service orchestration in the context of novel requirements
and challenges. The research goals are introduced in Section 1.3. An overview of the approach
is presented in Section 1.4, while Section 1.5 lists the key contributions of this thesis. Finally,
Section 1.6 provides an overview of the thesis structure.
1.1 Business Process Management
According to Gartner [6], the Business Process Management (BPM¹) market totalled
USD 1.9 billion in revenue in 2009, compared with USD 1.6 billion in 2008, an increase of 15%.
This reflects a substantial increase in the adoption of BPM as part of the core business
strategies of business organisations.
Due to its cross-disciplinary nature, BPM has multiple definitions, which have changed over
time and vary from one discipline to another. One of the widely recognised
definitions of BPM in Information Technology (IT) was given by van der Aalst [7]. According
to van der Aalst, “Business Process Management (BPM) includes methods, techniques, and
tools to support the design, enactment, management, and analysis of operational business
processes”.
The above definition leads to the question: what is a business process? In a broader context,
Curtis [8] defines a process as a "partially ordered set of tasks or steps undertaken towards a
specific goal". In the business context, Hammer and Champy [9] define a business process as a
"set of activities that together produce a result of value to the customer". The literature contains
many other definitions with slight variations. Irrespective of these variations, the
fundamental characteristic of a business process is the ordering of activities required to
achieve a business requirement/goal.
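To make the "partially ordered set of tasks" notion concrete, the following minimal sketch (not taken from the thesis; the task names are invented and merely echo the roadside-assistance flavour of later chapters) linearises a set of tasks with declared dependencies into one valid execution order, assuming the dependency relation is acyclic.

```java
import java.util.*;

// A sketch of a business process as a partially ordered set of tasks:
// each task maps to the set of tasks that must complete before it.
public class PartialOrderProcess {

    // Repeatedly pick any task whose predecessors are all done
    // (a naive topological sort; assumes the dependencies are acyclic).
    static List<String> linearise(Map<String, Set<String>> deps) {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>();
        while (done.size() < deps.size()) {
            for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue())) {
                    order.add(e.getKey());
                    done.add(e.getKey());
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Hypothetical tasks; only the partial order matters.
        Map<String, Set<String>> deps = new LinkedHashMap<>();
        deps.put("AnalyseCase", Set.of());
        deps.put("OrderTow", Set.of("AnalyseCase"));
        deps.put("NotifyClient", Set.of("AnalyseCase"));
        deps.put("PayTowTruck", Set.of("OrderTow"));
        System.out.println(linearise(deps));
        // [AnalyseCase, OrderTow, NotifyClient, PayTowTruck]
    }
}
```

Note that several linearisations may satisfy the same partial order (here, NotifyClient could equally run before or after OrderTow), which is precisely why a process is "partially" rather than totally ordered.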
The attraction of BPM to business organisations is primarily driven by the need to manage
business processes while continuously optimising and leveraging IT assets. BPM provides the
management model to achieve this objective efficiently by using Information Technology (IT).
Such efficiency becomes an important requirement as businesses demand shorter cycle times
for executing processes [10]. Hence, the main objective of BPM can be seen as the provision of
efficient business IT support, compared to its non-IT or manual counterparts, which helps the
organisation to function in a productive manner [11].
The adoption of BPM in businesses also formalises the understanding of business processes.
Formalised business process descriptions provide a unified view of the processes of a business
organisation. Such a unified view helps identify gaps and required improvements in business
strategies, so that business strategists can fill the gaps or find better ways of accomplishing
tasks [11]. The Barings Bank failure [12] is a good example of the disastrous effects that
unidentified gaps in business processes can have on a business. The improvement and adoption
of computing theories, notations and verification methodologies in BPM have assisted the
communication, understanding and verification of defined business processes.
¹ Hereafter, BPM means Business Process Management via Information Technology.
Furthermore, the advancing capabilities of BPM suites, such as logging, monitoring,
visualisation and process analysis, have also assisted the adoption of BPM.
Ever-increasing competition within business environments forces business organisations
to change their business processes frequently. According to Gartner, "more than two-thirds of
organisations using BPM do so because they expect that they will have to change business
processes at least twice per year" [6]. The report also stated that "18% of companies have said that
they needed to change processes at least monthly and 10% said that their processes changed
weekly". Manually adapting and verifying such frequent changes can be impractical; hence,
proper IT support is a must.
In conclusion, factors such as the requirement for increased efficiency and the need to support
frequent changes have collectively driven business organisations to adopt BPM methodologies
as part of their core business strategy.
1.1.1 Business Process Management in Practice
Business organisations practice BPM in different ways. These practices are selected depending
on the business and technical requirements of the organisation.
The most rudimentary use of IT support involves the computer-assisted drafting of process
flows or business process definitions. Usually, a software tool is used by business strategists or
policymakers to quickly draft the activity flows or the process definition and distribute it among a
set of involved partners, e.g., business owners, line managers, clerks and labourers. The objective
of this practice is to convey a specific business strategy captured via a process definition. The
partners abide by the process definition. If a change is required, the process definition is
reloaded into the software tool to apply the required changes; the changed process definition is
then redistributed among the partners. The process definitions used in this practice are usually
diagrammatic. The idea is to let humans understand the business strategy quickly and easily.
There is no intention to automate the enactment via machines. During the early stages, business
organisations developed their own proprietary notations for defining such processes, and both
business strategists and partners were required to learn them. Later, in order to
standardise the way such processes are modelled and communicated among different partners,
many standard process modelling notations were proposed [13-17]. Yet, the process execution
was carried out manually without any automation.
Due to the inefficiency of manual execution, business organisations required
executable languages [7, 18, 19]. Such languages allow machines to understand and automate
the execution of business processes. Many business process instances can be instantiated
and executed simultaneously with little or no human involvement. The advent of business-to-business
communication intensified the need for automated process execution. Subsequently,
the emerging executable process languages demanded business process execution engines. This
requirement was met by leading vendors such as Microsoft, IBM, BEA and SAP, which created a
number of process execution engines, usually as part of a complete BPM suite. This
development paved the way to placing business process execution engines and BPM
suites at the heart of enterprise systems [20].
One of the potent catalysts for increased business process automation is the advancement in
distributed computing [21]. In distributed computing, multiple geographically distributed
automated systems communicate with each other over a network. With the advancement
of Service Oriented Architecture (SOA) [2, 5] and the wide adoption of service-oriented computing
techniques such as Web services [22], the requirement for automated business process
execution has intensified [2, 23].
1.1.2 Business Process Management in Service Oriented Systems
In recent years, service oriented systems that compose reusable services in a loosely
coupled manner have emerged as a new paradigm for modelling and enacting enterprise
applications. Service Oriented Architecture (SOA) has been used either to consolidate and
repurpose legacy applications or to combine them with new applications [5]. SOA provides a set
of principles and architectural methodologies to integrate loosely coupled, distributed and
autonomous software services.
The services in SOA can be described as well-defined, self-contained and autonomous
computing modules that provide standard business functionality and are independent of the
state or context of other services [2]. A business organisation may integrate a group of such
services together to provide a combined business offering. The entity that aggregates these
services is known as the Service Aggregator. Such an aggregation of services is known as a
service composition. In a service ecosystem such a composition also can be used as another
service. In service composition, it is required to specify the logical coordination in which these
services are invoked. The logical coordination of service invocations is known as a service
orchestration which is defined by the Service Aggregator.
Service orchestration differs from service choreography [24]. A service orchestration provides
a participant's, or a service's, point of view of how it interacts with other services in a
collaboration, and it is executable. In contrast, a service choreography provides
a global view among a set of participants or services and is not executable. A choreographic
description is usually considered a global contract or agreement to which all participants commit
[25-27]. Such a choreographic description needs to be mapped to a service orchestration
description for execution purposes [28].
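The executable, participant-centric character of an orchestration can be illustrated with a small sketch. The roadside-assistance "services" below are hypothetical in-memory stand-ins for remote (e.g., WSDL-described) endpoints; a choreography, by contrast, would describe the same interactions only as a non-executable global agreement.

```java
import java.util.function.UnaryOperator;

// A hedged sketch of service orchestration from the aggregator's viewpoint:
// the orchestration logic decides the order of service invocations and
// routes data between them. All service names here are invented.
public class TowingOrchestration {

    // Each partner service is modelled as a message-in/message-out function.
    interface PartnerService extends UnaryOperator<String> {}

    static final PartnerService caseOfficer = req -> "case(" + req + ")";
    static final PartnerService towTruck    = c   -> "towed(" + c + ")";
    static final PartnerService payments    = t   -> "paid(" + t + ")";

    // The orchestration itself: a centrally defined, executable control flow.
    static String orchestrate(String request) {
        String caseRecord = caseOfficer.apply(request); // analyse the case
        String towed      = towTruck.apply(caseRecord); // order the tow
        return payments.apply(towed);                   // settle payment
    }

    public static void main(String[] args) {
        System.out.println(orchestrate("breakdown")); // paid(towed(case(breakdown)))
    }
}
```

The point of the sketch is that only the aggregator holds this control flow; each partner service sees just its own request/response exchange, which is the "participant's point of view" the text describes.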
Initially, general-purpose programming languages such as Java and C++ were used
for orchestrating services. However, such languages lack the ease of re-design and the
comprehensibility required by business users. These properties are essential for understanding,
analysing and iteratively refining a service orchestration. Consequently, a need
developed for more specific service orchestration languages.
To fill this gap, workflow modelling and enactment concepts were used [29-31]. One
influential factor in using workflows for service orchestration is the fact that workflows have been
used extensively to describe assembly lines and office procedures [31]. For example, each
processing step in an assembly line is described as an activity of a workflow.
Correspondingly, in service orchestration, each service call is described as an activity of a flow.
Following this trend of utilising workflows, many process language standards have been
proposed.
One of the key objectives of these process languages is machine readability: the languages
are to be understood by machines so that service calls, or service invocations, can be automated.
Furthermore, the development of XML-related technologies facilitated
this trend [32]. XML has been widely used for structuring data and exchanging information due to
inherent advantages such as openness, extensibility and portability. Subsequently, XML became
the norm for describing processes in emerging standards [33]. A typical early example is
BPML [34], an initiative of BPMI. Another example is XPDL [35], which is based on
XML serialisation and replaced its predecessor WSFL [36].
As of now, the de-facto standard for automated enactment of service orchestrations is WS-
BPEL [37], formerly known as BPEL4WS [22]. WS-BPEL is built on top of WSDL [38],
which is the standard for describing service signatures [22, 39]. However, WSDL cannot be
used to describe the service interaction model, and WS-BPEL fills this gap.
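To illustrate the style of such XML-based process descriptions, a WS-BPEL process wires service invocations into an executable flow roughly as follows (a minimal sketch only; the process name, partner links and operations are invented for this example):

```xml
<!-- A minimal WS-BPEL 2.0 process sketch: receive a request, invoke a
     partner service, and reply. All names are illustrative only. -->
<process name="QuoteProcess"
         targetNamespace="http://example.org/quote"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <receive partnerLink="client" operation="getQuote"
             variable="request" createInstance="yes"/>
    <invoke partnerLink="pricingService" operation="calculatePrice"
            inputVariable="request" outputVariable="price"/>
    <reply partnerLink="client" operation="getQuote" variable="price"/>
  </sequence>
</process>
```

The sequence of activities (receive, invoke, reply) is prescribed in the process definition itself, which is precisely the rigidity discussed in the following sections.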
1.2 Service Orchestration and Its Adaptation
In order to align service orchestration to the business needs of an organisation, the service
orchestration needs to be continuously managed. Service Orchestration Management can be
seen as a progressive extension of traditional Workflow Management [7, 31, 40]. The concepts
of workflow management fit with service orchestration management in most cases. However,
service orchestration must address a few additional requirements, as it is grounded in service-
based systems operating in a distributed and autonomous environment. These include managing
service endpoints, service-to-service interactions, message transformations and exceptional
situations such as network failures. The service orchestration logic needs to be managed with
respect to the autonomy of the services that are composed. In a nutshell, the owner has less
control compared to traditional in-house Workflow Management Systems, where the owner can
manage the workflow with relatively greater discretion and control.
New business models have emerged, changing the way service orchestration is used in
enterprises. Consequently, there is a demand for increased flexibility as well as for
improvements in the ways service orchestration systems are designed, enacted and managed.
The next two sections introduce these new business models and the importance of
runtime adaptability of a service orchestration.
1.2.1 Novel Requirements for Service Orchestration
In classical service orchestration, a number of services are invoked in an orderly fashion to get
some work done. This classical model has evolved. New business models for creating online
marketplaces have emerged in the enterprise. Among them, Service-Brokering and Software-
as-a-Service (SaaS) can be seen as influential models for defining online marketplaces [2, 5,
41-45]. Both Service-Brokering and SaaS depend heavily on service orchestration as the
underlying realisation technique, because the service orchestration combines many advantages
offered by both BPM and SOA. Consequently, service orchestration methodologies and best-
practices need to be modernised to support the inherent properties of these influential
developments. Service Brokers and SaaS Vendors can be seen as Service Aggregators, but with
varied business models.
Service Brokers play a significant role in the Web services ecosystem [5]. Service Brokers
position themselves in between the traditional service consumers and service providers and
generate value by bringing the service providers closer to service consumers. Service Brokers
may re-purpose the provided services and may add additional functionalities such as payment
and monitoring mechanisms to existing service offerings. A fundamental requirement for
Service Brokers is to aggregate a number of services to add value. Therefore,
improved service orchestration capabilities are paramount to ensure efficient and agile
coordination of these services.
Software-as-a-Service (SaaS) is gaining popularity as a way of delivering software on demand
over the Internet [44]. SaaS has become popular due to advantages such as quick
Return on Investment and Economies of Scale [46]. SaaS vendors can benefit from the
advantages provided by service orchestration. SaaS and SOA complement each other: SaaS
is a software delivery model, whereas SOA is a software construction model [45]. In order to
build SaaS applications using SOA and deliver service offerings using service orchestration
techniques, it is necessary to provide the agility as well as the support for multi-tenancy [42,
47] in service orchestration management systems.
In a nutshell, these new developments call for improvements in the way we design, enact and
manage service orchestration systems. This includes improvements to existing service
orchestration practices, service composition techniques, available middleware and tool support
[33]. One of the fundamental requirements supporting continuous improvement is adaptability
[48-50]. The level of adaptability as well as the manner in which the adaptability is provided
needs improvement due to the emergence of Service Brokering and SaaS business models.
1.2.2 Runtime Adaptability in Service Orchestration
A service orchestration system that is optimally designed for a given business
environment might not remain optimal in the future. Firstly, internal factors, such as changes in
business goals, strategy and technology, continuous performance optimisation and
exception handling, can call for changes to the defined service orchestration. Secondly,
external factors, such as service/network failures, new government/business regulations,
unsatisfactory QoS in the service delivery of partner services and changes in service consumer
requirements, can also stimulate changes to the defined service orchestration.
Irrespective of whether they are internally or externally induced, these changes may occur in
an unpredictable manner [51]. Practices such as specifying all the execution paths or
having a set of pre-defined adaptations for the service orchestration might therefore not be
feasible. Such challenges are evident in service orchestrations used in Service Brokering and SaaS
applications, where consumer/tenant demands as well as the service offerings can be
subject to frequent change. Therefore, the service orchestration needs to be continuously
adapted at runtime to ensure the ‘continuing fit’ between the IT support and the real-world
business process requirements.
According to Taylor et al., “Runtime software adaptability is the ability to change an
application’s behaviour during runtime” [50]. Business organisations not only require
adaptability but also the ability to continue functioning while adaptations are taking place
[49]. That means the system behaviour needs to change, but without requiring a system
stoppage and restart. A service composition is designed and deployed to achieve a business
requirement. Accordingly, a single stop/start of the service orchestration engine can have
significant effects on the business as well as on its customers [52, 53]. The service-level
agreements [54] may get violated and the reputation of the business can be degraded. Therefore,
it is necessary that the changes are carried out on the service orchestration while the service
composition is functioning with minimal or no impact to the operations. In order to achieve this,
both the service orchestration language and the enactment middleware need to be flexible
enough to accommodate runtime changes.
Van der Aalst et al. define two main categories of change in business processes,
known as ad-hoc and evolutionary [55, 56]. Ad-hoc changes are carried out on a particular
business case or a process instance [57, 58] at runtime. Usually, this type of change is carried
out in order to handle exceptions or to optimise a particular process instance. Ad-hoc changes
do not affect the system as a whole but only a single process instance. In contrast,
evolutionary changes are carried out on process definitions, affecting the system as a whole.
Allowing both these categories of change at runtime is important for a business
organisation.
Two similar categories are also highlighted by Heinl et al. [59] as Type Adaptation and
Instance Adaptation. Heinl et al. further differentiate between the terms version and variant of a
process definition: a new version replaces the old version, whereas multiple variants can co-exist.
Mapping this to the business domain, business organisations might need to re-engineer/replace
existing process definitions with newer versions. Alternatively, a new variant can be created so
that both the original and the variant process definitions can co-exist. More recent classifications,
while following the above two categories, have further extended them to broaden our
understanding of business process flexibility [51, 60-64].
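These categories can be made concrete with a small sketch (illustrative Python with invented names; it does not reflect how any particular workflow engine implements change). An evolutionary change produces a new process definition, a version or a variant, affecting future instances, whereas an ad-hoc change alters one running instance only:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessDefinition:
    name: str
    version: int                 # a new version replaces the old one
    variant: str = "standard"    # multiple variants may co-exist
    activities: list = field(default_factory=list)

@dataclass
class ProcessInstance:
    definition: ProcessDefinition
    activities: list = field(default_factory=list)  # instance-local copy

def new_instance(defn):
    """Enact a fresh instance from a definition."""
    return ProcessInstance(defn, list(defn.activities))

# Evolutionary change: a new definition version; only future instances follow it.
v1 = ProcessDefinition("OrderHandling", 1, activities=["receive", "pack", "ship"])
v2 = ProcessDefinition("OrderHandling", 2,
                       activities=["receive", "check", "pack", "ship"])

# Ad-hoc change: alter a single running instance, e.g. to handle an exception.
running = new_instance(v1)
running.activities.insert(1, "manual-approval")  # affects this case only

assert v1.activities == ["receive", "pack", "ship"]   # definition untouched
assert new_instance(v2).activities[1] == "check"      # new cases follow v2
```

A variant would be a second `ProcessDefinition` with the same name but a different `variant` value, co-existing with the original rather than replacing it.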
Many approaches have been proposed to achieve increased flexibility in business processes. Some
approaches have attempted to improve the flexibility of existing standards [57, 65-68], while others
have attempted to provide alternative paradigms [61, 69-73]. These approaches have contributed
immensely to improving the flexibility of business process support by borrowing and adapting
techniques from other fields of software engineering, such as Aspect-Orientation [74] and
Generative-Programming [75].
Nevertheless, the increased flexibility of the system can also lead to undesirable situations
[76]. In service orchestrations, safeguarding the business partnerships among the collaborating
services is important. There can be numerous change requests arising during the runtime within
the aggregator organisation itself (internal) or from the collaborating service providers and
consumers (external). Therefore, it is necessary that the changes are carried out in a systematic
and tractable manner without violating the business goals of the aggregator organisation and the
collaborators [77].
The amount of flexibility required in a business process is a function of both business domain
and time. The amount of flexibility varies from one business domain to another. For example, a
service orchestration used in handling emergency situations might require more flexibility to
adapt to unpredictable situations than a service orchestration used in a traditional billing
system, where set standards and predictable protocols are followed. In addition, the same
business domain may expect varying levels of flexibility as time goes on; for example, the
constraints that strictly enforce a specific ordering of business activities may become invalid
due to changes in business policies and procedures.
For simplicity, if we indicate an imaginary spectrum of flexibility as shown in Figure 1-1,
with two extremes, from nothing is changeable (left) to everything is changeable (right), the
required amount of flexibility for a given business domain for a given time can be positioned at
any point between these two extremes. Rigid approaches for modelling service orchestrations
suit businesses towards the left of the spectrum (less changeability), whereas extremely flexible
approaches suit businesses positioned towards the right (more changeability). The current de-
facto standard for service orchestration, WS-BPEL [37], provides a limited amount of
flexibility to adapt a service orchestration. Therefore, WS-BPEL is suited to modelling business
processes towards the left of the spectrum. Relative to WS-BPEL, AO4BPEL [68] can be used to
model businesses relatively further towards the right. Usually, a business organisation selects the
service orchestration approach depending on the amount of flexibility demanded by its business
requirements.
Therefore, it is important that a service orchestration approach allows the correct positioning
and re-positioning of the service orchestration along the flexibility spectrum to handle the
unpredictability of the amount of flexibility required. In other words, the approach should let the
business domain and time decide the amount of flexibility, i.e., decide what can be changed and what
remains unchanged. Factors that restrict flexibility, such as business regulations, can
appear and disappear. New business opportunities that demand higher flexibility can emerge
and stay for a limited time until a competitor grabs the opportunity. If the service orchestration
approach is inherently rigid, it prevents the business from quickly positioning itself towards the
right end of the spectrum even if the real-world business can currently be carried out in a more
flexible manner, e.g., upon the disappearance of a regulation that earlier limited the flexibility.
Therefore, the amount of flexibility afforded by service orchestration should not be limited by
the rigidity of the orchestration approach but by the domain specific business
regulations/constraints that may change over time. These domain specific constraints stem from
the challenges and opportunities that the business faces at runtime. The role of the service
orchestration approach should be to provide the required concepts and IT capabilities to let the
business to model the service orchestration and position itself at the most suitable point over the
flexibility spectrum line in Figure 1-1.
Figure 1-1. The Spectrum of Flexibility
1.3 Research Goals
A fundamental issue that limits the runtime adaptability of service orchestration is the way in
which the current approaches integrate services. The current popular standards for service
orchestration such as WS-BPEL wire a set of atomic services together according to a process.
Nonetheless, processes are short-lived relative to the business requirements or goals that
lead to service composition or aggregation, as illustrated in Figure 1-2. The same business
requirement could be achieved in many different ways, or processes (1 and 2). In addition,
these ways, or processes, can change frequently in comparison to the business requirement (3
and 3’).
Figure 1-2. Processes vs. Business Requirement
However, as mentioned, current standards, including the popular WS-BPEL, wire a number
of services according to a process to define a service composition. In this sense the process
becomes the fundamental concept that forms the service composition, not the business
requirements/goals. Grounding a service composition upon a process or a workflow invites a
problem.
The problem is that each time there is a change in the way the business requirement is
achieved, the complete composite is changed, resulting in a greater impact on the business. As
mentioned, the processes (i.e., the ways of achieving the business requirement) are short-lived
compared to the business requirement. It follows that, instead of the processes, a representation of
the business requirement should be the fundamental concept for composing services. Having
established such a service composition which represents a business requirement, multiple
processes (ways) may be defined upon the same composition as an added advantage. In fact,
leveraging the services using the same system infrastructure according to varied business
demands of service consumers is a requirement in both SaaS and Service Brokering business
models. Both these business models attempt to increase profit by repurposing and reusing
existing resources and infrastructure.
In addition, the relationships among the collaborating services are not explicitly represented
in the existing standards [78]. This leads to the lack of a proper abstraction required for
capturing the underlying operational environment. Such a representation is also essential to
provide the required stability in achieving flexible processes. In the absence of such an
abstraction, the processes are based directly on existing services and the concrete interactions
among them. Such direct grounding of processes is unstable in a volatile environment where
partner services can frequently emerge and disappear. As the partner services disappear, the
relationships among them also disappear, resulting in unstable processes. Therefore, in
architecting a service composition, the collaborators and the relationships with them should not
be treated as second-class entities. Rather, they are a core part of the composition and should be
treated in that way to ensure the viability of the composition in the long run.
Due to the potentially numerous runtime modifications, the service composition can become
complex and consequently hard to maintain [79]. Under such circumstances, adaptations in a
service orchestration are naturally discouraged, as they lead to increased complexity and possible
violations of system integrity. As such, the ability to adapt process definitions or
enacted process instances can be limited. Therefore, in designing a service orchestration
language, it is paramount to support a proper separation of concerns, which ultimately allows
the application of changes without increasing the complexity of the overall system. This
includes the separation of control-flow and data-flow, the separation of functional structure
from the management structure and the separation of internal and external data exchange of a
service composite. Furthermore, the modularity in service orchestration logic is also important
to ensure that the adaptations do not lead to increased complexity.
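For instance, one way such a separation of concerns could look (a hypothetical sketch only, not the Serendip notation) is to declare the control-flow and the data-flow of a process independently, so that re-ordering tasks does not disturb the data declarations and vice versa:

```python
# Control-flow: only the ordering of tasks (task -> tasks enabled on completion).
control_flow = {
    "ReceiveOrder": ["CheckStock"],
    "CheckStock": ["Ship"],
    "Ship": [],
}

# Data-flow: only what each task reads and writes, declared separately.
data_flow = {
    "ReceiveOrder": {"reads": [], "writes": ["order"]},
    "CheckStock": {"reads": ["order"], "writes": ["stockOk"]},
    "Ship": {"reads": ["order", "stockOk"], "writes": []},
}

# A control-flow change (inserting an Invoice task) touches the ordering only;
# the existing data-flow entries remain valid as they are.
control_flow["CheckStock"] = ["Invoice"]
control_flow["Invoice"] = ["Ship"]
data_flow["Invoice"] = {"reads": ["order"], "writes": ["invoice"]}

assert control_flow["Invoice"] == ["Ship"]
assert data_flow["CheckStock"] == {"reads": ["order"], "writes": ["stockOk"]}
```

In a language that entangles the two concerns, the same change would require editing every affected activity definition, which is how complexity accumulates over repeated adaptations.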
As mentioned earlier, while a service orchestration language should provide better flexibility,
it also needs to have the necessary measures to control the flexibility to protect the business
goals [76]. Complete reliance on manual verification techniques such as peer-
reviewing (upon a proposed modification) can be tedious and inefficient [80]. They may
potentially leave many errors in the service orchestration due to human error. Such errors can be
harmful in two different ways. Firstly, a wrongful modification itself can lead to loss of
business, causing immediate damage to the organisation. Secondly, the confidence level of
applying modifications to service orchestrations can decline, causing implicit and gradual
damage. This leads to a frequent bypassing of business process support systems, making the
business process support more a liability than an asset [56]. On the one hand, it is necessary to
have flexibility to deal with and survive unexpected change requirements. On the other
hand, unrestrained flexibility can be counterproductive, as it leads to wrongful modifications
that threaten the integrity of the composition. In this sense, the adaptability of a service
orchestration should be neither (a) so rigid as to prevent possible changes nor (b) so uncontrolled
as to compromise the integrity of the composition.
It should be noted that the amount of flexibility required in service orchestration is not
determined by the service aggregator alone. The aggregator has to operate in a service eco-
system, as shown in Figure 1-3. The other stakeholders (collaborators), such as service
consumers and service providers, play a major role. The service consumers and the service
providers are also businesses that have their own goals to achieve. Hindering those goals can be
counterproductive in the long run. While still providing the required flexibility for the service
aggregator to optimise the service offerings and maximise the profit, the service orchestration
mechanism needs to provide methodologies and techniques to correctly reflect such goals and
constraints in the service orchestration. The way in which such goals and constraints are
specified in service orchestration should not unnecessarily restrict the possible modifications.
For example, a global or system-wide specification of all the constraints of all the service
providers and consumers can be overly restrictive. Instead, better modularity needs to be
provided to define the scope of a constraint in a service orchestration. Such unnecessary
restriction due to the global specification of constraints will hinder the aggregator from
capturing a market opportunity.
Therefore, it is necessary to re-visit the methodologies to model, enact and manage service
orchestrations. Business organisations that wish to combine and repurpose existing services
require service orchestration methodologies that provide increased but well-controlled
flexibility. These service orchestration methodologies should satisfy the aforementioned novel
requirements of emerging service delivery models such as SaaS and Service Brokering.
This research is aimed at creating a novel service orchestration approach to
comprehensively define, enact and manage service orchestrations. As shown in Figure 1-4, the
service orchestration approach should be capable of serving multiple service consumers using
the same application instance and service infrastructure as demanded by novel usages of service
orchestrations. The same service infrastructure should be re-purposed to suit the demands of
service consumers along multiple service delivery channels to maximise reuse and thereby
profit. The unpredictable change requirements of both service consumers and
underlying service providers need to be handled by swiftly adapting to those changes, but in a
well-controlled manner to ensure uninterrupted and well-optimised service delivery. New and
varied consumer requirements need to be met. Newly emerging service providers need to be
bound in order to offer services across a wider spectrum. New collaborations need to be established and
existing collaborations need to be optimised.
Figure 1-3. Service Eco-System and the Amount of Flexibility
Figure 1-4. The Research Goal
The beneficial outcome of this research is a service orchestration approach to quickly define
and deploy adaptive service orchestrations. The approach allows a Service Aggregator to model,
enact and manage a service orchestration in an environment where unpredictability is high. In
order to achieve this outcome, this research pursues the following technical
research objectives.
1. Provide a meta-model and a language to realise adaptable orchestrations in service
compositions.
2. Design a process enactment engine to execute adaptable business processes.
Successfully use the enactment engine in a service-oriented environment to realise
adaptive service orchestration.
3. Design an adaptation management system to manage both ad-hoc and evolutionary
adaptations on service orchestrations at runtime.
4. Improve existing Web services middleware to provide a runtime platform for
realising adaptive service orchestration.
1.4 Approach Overview
To address the above mentioned research objectives, we envisage a service composition as an
organisation, where the relationships between partner services are explicitly captured. These
relationships form the basis of the organisation representing the service composition and
providing the required abstraction and stability for defining business processes. Then the
coordination logic of the organisation (i.e. organisational behaviour) is defined in a flexible and
reusable manner. Multiple business processes could be defined and changed to fulfil the varied
and changing business requirements of many service consumers.
We call this new approach Serendip. The fundamental difference between the traditional
approaches for service orchestration and the approach used in Serendip is shown in Figure 1-5.
In traditional service orchestration approaches, a process layer is usually grounded on top of a
services layer. As such, services are wired according to a business process. In contrast, Serendip
grounds the processes not directly in concrete services but upon an
adaptive organisational structure. Consequently, Serendip introduces a new layer (i.e. the
organisation) between the process and service layers.
The organisation is made up of different organisational roles and relationships between them.
Concrete services work for the organisation (i.e. are bound to the composition) by playing roles
in it. The mutual obligations, interactions and constraints among the collaborators (service
providers and consumers) are explicitly captured via the service relationships among these roles
in the organisation. Multiple processes can be defined for such a service organisation to achieve
varied business goals. However, they share and are grounded upon the same organisational
structure. The processes are defined in terms of the interactions in the organisation as captured
in its service relationships between the partners. The service relationships maintain state and
interpret the runtime interactions according to that state to determine the next action among the
many possibilities defined by the processes. The organisation and its service relationships provide
the basis for anchoring the business processes, providing a stable basis for realising changes in
the business processes and the interactions between the collaborators.
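As a rough illustration of this layering (hypothetical Python structures and names only; the actual Serendip constructs are introduced later in the thesis), roles and the relationships between them form the stable organisation, concrete services are bound to roles as players, and processes refer only to the interactions the relationships define:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Role:
    name: str
    player: Optional[str] = None   # concrete service currently bound to the role

@dataclass
class Relationship:
    role_a: Role
    role_b: Role
    interactions: list = field(default_factory=list)  # obligations between the roles

# The organisation: roles and a relationship, independent of concrete services.
motorist = Role("Motorist")
tow_car = Role("TowCar")
assist = Relationship(motorist, tow_car, ["requestTow", "confirmPickup"])

# Concrete services play roles; rebinding a player leaves the processes untouched.
tow_car.player = "FastTowService"   # invented endpoint name
tow_car.player = "CityTowService"   # a replacement provider, bound later

# A process is defined over the organisation's interactions, not over services.
process = ["requestTow", "confirmPickup"]
assert all(step in assist.interactions for step in process)
assert tow_car.player == "CityTowService"
```

Because the process references only role interactions, swapping the `TowCar` player requires no change to any process definition, which is the stability argument made above.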
Ownership of the concrete services can be external or internal to the business organisation.
The organisational structure consists of the defined service relationships between the
collaborators and provides a flexible but managed platform for defining and adapting business
processes at runtime. Each service relationship captures the collaboration aspects between two
roles. The concrete services honour these collaboration aspects while working for the
organisation by performing obligatory tasks. Multiple organisational behaviours are defined to
specify valid orderings of task executions and the constraints that should not be
violated. These organisational behaviours are subject to change according to the business
requirements, but without violating the defined constraints.
Figure 1-5. Traditional Approaches vs. Serendip
The way stability for business processes is provided in Serendip is different from
existing imperative approaches. Approaches such as WS-BPEL provide stability
through the rigidity embodied in the prescriptive nature of the language. As such, the task
ordering is strictly structured. However, if the stability for processes is provided by inherent
rigidity, it comes with the disadvantage of lacking business agility when changes are due. In
contrast, Serendip provides the stability for processes via an adaptable organisation. The amount
of flexibility that can be leveraged is determined by the business domain and time. As such, a
business may start with less flexibility following a more cautious approach and can gradually
become more flexible in exploiting the available market opportunities. That is, the amount of
flexibility exhibited in the IT composition modelled as an organisation truly reflects the amount
of flexibility required by the business domain at a given time.
Figure 1-6. Adaptive Organisational Structure Providing Required Amount of Flexibility
The Role Oriented Adaptive Design (ROAD) [81-83] approach to adaptive software systems
has such an organisational structure as the basis for service composition. However, it lacks
support for defining adaptive business processes. This research adopts the ROAD concepts and
further extends it to provide an adaptive service orchestration language and meet the new
challenges for service orchestration; in particular, those posed by the Service Brokering and SaaS
delivery models, as highlighted in Sections 1.2 and 1.3.
1.5 Contributions
The following are the original contributions of this thesis:
- An analysis of existing approaches, in terms of the employed techniques to improve
adaptability, their inherent limitations and the lessons learned.
- A meta-model that underpins a set of identified design principles to model an
adaptive service orchestration.
- A flexible language (SerendipLang) based on the meta-model to implement adaptive
service orchestration.
- An event-driven process enactment and execution engine.
- An adaptation management system to allow both evolutionary and ad-hoc
modifications during runtime.
- A Petri-Net-based temporal and causal process validation approach and toolset to
ensure the integrity of service orchestrations.
- An extension to the Apache Axis2 Web services engine to leverage adaptive
service orchestration support in the Web services middleware layer.
1.6 Thesis Structure
This thesis is structured into three main parts consisting of ten chapters. Part I consists of the
introduction (Chapter 1), a business example (Chapter 2) and a literature review (Chapter 3) to
provide the required background knowledge for this thesis. Part II presents the Serendip
approach for adaptive service orchestration by presenting the meta-model and the language
(Chapter 4), the design of the Serendip runtime (Chapter 5) and the adaptation management
mechanism (Chapter 6). Part III discusses the implementation details of the Serendip framework
(Chapter 7), a case study (Chapter 8), the evaluation results (Chapter 9) and finally a conclusion
recapping the contributions of this thesis and related topics for future investigation (Chapter 10).
Chapter 2 provides a business example of an online market place where road side assistance
is provided to motorists by integrating a number of distributed services. The introduced business
scenario is used as a running example in the remaining chapters of this thesis. The example
captures the technical challenges that a service orchestration approach has to meet in designing
and managing service orchestrations. Based on these challenges, a set of criteria has been
developed that should be satisfied by a service orchestration approach.
Chapter 3 examines the existing literature to analyse past attempts to improve the adaptability
of service orchestration. As a result of this study, a number of underpinning techniques to
improve the adaptability have been identified. These techniques are introduced by providing
example approaches from the existing literature. The existing approaches are evaluated based on
the criteria introduced in Chapter 2. Then, a number of observations and lessons learnt are
identified. Finally, an analysis of what remains to be done to achieve an adaptive service
orchestration is presented with respect to the findings from the existing literature.
Chapter 4 introduces the organisation-based approach, i.e., Serendip, to achieving adaptive
service orchestrations. The underlying basic concepts, the meta-model and the language are
presented. The basic concepts of the approach include loosely-coupled processes, behaviour-
based process modelling, behaviour-specialisation, two-tier constraints and interaction
membranes. The Serendip approach and the process modelling language are demonstrated using
examples drawn from Chapter 2. The capabilities of the Serendip approach towards achieving
the objectives of this research are also discussed.
Chapter 5 investigates the design of an adaptive service orchestration runtime that is capable
of enacting adaptive service orchestrations according to the Serendip approach. The design
expectations as well as the design decisions that fulfil those expectations are presented. It
clarifies different aspects of such an orchestration runtime, including process life-cycle
management, event processing, data synthesis and synchronisation, and dynamic process
visualisation.
Chapter 6 discusses how process-adaptations at both definition level and instance level are
managed in an adaptation management system. The challenges of designing such an adaptation
management system are first presented, followed by a detailed discussion on how those
challenges are addressed. Specific aspects discussed include how to ensure the separation of
management and functional concerns, how to facilitate remote management of service
orchestrations, and how to ensure the integrity of service orchestrations. A novel scripting
language is also introduced to support safe, batch-mode adaptation to achieve consistency,
isolation and durability properties for process adaptations.
Chapter 7 presents the Serendip Orchestration Framework, which is the middleware implementation supporting the service orchestration approach introduced in this thesis. This
chapter focuses on providing the concrete technical details on how to provide the middleware
support, including how the available technologies and standards are used to develop the
framework. It has three main parts. Firstly, the implementation details of the core components
of the framework are introduced. Secondly, the deployment environment which is based on
Apache Axis2 is presented. Thirdly, the available tool support, which includes tools for
modelling, monitoring and adapting service orchestrations, is introduced.
Chapter 8 revisits the business example given in Chapter 2 and applies our approach to it as a systematic case study. On the one hand, this chapter evaluates the applicability of Serendip to a business scenario. On the other hand, it provides guidance on how to model a Serendip orchestration from scratch. Firstly, the guidelines for modelling a Serendip orchestration are provided independently of the business example so that they can be used in general. Then, these guidelines are applied to the business example. Finally, various adaptation capabilities of the Serendip approach, as applicable to the case study, are presented.
Chapter 9 further evaluates the work of this thesis. Firstly, the adaptation capabilities of the
approach are systematically evaluated against the change support patterns and features
introduced by Weber et al. [84, 85]. Secondly, an evaluation of the amount of performance
overhead to realise adaptation in the Serendip framework is presented. Thirdly, a comparative
assessment of the approach is carried out against the criteria developed in Chapter 2.
Chapter 10 concludes the thesis by summarising the key contributions of the thesis to achieve
adaptive service orchestration. It also outlines some topics for future investigations to further
improve the Serendip approach and framework.
Chapter 2
Motivational Scenario
In this chapter we present a motivational scenario based on a business that provides roadside assistance to motorists who require assistance when their cars break down. The objective of the discussion is twofold. The first objective is to present a motivational scenario, which can be used throughout this thesis to explain the Serendip concepts by providing suitable examples. The second objective is to use the business example later as a case study to show how Serendip is applied to a concrete business scenario.
Firstly, in Section 2.1 we will introduce the business model of the RoSAS2 business organisation that provides roadside assistance as a service by integrating a number of other related services such as garages and tow-trucks. Secondly, we will identify the requirement of performing controlled changes to the defined service orchestration due to unforeseen business change requirements in the given scenario. This discussion is included in Section 2.2. Thirdly, in Section 2.3 we introduce the benefits of designing a service orchestration to support Single-Instance Multi-Tenancy (SIMT). Finally, in Section 2.4, we list a number of requirements that should be satisfied by a service orchestration approach, summarising the overall discussion.
2.1 RoSAS Business Model
RoSAS is a business organisation that provides roadside assistance services to motorists. To provide roadside assistance, RoSAS expects to integrate a number of service providers that are already available and repurpose their business offerings as part of its roadside assistance business. These include Garage/Auto-repair Services to repair cars, Tow-Truck Services to tow damaged cars and Taxi Services to provide alternative transportation. Apart from these external services, RoSAS needs to hire Case-Officers to handle the assistance requests. All of these are different functional requirements that should be satisfied in order to carry out a roadside assistance business.
Consequently, as the aggregator, RoSAS envisions multiple functional positions in its business model that need to be filled. These positions will be filled by both service consumers and service providers. For example, a motorist is a service consumer whilst a Garage is a
2 RoSAS=Road Side Assistance Service
provider. Collectively, let us call all of these collaborators, because what drives RoSAS' business is the collaboration among these service providers and consumers. These collaborators can be either internal or external to the RoSAS business organisation in terms of ownership, e.g., a Case-Officer is internal to RoSAS whilst a Taxi Service is external.
Figure 2-1. The Collaborators of the RoSAS Business Model: Motorist (MM), Case Officer (CO), Garage Service (GR), Tow-Truck Service (TT) and Taxi Service (TX), with RoSAS as the aggregator
These collaborators are located in a distributed environment as shown in Figure 2-1. For example, the web application that accepts taxi bookings for a Taxi Service may be hosted on a remote server. A motorist may use a mobile application installed on a handheld device to send assistance requests and receive notifications. Similarly, a case officer may use a desktop application to handle assistance requests. Therefore, the IT support system of the RoSAS business model should be capable of coordinating the tasks of these collaborators in a distributed environment. RoSAS uses Service Oriented Architecture (SOA) and Business Process Modelling (BPM) techniques, due to their inherent advantages [3, 4, 86], to build its IT support system. That means the RoSAS system is defined as a composition of services with business process support to coordinate the tasks among the collaborators.
What keeps such a business collaboration sustainable are the benefits available to its collaborators. RoSAS' customers benefit from the well-coordinated roadside assistance they receive. Service providers such as Garages, Tow-Truck Services and Taxi Services benefit from the increased volume of business they receive from being affiliated with RoSAS, compared to being stand-alone business entities. Case-Officers benefit from the wages they receive for providing the labour to analyse and resolve assistance requests.
As a service aggregator, RoSAS has its own business goals. While depending on the other service providers to deliver its services, RoSAS needs to market its business offerings. As a business organisation, RoSAS is trying to increase the number of consumers of its “roadside assistance service” by attempting to fulfil various market needs for improved roadside assistance. Such a service composition, as an IT system, should reflect the RoSAS business model as accurately as possible in its design. The business relationships among the collaborators need to be explicitly represented. While there is a relationship between the aggregator
(i.e., RoSAS) and each particular collaborator (e.g., a Tow-Truck service), there are also relationships between pairs of collaborators from the service composition point of view. Usually, the aggregator-collaborator relationships are maintained as Service-Level-Agreements (SLAs). However, it is important that the relationships between collaborators, i.e., collaborator-collaborator relationships, are also captured in the design of the service composition or the IT system. These relationships need to capture the mutual obligations among the collaborators in order to achieve the goals of the aggregator. For example, the obligations of a Garage towards a Tow-Truck, or of a Case-Officer towards a Garage, and so on, need to be explicitly represented and managed. All these relationships need to be directed towards achieving the goal of RoSAS, i.e., to provide roadside assistance.
In addition, RoSAS wishes to define multiple processes, to provide roadside assistance in a
variety of forms to address a wide variety of market needs. For example, RoSAS may define
three different service delivery packages Platinum, Gold, and Silver, where each package may
respond to customer requests in different ways. For example, the activities involving towing and
repairing may be included in all packages but a complementary Taxi service will only be
included in the Platinum package.
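As an illustration of such tailored packages (the activity names are invented for this sketch and are not part of the RoSAS design), the three packages could be expressed as different selections over one shared pool of activities:

```python
# Hypothetical sketch: service delivery packages defined as selections
# over a single shared pool of roadside-assistance activities.
COMMON_ACTIVITIES = ["analyse_request", "tow", "repair", "pay"]

PACKAGES = {
    "Silver": COMMON_ACTIVITIES,
    "Gold": COMMON_ACTIVITIES + ["notify_motorist"],
    "Platinum": COMMON_ACTIVITIES + ["notify_motorist", "provide_taxi"],
}

def activities_for(package: str) -> list[str]:
    """Return the activities a given service delivery package performs."""
    return PACKAGES[package]
```

The point of the sketch is that towing and repairing are defined once and reused by every package, while only the Platinum package adds the complementary Taxi service.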
The ultimate target is to use the same system as much as possible to provide differently tailored business offerings in a customisable manner. In order to effectively achieve this objective, RoSAS expects to use service orchestration technologies to automate the roadside assistance processes. In this sense, the services (e.g., Web services) exposed by the collaborating service providers such as Garage chains, Tow-Truck chains, Case-Officers3 and Taxi providers are invoked according to the service orchestration description defined by RoSAS. For example, when an assistance request needs to be analysed, RoSAS invokes the Web service exposed by the Case-Officer's software system; if a car repair is required, RoSAS invokes the Web service exposed by the currently bound Garage chain, and so on. In response, the collaborators perform their obliged tasks. For example, a Case-Officer may analyse the assistance request; the Garage chain's Web service may allocate the most suitable Garage and send it the location details. Nevertheless, the actual activities of partner services remain hidden from RoSAS, i.e., RoSAS cannot perceive or assume any knowledge of how the partner services are implemented, organised and internally coordinated.
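The invocation pattern just described can be sketched in a few lines. This is an illustrative sketch only: the function and endpoint names are invented, and real invocations would go over SOAP/HTTP rather than local callables.

```python
# Illustrative sketch of an orchestration step: the aggregator invokes
# partner services via abstract endpoints, without any knowledge of how
# each partner implements or coordinates its tasks internally.
from typing import Callable, Dict

def orchestrate_assistance(request: dict,
                           services: Dict[str, Callable[[dict], dict]]) -> dict:
    # The Case-Officer's system analyses the assistance request first.
    analysis = services["case_officer"](request)
    outcome = {"analysis": analysis}
    if analysis.get("needs_tow"):
        outcome["tow"] = services["tow_truck"](request)
    if analysis.get("needs_repair"):
        # Invoke the currently bound Garage chain's Web service.
        outcome["repair"] = services["garage"](request)
    return outcome
```

Here each partner is reduced to a callable: the orchestrator sequences the invocations but never sees how a partner fulfils its task, mirroring the opacity described above.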
2.2 Support for Controlled Change
Inevitably, a service orchestration is subject to many changes at runtime. The changes may be
applied to the service orchestration permanently or sometimes to a single customer’s case, i.e.,
3 The application the Case-Officer uses is exposed as a Web service.
specific to a particular process instance. For example, a customer of RoSAS may request to delay the towing until a friend picks her up, because the only shelter she has at the moment is inside the car. This is an isolated and uncommon requirement. Such process instance specific (ad-hoc) change requirements [55, 56] may or may not be captured in a process definition during the modelling phase, predominantly for two reasons. Firstly, it is impossible to predict all such isolated requirements at design time and hence capture them in the process design. Secondly, even if it were possible to predict them, the complexity of the process design can increase significantly due to the need to accommodate many such uncommon isolated requirements. However, being flexible in catering for such isolated, uncommon requirements would give RoSAS an advantage over its competitors in attracting and retaining customers.
On the other hand, a change such as a payment protocol update, due to switching from cash-based payments to more sophisticated credit-based payments, needs to be applied to the whole service orchestration. Such changes are called evolutionary changes [55, 56]. Evolutionary changes ensure the continuing fit between the IT service orchestration and the real-world business processes. Upon such an evolutionary change, all future process instances behave according to the changed or new definition. Consequently, the level of persistence is higher in evolutionary changes compared to the process instance specific changes mentioned above. An incorrect evolutionary change can cause devastating damage to the business.
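The difference between the two kinds of change can be sketched as follows. The classes are hypothetical and for illustration only; they are not the Serendip adaptation model.

```python
# Hypothetical sketch: an evolutionary change edits the shared process
# definition (all future instances follow it), whereas an ad-hoc change
# overrides the steps held by a single process instance only.
class ProcessDefinition:
    def __init__(self, steps):
        self.steps = steps

    def new_instance(self):
        return ProcessInstance(self)

class ProcessInstance:
    def __init__(self, definition):
        # Each instance takes a snapshot of the current definition.
        self.steps = list(definition.steps)

    def adapt(self, index, step):
        # Ad-hoc (instance-specific) change: affects this instance only.
        self.steps[index] = step

definition = ProcessDefinition(["tow", "repair", "pay_cash"])
inst_a = definition.new_instance()
inst_a.adapt(0, "delayed_tow")        # isolated customer request
definition.steps[2] = "pay_credit"    # evolutionary change
inst_b = definition.new_instance()    # enacted after the change
```

An instance enacted before the evolutionary change keeps its old payment step, while instances enacted afterwards follow the new definition.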
Being the service aggregator, supporting both process instance specific and evolutionary changes to business processes is very important for RoSAS. Nevertheless, both kinds of changes need to respect the business goals of the composition (RoSAS) as well as those of its bound collaborators, i.e., both service providers and consumers. On the one hand, RoSAS has to accommodate the demands of its service consumers, but without violating the goals of service providers. For example, a delayed towing may be accommodated for a specific process instance, but only if such a change does not lead to any violations of the underlying operations such as towing and repairing. On the other hand, changes in the underlying service provisioning should not violate the goals of service consumers either. For example, changes in underlying payment protocols should not compromise the business objectives of consumers through added overhead. In a nutshell, the same service orchestration should be modifiable as needed, but without compromising the business goals of consumers, service providers and the aggregator. Therefore, it is important that the service orchestration approach provides an appropriate mechanism to represent and safeguard such business constraints, without unnecessarily restricting the possible changes.
Apart from changes to business processes, there can be changes to the partner services as
well. RoSAS as a service aggregator may start and terminate the agreements with certain service
providers depending on factors such as performance and availability. For example, if a
particular Garage chain constantly underperforms and delays repairs, RoSAS may opt to find
another suitable Garage chain. Likewise, if a Tow-Truck service does not cover some
geographical regions, RoSAS needs to find alternative Tow-Truck services to handle accidents
in those regions.
RoSAS also hopes to expand its business offerings. Depending on customer demands, for example, additional services such as Paramedic and Accommodation services may be integrated as part of the roadside assistance process. Such aggregation of a wide variety of services gives RoSAS an edge over its competitors. The design of the service orchestration should allow such future extensibility without a complete re-engineering of the service orchestration.
2.3 Support for Single-instance Multi-tenancy
With the emergence of Cloud Computing and the maturity of Service Oriented Architecture, the Software-as-a-Service (SaaS) delivery model has gained popularity, due to advantages such as economies of scale, lower IT costs and reduced time to market [20, 45]. In order to exploit the benefits of the SaaS delivery model, RoSAS also expects to provide roadside assistance as a service. The target business sector in this case is Small and Medium Businesses (SMBs), whose core business is not roadside assistance but which can leverage it. For example, SMBs like Car-sellers, Travel agents and Insurance companies may want to attract customers by providing roadside assistance complementing their core business offerings to their customers, i.e., car buyers, travellers and policy holders. In this sense, RoSAS also acts as a SaaS vendor by providing its software system as a service on a subscription basis. These SMBs become SaaS tenants who use the RoSAS software service [45, 87]. In order to fulfil this business need, RoSAS needs to model the service composition and the service orchestration to take advantage of the SaaS paradigm.
Multi-tenancy [88, 89] is one of the most important characteristics of the SaaS paradigm. Multi-tenancy in SaaS applications has been provided at different maturity levels [90, 91]. In higher maturity models, multi-tenancy is provided by a single shared application instance, which is utilised by all tenants [89]. This is usually known as Single-Instance Multi-Tenancy (SIMT). In lower maturity models, multi-tenancy is achieved by using techniques such as allocating a dedicated application instance for each and every tenant. However, such solutions fail to exploit the benefits associated with SIMT, such as economies of scale.
In SIMT, each tenant interacts with the system as if it were the sole user [92]. Here, the term application instance should not be confused with process instance: these are two separate concepts. A single application instance may host multiple process instances. During the lifetime of an application instance, many process instances may be enacted and terminated. A tenant can request modifications to the SaaS service to suit its changed business objectives. However, these modifications are applied to a single shared application instance [90]. Subsequently, the modifications could become available to other tenants who use the same application instance. In some cases this might be a necessity, e.g., applying a patch or an upgrade. However, in other cases, modifying a common application instance can challenge the integrity of the application, compromising the objectives of RoSAS and other tenants.
One naïve solution to achieve isolation of process modifications is to allocate a separate process definition for each and every tenant, i.e., operating at a lower level of the SaaS maturity model [90]. However, it should be noted that tenants of a SaaS application have common business interests. Hence, there can be significant overlap between their business processes. Having separate process definitions obviously leads to code duplication. For example, how the towing should be done would be duplicated across several process definitions. Such duplication deprives the SaaS vendor (i.e., RoSAS) of the benefits of SIMT. When required, the SaaS vendor has to apply modifications repeatedly to these process definitions, which is inefficient, not cost-effective and error-prone. Therefore, the service orchestration approach should be able to capture commonalities among processes.
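The benefit of capturing commonalities can be sketched as follows, assuming hypothetical behaviour and tenant names: each tenant's process definition references a shared towing behaviour instead of copying it, so a change to how towing is done is made exactly once.

```python
# Hypothetical sketch: tenant process definitions reference one shared
# towing behaviour rather than duplicating it across definitions.
shared_behaviours = {
    "towing": ["locate_car", "dispatch_truck", "tow_to_garage"],
}

tenant_processes = {
    "CarSeller": ["towing", "repairing", "taxi"],
    "Insurance": ["towing", "repairing"],
}

def expand(tenant: str) -> list[str]:
    """Resolve behaviour references into a concrete activity list."""
    steps = []
    for ref in tenant_processes[tenant]:
        steps.extend(shared_behaviours.get(ref, [ref]))
    return steps

# A single modification to the shared behaviour...
shared_behaviours["towing"].insert(0, "confirm_location")
# ...is visible to every tenant process that references it.
```

With separate per-tenant definitions, the same towing change would have to be applied repeatedly; with a shared, referenced behaviour, it is applied once.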
Although capturing commonalities is essential and beneficial to any SaaS vendor, it brings two accompanying challenges. Firstly, while the tenants have common business requirements, these requirements may vary in practice. RoSAS cannot assume that a common code/script can continue to serve all the tenants in the same manner. These variations can be major or minor. For example, a Car-seller may want to provide alternative transport (Taxi) for its motorists, which might not be required by an Insurance company that needs only towing and repairing. This is a major variation. Even though towing is common to all customers, in practice some variations may exist in how the towing should be done. For example, a tenant might request a change to the pre-conditions for towing in its process definition. This is a minor variation. Therefore, the service orchestration approach should be able to allow such major and minor variations, while capturing the commonalities.
Secondly, capturing commonalities might lead to invalid boundary crossings, as a single service instance is shared. To elaborate, suppose that the tenant Car-seller requests a modification, e.g., adding a new task to perform a credit check prior to a credit-based payment. However, this modified business process might violate a business requirement of another tenant, such as the Insurance company, because it expands the overall completion time. Moreover, such changes may even lead to violations of the business models of some of the service providers, such as Tow-Trucks and Garages; e.g., such an addition will further delay the payment.
Overall, SIMT brings many benefits to RoSAS, as it allows exploiting advantages such as economies of scale. Other SMBs who need to provide roadside assistance to their customers benefit from advantages such as a quick return on investment. Most work on meeting similar challenges has occurred in providing virtual and individualised data services to tenants. However, defining processes is also a key aspect of SOA, and one that needs addressing if service orchestration approaches are to be deployed as true multi-tenanted SaaS applications.
2.4 Requirements of Service Orchestration
The above discussions reveal that there are a number of business-level desires to be satisfied by a service orchestration approach. To support these business-level requirements, the service orchestration, as the IT solution, should address key IT-level requirements. These key requirements are listed as follows.
Req1. Flexible Processes: As the business changes, the business processes and their support need to change accordingly, including changes to individual process instances or to a process definition (i.e., all its instances). The changes may be due to the needs of the collaborators, the aggregator or the customers. These changes can be about exception handling, process improvement or capturing strategic business opportunities.
Req2. Capturing Commonalities: A common application instance needs to be shared and repurposed as much as possible in order to achieve the benefits of economies of scale. The common requirements need to be captured in the service orchestration. The approach should provide the necessary modularisation support to facilitate this requirement.
Req3. Allowing Variations: Customer requirements are similar but not identical. The ways to achieve these business requirements can show major as well as minor variations. While capturing the commonalities, it is essential to allow variations as required. Variations should be allowed in a manner that does not lead to unnecessary redundancy.
Req4. Controlled Changes: Amidst numerous changes to business processes, (i) the business goals of collaborators, (ii) the business goals of the composition and (iii) the business goals of the clients, as represented by business processes, need to be safeguarded. This is non-trivial when a shared application instance is used to define multiple process definitions.
Req5. Predictable Change Impacts: The impact of changes (i) to business processes and (ii) to collaborator coordination needs to be easily identified, so that remedial actions or subsequent changes can be taken if the change is to be fully and consistently realised. Such impacts can propagate from one process to another, from a collaborator to a whole process, or from a process to its collaborators.
Req6. Managing Complexity: A service orchestration is subject to numerous runtime modifications. A single instance of a service composition is customised and repurposed in a shared manner. Such requirements can easily lead to a complex service orchestration system. If the necessary arrangements are not made to address this increased complexity, the service orchestration system may eventually become unusable. Therefore, the service orchestration mechanism should provide a well-modularised architecture to manage the complexity.
2.5 Summary
In this chapter, we have presented a business organisation that requires better IT support to define, enact and manage its business processes. Firstly, we introduced the business model of an organisation (RoSAS) that integrates a number of third-party partner services to provide roadside assistance to motorists. The nature of such a collaborative business environment was explained with suitable examples.
Secondly, we explained the requirement of carrying out controlled changes in such a collaborative business environment. These changes might need to be carried out to handle a specific customer case or the system as a whole. Nevertheless, the changes need to be carried out while safeguarding the business goals of the service aggregator, the service consumers as well as the service providers.
Thirdly, we introduced the importance of supporting Single-Instance Multi-Tenancy (SIMT) in a service orchestration. Apart from being a mere service aggregator, RoSAS can attract many customers with the support of SIMT. Roadside assistance can be leveraged as a service by other business organisations whose core competency is not roadside assistance.
Finally, we summarised the overall discussion into a set of key requirements that a service
orchestration approach should support.
Chapter 3
Literature Review
This chapter presents a review of the existing literature to analyse past attempts to improve the adaptability of service orchestration. As a preamble to the discussion, Section 3.1 provides an overview of BPM (Business Process Management). Section 3.2 then describes how BPM and SOA (Service Oriented Architecture) complement each other in the enterprise architecture to provide a combined solution to orchestrate distributed applications or services. Section 3.3 discusses the current understanding of adaptability in BPM.
On the basis of these analyses and discussions, Section 3.4 examines different techniques proposed in the literature to improve the adaptability of service orchestration. Each technique will be clarified with a number of example approaches. These approaches are selected based on their significance and practicality. The selection is not limited to approaches explicitly proposed for service orchestration per se. In some cases we go beyond the scope of service orchestration to include approaches proposed for business process modelling in general. This is done with the intention of identifying and analysing the underpinning techniques that could be used to improve the adaptability of service orchestration.
There are two main aims of this discussion. The first aim is to provide the reader with a broader understanding of how different techniques have attempted to improve the adaptability of business processes, laying a firm basis for upcoming discussions in this thesis. The second aim is to identify the inherent advantages and limitations of the different techniques, providing input to the work of this thesis.
Finally, a summary and the observed outcomes of the reviewed approaches are presented in Section 3.5, based on the criteria presented in Chapter 2. The summary of the review is presented in Section 3.5.1. A number of observations and lessons learnt from the existing literature are presented in Section 3.5.2. Grounded upon these observations and lessons learnt, Section 3.6 discusses the remaining challenges in improving the adaptability of service orchestration to provide flexible support for business processes.
3.1 Business Process Management - an Overview
Prior to the era of Information Technology, organisations used manual, paper-based business processes to coordinate organisational activities. Usually, a file (or an artifact) was passed from one person to another within the organisation, each performing a certain task. However, this method was unproductive, as the file or artifact was often delayed or misplaced [11]. In addition, it was difficult to identify the gaps and possible optimisations in business processes due to the lack of a unified view within the organisation [93, 94]. Consequently, with the advancement of Information Technology, software systems came to be used in organisations to assist humans with this purpose. These systems naturally evolved into what we call Business Process Management Systems (BPMS) today. The activity of managing business processes is known as Business Process Management (BPM) [1].
According to van der Aalst et al. [7], BPM is an extension of Workflow Management. The authors define a BPMS as “a generic software system that is driven by explicit process designs to enact and manage operational business processes”. The term ‘BPM’ has been misused amidst the hype in both industry and academia. In addition, other acronyms such as BPA, BAM, WFM and STP have been used with overlapping meanings. The above definition is important because it separates BPM from related disciplines such as BPA (Business Process Analysis), BAM (Business Activity Monitoring), WFM (Workflow Management) and STP (Straight Through Processing) [7]. For clarity, throughout this thesis we adhere to the above definition of BPM by van der Aalst et al.
BPM is recognised as a ‘theory in practice’ subject [95]. In this sense, both theoretical advancements and technical innovations are directly utilised by industry applications. Theories such as Pi-calculus [96, 97] and Petri-Nets [98, 99] have been employed in the field of BPM. These theories provide an important formal foundation for industry applications via BPM systems. Leading vendors in the IT industry have backed this trend [6, 100]. BPM standards and specifications have been proposed to fill the gap between theory and practice, as shown in Figure 3-1 (cf. [95]). These are usually known as BPM languages. These languages have paved the way for practitioners to easily put BPM theories into practice to address day-to-day business requirements [101].
BPM standards and specifications differ from each other due to the underlying objectives for which they were proposed. One of the most recent categorisations of these specifications is given by Ko et al. [102]. This categorisation is based on the BPM phases proposed by van der Aalst et al. [7]. According to Ko et al., these standards can be categorised as Graphical, Execution, Interchange and Diagnosis standards. The Graphical standards are used to model business processes. The Execution standards are used to help with process deployment and automation. The Interchange standards are used to interchange business processes across different BPMS systems. Finally, the Diagnosis standards are used to provide administrative and monitoring capabilities in a BPMS. However, it should be noted that the boundary between these categories is not always sharp, as acknowledged by the authors. For example, YAWL [103] is both a Graphical and an Execution language.
Figure 3-1. The Relationship between BPM Theories, Standards and Systems, Cf. [95]
We further examine and categorise the Graphical and Execution standards below. A discussion of Interchange and Diagnosis standards is outside the scope of this thesis. In this discussion, however, we opt to use the term Modelling standards instead of Graphical standards for clarity, as we consider the former better aligned with the BPM phase in which these standards are used. The categorisation we propose identifies the fundamental concept that each standard or language is based on. We call such a concept the principal concept, which anchors all other related concepts around it. For example, some approaches use ‘activity’ as the principal concept, while other approaches use ‘artifact’ or ‘data object’ as the principal concept to model business processes. A proper understanding of the fundamental concept of an existing language provides a basis to further extend the discussion on the applicability of BPM techniques in adaptable SOA systems. The categorisation is given in Figure 3-2. Each category is discussed in detail below.
Figure 3-2. Categories of Process Modelling Approaches: Activity-based, Constraint-based, Artifact-based, Time-based and Role-based
Activity-based: In activity-based process modelling, the principal concept is the activity, which is an execution step of a complete process. The activities are arranged by procedurally specifying the order in which they should be executed, in either a flow-based or a block-structured manner. In flow-based approaches, activities are arranged as a flow. Many graphical process notations are flow-based due to their ease of understanding. Typical examples of flow-based approaches are WSFL [36], EPC [17] and BPMN [13]. In block-structured approaches such as BPEL [104], BPML [34] and XPDL [105], the activities are
arranged using block structures such as loops and conditional branches. Influenced by
imperative programming principles, block-structured approaches [106] became popular due to
their ease of expressing execution semantics that can be directly interpreted by an execution
engine. The main advantage of activity-based process modelling in general is its explicit
ordering of each step in a business process.
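The block-structured idea can be illustrated with a small sketch. The interpreter below is hypothetical (it is not BPEL, and the block names and the order-handling process are invented for illustration); it shows how a nested block structure directly yields execution semantics an engine can interpret.

```python
# A minimal, hypothetical interpreter for block-structured activity
# ordering: a process is a nested structure of 'sequence' and 'branch'
# blocks whose leaves are activities. Walking the structure returns the
# order in which activities execute.

def execute(block, context, trace=None):
    """Recursively interpret a block-structured process definition."""
    if trace is None:
        trace = []
    kind = block[0]
    if kind == "activity":                        # leaf: record the step
        trace.append(block[1])
    elif kind == "sequence":                      # run children in order
        for child in block[1]:
            execute(child, context, trace)
    elif kind == "branch":                        # (cond, then, else)
        _, cond, then_block, else_block = block
        execute(then_block if cond(context) else else_block, context, trace)
    return trace

# A hypothetical order-handling process: check stock, then either ship
# the order or reorder the item.
process = ("sequence", [
    ("activity", "check_stock"),
    ("branch", lambda c: c["in_stock"],
        ("activity", "ship"),
        ("activity", "reorder")),
])

print(execute(process, {"in_stock": True}))   # ['check_stock', 'ship']
```

Because every alternative path is spelled out in the structure itself, changing the process means editing the structure, which is precisely the rigidity discussed later in this chapter.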
Constraint-based: In constraint-based approaches the process execution is usually viewed as
a constraint satisfaction problem (CSP). In a CSP, a solution is an assignment of variables over
a variable domain such that all constraints are satisfied [107]. Mapping this concept into BPM,
constraint-based approaches use a set of constraints to specify what cannot be done, rather than
what should be done, throughout the process execution. The constraint is the principle concept
in these approaches. Such constraints are described based on objects [108, 109] or on temporal
properties [110, 111] among discrete activities. Constraint-based approaches are used to define
goal-driven business processes, where it is easier to specify ‘what’ needs to be done to
achieve the goal rather than ‘how’. A path of execution or a procedure is just a single solution
(among many) that satisfies the set of specified constraints.
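The constraint-based view can be sketched as follows. The constraint forms and the activities below are illustrative only, not taken from ConDec or DecSerFlow: a set of declarative checks marks which traces are forbidden, and any trace violating none of them is a valid way to reach the goal.

```python
# A minimal sketch of constraint-based process checking: constraints are
# predicates over an execution trace, and a trace is valid iff every
# constraint holds. Any of the many valid traces is an acceptable
# execution path.

def precedes(a, b):
    """Temporal constraint: b may only occur after a has occurred."""
    def check(trace):
        return all(a in trace[:i] for i, act in enumerate(trace) if act == b)
    return check

def at_most_once(a):
    """Cardinality constraint: a occurs at most once."""
    def check(trace):
        return trace.count(a) <= 1
    return check

def is_valid(trace, constraints):
    return all(check(trace) for check in constraints)

# Hypothetical claim-handling constraints: payment needs prior approval,
# and payment happens at most once. Everything else is left open.
constraints = [precedes("approve", "pay"), at_most_once("pay")]

assert is_valid(["register", "approve", "pay"], constraints)
assert is_valid(["approve", "register", "pay"], constraints)  # another solution
assert not is_valid(["register", "pay"], constraints)         # pay before approve
```

Note that two different orderings both pass: the model never enumerates paths, it only rules some out.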
Artifact-based: Artifact-based approaches define processes based on the lifecycles of a set of
business artifacts and how/when those artifacts are manipulated [112]. Here, a business artifact
can be a document, a product or a data record [113]. Artifact-based approaches show a close
relationship with activity-based approaches, as the artifacts are modified as a result of
performing activities. However, artifact-based approaches do not use the activity as the
principle concept. Instead, the principle driver for the progression of a process instance is not
the completion of an activity but the change in the status of the artifacts. Depending on how and
when the status of the artifact changes, the next activities are chosen and the process
progresses [61]. Case-driven process modelling or case handling [70, 114] can also be seen as
an evolution of the artifact-based approach [114], where a ‘case’ can be seen as a business
artifact and the process progression is determined by the completion of a case. Artifact-based
process modelling has been used heavily to model product lines, inventory management and
case handling systems [70, 115].
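A small sketch makes the artifact-centric progression concrete. The insurance-claim artifact, its statuses and the activity names below are all hypothetical; the point is that the next activity is chosen from the artifact's current status rather than from a fixed activity ordering.

```python
# A minimal sketch of artifact-driven progression: the process advances
# by inspecting the status of the business artifact (here, a hypothetical
# insurance claim) and picking the activity that applies to that status.

NEXT_ACTIVITY = {                      # artifact status -> activity to run
    "received": "assess_damage",
    "assessed": "approve_payout",
    "approved": "pay_claim",
}

TRANSITIONS = {                        # activity -> resulting artifact status
    "assess_damage": "assessed",
    "approve_payout": "approved",
    "pay_claim": "closed",
}

def run(claim):
    """Advance the claim until no activity applies to its status."""
    trace = []
    while claim["status"] in NEXT_ACTIVITY:
        activity = NEXT_ACTIVITY[claim["status"]]
        trace.append(activity)
        claim["status"] = TRANSITIONS[activity]   # activity updates artifact
    return trace

claim = {"id": 7, "status": "received"}
print(run(claim))   # ['assess_damage', 'approve_payout', 'pay_claim']
```

The control logic lives entirely in the artifact's lifecycle tables, so adding a status or an activity changes the process without touching any flow definition.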
Time-based: Time-based process modelling approaches are usually used for process
scheduling and estimation purposes. In time-based approaches, the activities are represented
along a time axis, using ‘time’ as the principle concept. The start and end times of activities are
marked by explicitly representing the time dimension. Two early examples are Gantt charts
[116] and PERT charts [117, 118], which have been used for decades for project management
and planning purposes.
Role-based: Role-based approaches use the role as the principle concept. Initially the roles
are defined; then the relationships among the roles are represented via objects or via an
interaction specification [119]. Role-based approaches add much value to business process
modelling in collaborative environments, where each participant has a role to play [120].
Role-Interaction Networks (RIN) [40], Role-Activity Diagrams (RAD) [121] and OOram [122]
are popular examples of role-based approaches. Role-based process modelling has been used to
model collaborative or inter-organisational business processes.
A summary of the different categories of process modelling approaches along with popular
examples from the literature is given in Table 3-1.
Table 3-1. Categories of Process Modelling Approaches
Category | Key Characteristics | Main Usages | Examples
Activity-based | Flow- or block-structured specification of activity ordering, using the activity as the principle concept. All activity execution alternatives are explicitly specified. | Procedural task execution, scientific workflows | WSFL [36], BPMN [13], EPC [17], BPEL [104], BPML [34], XPDL [105]
Constraint-based | Implicit specification of alternative activity executions, using the constraint as the principle concept. Process execution is viewed as a constraint satisfaction problem. | Goal-driven process modelling | ConDec [110], DecSerFlow [71], SEAM [121]
Artifact-based | Explicit representation of data/artifact/product state as the principle concept. Process execution is viewed as an artifact-lifecycle management problem. | Product lines, case handling | Product-based [123], Artifact [113], Case Handling [70]
Time-based | Explicit representation of the time dimension as the principle concept. Tasks are organised/scheduled along a time dimension. | Project management and planning | Gantt charts [116], PERT charts [117, 118]
Role-based | Explicit representation of roles and the interactions among them, using the role as the principle concept. Process execution is viewed as task completion by roles. | Collaborative business interactions | RIN [124], RAD [121], OOram [122]
3.2 Business Process Management and Service Oriented Architecture
Service Oriented Architecture (SOA) has emerged as an architectural style that provides a
loosely coupled, reusable and standards-based method to integrate distributed systems [2]. The
vendor-driven push towards Web services-based middleware accelerated this trend [5, 22]. With
SOA, legacy applications are exposed as reusable services. Services are composed to create new,
value-added service offerings. This leads to the requirement of orchestrating the activities of the
integrated services.
Initially, general-purpose programming languages such as C and Java were used to integrate
distributed services. However, due to their lack of agility in supporting frequently changing
business requirements, more agile languages and standards were required to orchestrate
services.
The advent of BPM in enterprise architecture provided a solution for this requirement. In fact,
SOA and BPM complement each other and exhibit a fine fit. This fit is mainly due to the
fundamental but complementary differences between the two disciplines, as illustrated in Figure
3-3. BPM is primarily business driven and is used to efficiently coordinate organisational
activities. BPM follows a top-down approach and intends to find a solution to a specific business
problem, e.g., how to complete a shipment or how to book a ticket. In contrast, SOA is primarily
IT driven and is used to specify loosely coupled and reusable distributed systems, leveraging
business capabilities across organisational boundaries [4]. SOA usually employs bottom-up
development [125] and addresses the problem of how to design the system infrastructure. It
should be noted that both BPM and SOA can survive without each other; indeed, they did so in
the past. However, collectively BPM and SOA can bring more benefits to the enterprise
architecture [3, 86, 126]. BPM practices are used to integrate and orchestrate the distributed
services in the enterprise architecture [11, 127, 128]. This has led to many languages to
orchestrate services, known today as service orchestration languages.
Figure 3-3. Complementary Existence of BPM and SOA
Reflection on early proposals for service orchestration languages reveals that activity-based
process modelling (Section 3.1) has been the most influential among the categories discussed.
The wide usage of workflow management systems and the already established imperative
programming principles
provided a solid foundation for defining business processes for service compositions [129-131].
Furthermore, the technology push towards XML as a textual data format has also influenced
the design of XML-based service orchestration languages [132].
One of the early examples of XML-based languages is WSFL [36], developed by IBM. The
motivation was to develop a language that can define the execution order and data exchange
among web services. WSFL was influenced by FlowMark/MQ (now WebSphere® MQ
Workflow [133]), a workflow language developed by IBM. Thus, many characteristics of
workflows were evident in WSFL as well. In parallel, Microsoft was developing XLANG [19]
as its own business process modelling language. XLANG followed a block-structured
methodology to define the order of activity execution. Furthermore, many other approaches
were proposed by leading software vendors and standardisation bodies, e.g., BPML [34] by
Intalio, WSCI [134] by Sun, BEA and SAP collectively, and ebXML [135] by OASIS, just to
name a few. This led to a situation with many competing languages to define service
orchestrations.
Consequently, to find a standardised manner to orchestrate services, Microsoft, IBM and
BEA jointly proposed the BPEL4WS 1.0 specification. The BPEL4WS specification was
influenced by its predecessors WSFL and XLANG; hence BPEL4WS can be seen as a
compromise between the two [136]. This was a significant step in the standardisation process.
Then SAP and Siebel joined in to produce BPEL4WS 1.1 [104], which received wide
acceptance from the industry. Later, OASIS4 officially made WS-BPEL 2.0 [137] an OASIS
standard with the participation of many software vendors. Due to this wide acceptance and
standardisation process, WS-BPEL is currently the de-facto standard for defining business
processes in service compositions [136, 138]. WS-BPEL marked an important milestone in the
history of service orchestration and even today remains the most used standard for service
orchestration.
Although WS-BPEL has become the de-facto standard and is widely accepted by the
community, mainly in industry, it has often been criticised for its lack of adaptability
[53, 66, 67, 139, 140]. WS-BPEL provides many fault handling mechanisms in its
specification, but such provisions are not enough to adequately handle many runtime errors,
including partner service failures [140] and ad-hoc deviations in execution paths [63, 84, 141].
Therefore, there is increased attention from both industry and academia on improving the
adaptability of business process management in service compositions.
4 http://www.oasis-open.org/
3.3 Adaptability in Business Process Management
The requirement for adaptability in business process management is well understood [60, 142-
144]. The primary goal of introducing adaptability to a system in general is to enable the
system to survive changes to the environment under which it operates. In order to facilitate this
adaptability, the business process management approach needs to be flexible. That is, business
process management systems should provide flexible modelling approaches to model adaptable
service compositions. Much work has been done to identify the types of flexibility that should
be present in business process management [59, 62, 63, 145-148].
Heinl et al. [59] identify flexibility as an objective of a workflow system and provide a
classification schema in terms of the concepts and methods used to achieve that objective, as
shown in Figure 3-4. According to this classification, flexibility can be achieved either through
the ‘by selection’ or the ‘by adaption’ concept. Flexibility by selection is usually achieved by
modelling various possible execution paths. The authors further specify two methods, i.e., late
modelling and advance modelling, to achieve flexibility by selection. On the other hand,
flexibility by adaption is achieved by changing the processes to reflect the new business
requirements; that is, the execution path is ‘altered’ to adapt to the changing requirements.
The authors further divide flexibility by adaption into ‘type’ and ‘instance’ adaption, where the
former refers to adapting a process definition and the latter refers to adapting a process instance.
Figure 3-4. The Classification Scheme for Flexibility, Cf. [59]
Further extending the above classification, Schonenberg et al. [63] propose an extensive
taxonomy of process flexibility. The taxonomy defines four flexibility types, i.e., Flexibility by
Design, Flexibility by Deviation, Flexibility by Under-specification and Flexibility by Change.
This provides a terminology and a foundation to analyse the various kinds of flexibility
presented in proposed approaches. This classification overlaps significantly with the
classification of Heinl et al., but provides a concrete taxonomy for identifying the techniques to
achieve flexibility in business process support. Furthermore, many other classifications can be
found in the literature [60, 62, 145-148]. However, this thesis uses the aforementioned
taxonomy developed by Schonenberg et al. due to its widespread recognition and its
applicability to this work.
3.4 Techniques to Improve Adaptability in Business Process Management
Over the past decade, a plethora of approaches has been proposed to improve the adaptability
of business process support. Inevitably, a large number of approaches have targeted
common standards such as WS-BPEL and BPMN due to their wide use [53, 65-67, 123, 139, 140,
149]. These approaches tried to find solutions to the adaptability concerns of these popular
standards. On the other hand, another set of approaches [61, 70, 71, 150] have attempted to find
alternative languages that can provide better business process support. A close inspection of
these approaches taken as a whole reveals that the improvements to adaptability have been
achieved on the basis of a number of underpinning techniques.
In this section we introduce the dominant underpinning adaptation techniques, as shown in
Figure 3-5. These techniques are analysed, citing suitable example approaches from the
literature. Note that the example approaches mentioned in this section may overlap in terms of
the techniques they employ to achieve adaptability. In addition, the terms used by individual
authors might be confusing, overlapping or misleading, as the terms are used in different
contexts. However, this categorisation is based on the most dominant underlying technique
employed to improve adaptability. Due to the large number of approaches proposed in the past,
this section cannot provide a comprehensive analysis of all of them. Therefore, a representative
set of approaches was selected based on their significant contribution, practicality and
originality. These approaches are presented in detail, highlighting their inherent advantages and
limitations.
Figure 3-5. Dominant Underpinning Adaptation Techniques
3.4.1 Proxy-based Adaptation
One of the issues that service orchestration has to deal with is service unavailability. A service
composition depends on its partner services. However, these partner services are autonomous
[151]; hence the management of partner services is beyond the control of the service aggregator.
When a partner service fails, the whole process tends to fail. As a solution to this problem, the
technique of proxy services has been employed.
In the Robust-BPEL approach, an adapt-ready, more robust version of a BPEL process is
generated to add self-healing behaviours using a static proxy [152]. This solution enhances
the fault handling capabilities of BPEL-based service orchestration upon service failures. The
approach detects the need for adaptation by including service invocation monitoring code that
monitors the individual services. Upon an invocation failure, a statically generated proxy
service replaces the failed service [152]. Although previously proposed approaches such as
BPELJ [153] handle faults in a similar fashion by allowing Java code snippets to be used
along with the core process, BPELJ requires a special engine to enact the processes. In contrast,
Robust-BPEL achieves the same without any modification to the BPEL orchestration engine.
Furthermore, as the more robust version of a BPEL process is automatically generated, the
engineering effort is also reduced. However, in terms of adaptability there are some limitations
to this approach. Firstly, it is not possible to dynamically discover and invoke an alternative
service upon failures; only the statically bound alternative proxy services are used. This
limitation was later addressed in Robust-BPEL2 [65], which replaces the static proxy with
a dynamic proxy service to locate equivalent services dynamically at runtime. Secondly, even if
it is possible to dynamically discover alternative services, there is no means to prevent invoking
the failed service again. Thirdly, service invocation is changed only upon a monitored failure;
it is not possible to provide a ‘choice’ based on several candidate services.
These limitations of Robust-BPEL2 were later addressed by TRAP/BPEL [140]. With its
enhanced generic proxy technique, instead of mere fault handling, the best service out of a set of
candidate services can be dynamically selected. In order to achieve this, TRAP/BPEL uses
transparent shaping [154, 155] to augment BPEL processes with adaptive behaviour, as shown
in Figure 3-6. Transparent shaping is a technique where an existing application is augmented
with hooks to intercept the control flow when an adaptation is necessary and redirect it to a
special code segment that handles the adaptation. This is a better technique as it does not
require changing the existing behaviour of the WS-BPEL code; the adaptation code is
transparent to the existing WS-BPEL code. The generic proxy represents the adaptation code
and keeps all the recovery policies that provide the self-healing behaviour. The generic proxy,
developed once, can be used to provide adaptive behaviour to many WS-BPEL processes. If
there is no fault, the WS-BPEL process follows its originally specified behaviour. If there is a
fault, the generic proxy takes a remedial action based on the specified recovery policies.
Figure 3-6. The Generic Proxy Technique in TRAP/BPEL (Cf. [140])
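The generic-proxy idea can be sketched as follows. This is a minimal illustration in the spirit of TRAP/BPEL, not its actual implementation; the class, the fault type and the shipper services are all hypothetical, and the recovery policy is reduced to a list of equivalent candidates per service.

```python
# A sketch of a generic proxy holding recovery policies: on success the
# call passes through transparently; on a fault the proxy consults its
# policy and invokes an equivalent candidate instead of propagating the
# failure to the process.

class ServiceFault(Exception):
    """Stands in for a partner service invocation failure."""

class GenericProxy:
    def __init__(self, recovery_policies):
        # recovery_policies: service name -> list of equivalent candidates
        self.recovery_policies = recovery_policies

    def invoke(self, name, service, *args):
        try:
            return service(*args)                  # normal, transparent path
        except ServiceFault:
            for candidate in self.recovery_policies.get(name, []):
                try:
                    return candidate(*args)        # remedial action
                except ServiceFault:
                    continue                       # try the next candidate
            raise                                  # no recovery possible

def broken_shipper(order):
    raise ServiceFault("shipper down")

def backup_shipper(order):
    return f"shipped {order} via backup"

proxy = GenericProxy({"shipper": [backup_shipper]})
print(proxy.invoke("shipper", broken_shipper, "order-42"))
# shipped order-42 via backup
```

Because the proxy is written once and only the policy table varies, the same adaptation code can serve many processes, which is the reuse argument made for the generic proxy above.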
The eFlow [156] approach allows changes to service endpoints. It introduces concepts such as
dynamic service discovery, multiservice nodes and generic nodes to achieve runtime
adaptability. Dynamic service discovery avoids having static and rigid endpoints; instead,
services are dynamically selected based on service selection rules. These rules take decisions
based on organisational policies and customer preferences mapped as parameters. Multiservice
nodes allow the activation of the same service node a dynamically determined number of times,
whilst generic nodes allow the dynamic selection of service nodes. In a sense, these multiservice
nodes and generic nodes act as proxies allowing late binding. Another important characteristic
of the approach is the ability to insert consistency and authorisation rules that verify whether an
adaptation is authorised and preserves the consistency of the composition. This is a vital
requirement when the approach allows process instance specific changes. While eFlow
improves the adaptability of service orchestration, and most importantly even at the process
instance level, the adaptations are again limited to selecting and associating e-services with a
predefined flow or process.
A similar approach is taken by Canfora et al. [157] to achieve QoS-aware service
composition. A service composition is specified as an orchestration of services. However, in
contrast to TRAP/BPEL, the usage of the proxy is somewhat different. Here, a proxy service
acts as a midpoint that forwards service requests to one of a set of candidate services. This is
different from the generic proxy, where self-healing capabilities are provided to many
processes as per the transparent shaping technique. Here, a proxy acts on behalf of a single
abstract service of a BPEL process. The proxy maintains a list of candidates and periodically
refreshes the list by adding new ones and discarding the unavailable ones. When a service
invocation is necessary, the proxy is bound to the best concrete service selected according to the
most up-to-date QoS parameters [158]. The proxy provides the ability to dynamically adapt the
BPEL process according to changed QoS values without redefining the process.
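The QoS-driven binding just described can be sketched as follows. The QoS attributes, weights and scoring function are illustrative, not those of Canfora et al.: the proxy stands in for one abstract service, keeps a refreshable candidate list, and binds each request to the candidate scoring best on current QoS values.

```python
# A sketch of a QoS-aware proxy for a single abstract service: the
# candidate list is refreshed periodically, and the best candidate under
# a weighted QoS score is selected at invocation time.

def qos_score(candidate, weights):
    # Higher availability is better; lower response time is better.
    return (weights["availability"] * candidate["availability"]
            - weights["response_time"] * candidate["response_time"])

class QosProxy:
    def __init__(self, weights):
        self.weights = weights
        self.candidates = []

    def refresh(self, candidates):
        # Periodic refresh: take on new candidates, drop unavailable ones.
        self.candidates = [c for c in candidates if c["availability"] > 0]

    def select(self):
        # Bind to the best concrete service under current QoS values.
        return max(self.candidates, key=lambda c: qos_score(c, self.weights))

proxy = QosProxy({"availability": 1.0, "response_time": 0.01})
proxy.refresh([
    {"name": "fast_unstable", "response_time": 20, "availability": 0.90},
    {"name": "slow_reliable", "response_time": 80, "availability": 0.99},
    {"name": "down",          "response_time": 10, "availability": 0.0},
])
print(proxy.select()["name"])   # fast_unstable
```

A later refresh with changed QoS values would change the binding without any change to the process definition, which is the adaptation claim made above.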
Overall, proxy-based approaches are mostly used to provide self-healing capabilities upon
service failures or runtime optimisations through service discovery and selection. Most
importantly, the proxies decouple the concrete services from the core orchestration, allowing
late binding. However, the adaptations in a service orchestration should not be limited to late
binding or re-binding of services. Instead, adaptability needs to be addressed in a broader
context, i.e., the core orchestration also needs to change in order to optimise the activity
dependencies to suit the ever-changing runtime environment. Efforts to change the core service
orchestration are analysed in the following sections.
3.4.2 Dynamic Explicit Changes
Dynamically applying changes to processes is a technique used to change the core service
orchestration. In such cases, the processes are modelled using a standard process modelling
language and are enacted by a standard process engine (which usually lacks inbuilt support for
adaptability). A special module then monitors the enactment environment and performs
adaptations on the enacted process instances to adjust the running processes to the changed
business requirements. Such an approach is suitable when a business already employs a process
modelling approach and later attempts to improve its adaptability, usually transparently to the
existing enactment environment. In other words, the adaptability is built on top rather than
built within.
One of the earliest example approaches supporting such dynamic changes in processes was
proposed by Weske et al. [159] based on the WASA framework [160]. In the WASA
framework, workflows stored in a database are enacted via a workflow server (enactment
engine) [161, 162]. The above work provides adaptation capabilities by performing several
operations on the enacted workflow instances, or more precisely on the activity models. The
activity models are either atomic or more complex models of activities having their own
internal activity structures. Operations such as addActivity, delActivity, addEdge and delEdge
are used to dynamically adapt these activity models. Similar efforts in adapting running
processes can be found in other approaches such as Chautauqua [163] and WIDE [164, 165].
For example, WIDE allows cancelling tasks and jumping back and forth in a workflow instance
upon triggered exceptions or events [164]. Chautauqua [163] allows dynamic structural changes
to the enacted processes, or more precisely to Information Control Networks (ICN) [166], via
the special operation edit. Here the fundamental characteristic is ‘monitor and adapt’.
Nonetheless, one limitation of these approaches is that they do not provide enough control over
changes. As such, ad-hoc changes can lead to erroneous situations, raising the need for better
control when performing adaptations.
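Such change operations can be sketched on a toy instance model. The operation names follow those cited above for WASA; the graph representation and the order-handling activities are hypothetical. Note that nothing here checks whether an edit leaves the instance sound, which illustrates the lack of control over changes just discussed.

```python
# A minimal sketch of dynamic change operations on a running workflow
# instance, modelled as activities plus control edges. The operations
# edit the structure in place, with no soundness checks.

class WorkflowInstance:
    def __init__(self, activities, edges):
        self.activities = set(activities)
        self.edges = set(edges)          # (source, target) control edges

    def add_activity(self, a):
        self.activities.add(a)

    def del_activity(self, a):
        self.activities.discard(a)
        self.edges = {(s, t) for (s, t) in self.edges if a not in (s, t)}

    def add_edge(self, s, t):
        self.edges.add((s, t))

    def del_edge(self, s, t):
        self.edges.discard((s, t))

wf = WorkflowInstance({"receive", "check", "ship"},
                      {("receive", "check"), ("check", "ship")})

# Ad-hoc change: insert an approval step between check and ship.
wf.add_activity("approve")
wf.del_edge("check", "ship")
wf.add_edge("check", "approve")
wf.add_edge("approve", "ship")
```

An equally easy but erroneous edit (say, deleting "check" while it is running) would be accepted just as silently, which is why later approaches add correctness checks around such operations.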
While adhering to a similar principle, ADEPTflex [167] pays special attention to controlling
the changes. ADEPTflex supports dynamic changes to the workflows, or ADEPT models [168].
Tasks can be added to or deleted from a workflow instance. Certain tasks may be skipped to
speed up the execution of a workflow instance. Tasks that were modelled to be executed in
parallel can be serialised (or vice versa) via another set of operations. A set of these operations
is included in a change process. These changes can be applied on a permanent or temporary
basis: a temporary change may be undone during the change process, whereas a permanent
change cannot. Later, the adaptation capabilities were improved by employing engineering
techniques such as plug & play of process segments in ADEPT2 [169], a successor of
ADEPTflex. As shown in Figure 3-7, a software engineer can select application functions
from a repository and insert them into the process template via drag and drop. One of the
important features of the ADEPTflex approach is its ability to carry out a minimal but complete
set of changes ensuring the correctness and consistency of running processes. The approach
introduces several rules or properties to ensure correctness and consistency upon changes to
process instances, and rejects the changes if these properties are violated. Note that these
correctness and consistency rules are defined in terms of the states of the tasks and processes.
The properties ensure that the changes do not lead a process to an incorrect or inconsistent state.
For example, the framework does not allow the delete operation on tasks that are RUNNING,
COMPLETED, FAILED, or SKIPPED. However, in business environments, such general
state-based control to ensure consistency and correctness may not be sufficient. There are
domain-specific constraints on modifications, for example, “A payment must be carried out
after a confirmation of a car repair”. It is possible that a modification that ends up in another
consistent state of the process instance might violate a specific business requirement, as the
framework lacks support for such domain-specific consistency checks.
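The limitation argued above can be made concrete with a sketch. The state names follow the text; the check function and the car-repair instance are hypothetical, not the ADEPTflex API. A delete that passes the state-based rules can still strip out the confirmation that a domain rule requires before payment.

```python
# A sketch of state-based change control: delete is rejected for tasks
# that have left the waiting state, mirroring the rule cited above. The
# check knows nothing about domain-specific constraints.

UNDELETABLE = {"RUNNING", "COMPLETED", "FAILED", "SKIPPED"}

def delete_task(instance, task):
    """Apply the delete only if the state-based rules allow it."""
    if instance[task] in UNDELETABLE:
        return False                    # change rejected by the framework
    del instance[task]
    return True

instance = {"confirm_repair": "WAITING", "pay": "WAITING"}

# Deleting the confirmation passes the state-based check...
assert delete_task(instance, "confirm_repair")
# ...yet the instance now pays without any confirmation: a domain rule
# ("payment only after confirmation") is violated in a state-consistent
# process, exactly the gap described above.
assert "pay" in instance and "confirm_repair" not in instance
```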
A similar but more service orchestration specific approach was proposed by Fang et al. [57] to
adapt enacted BPEL processes. Similar to ADEPTflex [167], activity states are considered to
realise adaptation decisions in this approach. However, one improvement is that the approach
proposes a set of change patterns to perform these adaptations, or navigation changes, which is
its main focus. This is in contrast to the operation-based adaptations proposed earlier. These
change patterns, i.e., Skip, Activate, Repeat, Retry and Stop, determine how an activity should
be adapted or navigated. Each pattern is described in terms of the change to produce, the action
to carry out, its preconditions and the consequences of adaptation. These aspects are grounded
upon ‘activity and link-based’ state transition diagrams that specify whether an activity is
RUNNING, INACTIVE, FINISHED, SKIPPED, FAILED, TERMINATED or EXPIRED, and
whether a link is UNDETERMINED or DETERMINED (i.e., TRUE or FALSE). These statuses
transition according to the adaptation patterns. Furthermore, they help to identify the suitable
statuses for performing the adaptations as specified by the patterns. For example, an activity X
may be skipped only if it is in the INACTIVE state; the consequence of the Skip action is that
the activity is then in the SKIPPED state.
Figure 3-7. Composing Segments Using Plug-and-Play Technique (Cf. [169])
Automating dynamic modifications is essential in business environments where changes are
frequent. SML4BPEL (Schema Modification Language for BPEL) [170] is a language proposed
to aid in dynamically modifying BPEL process definitions. The approach allows writing down
process modification policies as rules; the SML4BPEL engine dynamically modifies the
schemas according to the parsed rules. This allows automating process modifications and
instance migrations according to pre-planned modification policies. However, this approach has
two main limitations. Firstly, the adaptations need to be known at design time, which is not
always the case. Secondly, it does not support adaptations specific to a particular process
instance.
The limitations of the above approach were addressed by the approach proposed by Yonglin
et al. [141], which pays special attention to representing the process context as part of
adaptation decision making. Rules reason about the current context, represented by facts, and
adaptations are then carried out on running process instances. The context is represented at two
main levels, i.e., business and computing (IT). The business context captures the current
business environment, whereas the computing context captures the versions of schemas, the
status of process instances and the resources allocated.
Overall, these approaches supporting dynamic explicit changes are used as a remedy for the
inherent limitations of the process modelling and enactment standards and the enactment
environments. The ability to continuously optimise running processes to suit changing business
environments has made a major impact on improving business process management systems
(BPMS). Furthermore, the separation of the execution concerns from the adaptation concerns is
advantageous from an architectural point of view. Nonetheless, the process definitions were
modelled with little regard for adaptability, and therefore the adaptability allowed is limited.
Adaptability is seen as something that could be introduced later via improvements to the
BPMS; the corresponding systems to perform the adaptations were proposed as an additional or
special layer on top of the existing process models. This bears some similarity to a
recovery-based solution for improving the adaptability of existing standards. Yet the limitations
remain inherent in the way the processes are modelled, or in the lack of flexibility in the
process modelling concepts of the available standards. Consequently, more radical and flexible
orchestration solutions are needed that provide better conceptual support for adaptability, rather
than completely handing the adaptability concerns to the design of the BPMS.
3.4.3 Business Rules Integration
One of the issues with the popular orchestration standards is the imperative or procedural nature
of process modelling. Businesses operating in more dynamic environments challenge the
imperative and procedural process modelling paradigm, as it fails to successfully capture the
unpredictable and complex nature of businesses. Intrinsically, the business policies (rules) that
are subject to frequent changes become intermingled with the process logic, leading to an
unmanageable process/rule spaghetti [106].
This consideration leads to more declarative approaches such as the use of business rules
[171] in modelling business processes. Business rule engines work hand-in-glove with
process enactment engines to provide the required flexibility for runtime modifications. The
business policies for achieving business goals are specified as business rules [172]. A widely
accepted definition of a business rule is provided by the GUIDE group [173]: “a business rule
is a statement that defines or constrains some aspect of the business. It is intended to assert
business structure and to control or influence the behaviour of the business”. Rather than
defining these business assertions as part of business processes, they are defined as separate
artifacts using more expressive rule languages. The required agility of dynamic changes to the
business logic is achieved through dynamic changes to these rules, as enabled by advancements
in rule engines and languages such as Jess [174], JRules [175] and Drools [176].
Traditionally, these rule engines are used to reason about software behaviour based on knowledge supplied as declarative rules. The policies are specified as Event-Condition-Action (ECA) rules [177] or IF-THEN (Condition-Action or CA) rules [176, 178] without requiring much knowledge of programming languages. Therefore, from the language comprehension point of view, business rules are much easier for business users to understand compared to traditional programming languages [179]. This is in fact a major step forward, as business users now have the ability to specify and continuously change the business policies in software systems themselves, using a more natural and business-like language. In other words, reflecting the business know-how is made more direct and agile. With the support of Business Rules Management Systems, the business policies are not hard-wired into the application logic, as is the case with traditional programming languages.
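The separation of policy from process logic described above can be sketched in a few lines. The following is a minimal illustration of the Condition-Action idea, not the API of any particular rule engine; the rule and fact names are invented for the example.

```python
def make_rule(condition, action):
    """Bundle a condition and an action into a simple Condition-Action rule."""
    return {"condition": condition, "action": action}

def evaluate(rules, facts):
    """Fire the action of every rule whose condition holds over the facts."""
    results = []
    for rule in rules:
        if rule["condition"](facts):
            results.append(rule["action"](facts))
    return results

# The policy lives outside the process logic and can be replaced at runtime
# without touching the code that enacts the process.
discount_rule = make_rule(
    condition=lambda f: f["order_total"] > 1000,
    action=lambda f: ("apply_discount", f["order_total"] * 0.1),
)

print(evaluate([discount_rule], {"order_total": 1500}))
```

Swapping `discount_rule` for a different rule object changes the business policy without redeploying the process, which is the agility argument made above.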
In order to exploit the above advantages of business rules, Rosenberg et al. [179] propose integrating business rules with a service orchestration. A service orchestration specified in the BPEL language is integrated with business rules, which are categorised into Integrity rules, Derivation rules and Reaction rules. Such a categorisation distinguishes the types of rules that can be used on a service orchestration. Integrity rules check data conformance, while Derivation rules derive new information from the available knowledge via inference and mathematical calculations. Reaction rules are ECA rules that perform actions in response to an event under a certain condition.
These rules are integrated with the orchestration using two interceptors, called before and after. The before interceptor allows rules to be executed before a BPEL activity, while the after interceptor allows rules to be executed after a BPEL activity. These interceptions are specified in a mapping document, which maps process activities to rules. Such integration exploits the aforementioned advantages of business rules, including the ease of understanding and adaptation. Another important architectural contribution of this approach is the way in which the business rules engine is integrated with the orchestration engine. The integration is loosely coupled with the support of the Business Rules Broker [180]. The Business Rules Broker, which is a broker service, makes the Business Rules Engine independent from the orchestration via a unified web service interface. However, the points of adaptation need to be foreseen and cannot be changed at runtime. Furthermore, there is limited support for controlling the changes and analysing their impacts on business collaborations.
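The before/after interception idea can be sketched schematically as follows. This is a hedged illustration of the concept only; the activity names, mapping structure and rule functions are invented and do not reflect the actual Business Rules Broker interface.

```python
def run_process(activities, mapping, context):
    """Execute activities in order, firing mapped rules before/after each one."""
    log = []
    for name, activity in activities:
        for rule in mapping.get((name, "before"), []):   # before interceptor
            log.append(rule(context))
        activity(context)                                # the activity itself
        for rule in mapping.get((name, "after"), []):    # after interceptor
            log.append(rule(context))
    return log

# A one-activity process and a mapping document associating rules with it.
activities = [("ship", lambda ctx: ctx.update(shipped=True))]
mapping = {
    ("ship", "before"): [lambda ctx: "integrity: address present"
                         if ctx.get("address") else "integrity: FAIL"],
    ("ship", "after"):  [lambda ctx: "reaction: notify customer"],
}

print(run_process(activities, mapping, {"address": "21 High St"}))
```

The mapping is data, so new rules can be attached to an activity without editing the process itself; however, as noted above, only the pre-declared interception points (before/after each activity) are available.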
Several patterns on how business rules should be integrated with business processes have been introduced by Graml et al. [181]. For example, Figure 3-8 shows how the decision logic is extracted from the main process and specified in terms of more agile business rules. The rules can be complex, but the outcome is a simple Boolean value (yes/no) that can be used to drive
the process. Similar to [115], this approach calls a rule service to evaluate rules before and/or after an activity of a process. From an architectural point of view, both share the same concepts of rule integration. However, this approach makes several improvements compared to [179]. Firstly, the approach does not limit its adaptation capabilities to mere rule evaluation. The introduced patterns specify several actions that can be taken based on the rule evaluation, including more drastic changes such as the insertion of sub-processes into the process flow. Secondly, the approach allows rules to take decisions based on a shared process context, which is important when multiple rules are collectively used to make decisions. Thirdly, the post-processing of the BPEL code is done automatically via XSLT [182, 183] transformations, which reduces the engineering effort.
Figure 3-8. Decision Logic Extraction from Process to Rule (Cf. [181] )
Rule-based BPMN (rBPMN) [184] is another rule and process integration effort. The rBPMN approach integrates BPMN processes with R2ML (REWERSE Rule Markup Language) rules [185]. Adaptability is provided by dynamically changing rules. R2ML rules are used as gateways of BPMN and connected to other parts of a process flow. The rBPMN approach claims to support all the change patterns proposed in [181]. However, rBPMN is more advanced compared to [181], as the rule integration is done at the meta-model level, leading to the definition of a rule-based process modelling language. The approach uses reaction rules that are capable of generating complete service descriptions, which is not usually supported by approaches that use the rule integration technique. The integrity rules are used to ensure the integrity of the overall control flow. However, the replacement of process fragments is limited to pre-identified places of a complete process flow.
This way of integrating rules with an orchestration engine can be problematic in terms of understanding the overall process flow. The adaptations carried out by the rules can lead to many complex flows that may harm the integrity of the process. This is especially the case in environments such as service brokering and SaaS, where the requirements of many tenants/consumers need to be satisfied within the same composite service instance. In such environments it is important to facilitate capturing commonalities and allowing variations
systematically. Furthermore, special attention should be paid to controlling the adaptations so that the integrity of the overall service orchestration is protected.
Overall, while business rules languages provide a powerful way to express business policies in a business-user-friendly manner, even in natural languages [122], careful attention needs to be paid to ‘how’ these rules should be integrated with a process flow. It should be noted that flexibility is not an inherent characteristic of business rules development. Boyer et al. state that “the agility is not given with business rules development, (rather) we need to engineer it” [186]. The same applies to the integration of business rules with business process modelling and enactment. The separation of adaptation concerns should be done in a manner that preserves the overall understanding of the process. Furthermore, providing modularity and improving reuse are also important.
3.4.4 Aspect Orientation
With Aspect Oriented Programming (AOP) [123], cross-cutting concerns can be well modularised, improving runtime agility and reuse [187]. Additional behaviours can be introduced by separating these cross-cutting concerns. The handling of security and logging requirements of a software program is a typical example of such a cross-cutting concern [188, 189]. Consequently, the application of AOP in service orchestration has attracted substantial attention from the research community [66, 187, 190-194]. A Join Point Model (JPM) in AOP is based on three main concepts: advices, join points and pointcuts [74, 195]. An advice is an additional or separate behaviour for the core program; a join point specifies where in the core program an advice can be applied; and a pointcut determines whether a given join point matches. Accordingly, in service orchestration approaches, additional behaviours (advices) are woven into a core process to alter its behaviour without damaging modularity and understandability [193].
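The three JPM concepts can be illustrated with plain Python rather than a real AOP framework. In this sketch, join points are the activity executions of a process, a pointcut is a predicate that selects join points, and the advice is the extra behaviour woven in; all names are invented for illustration.

```python
def weave(process, pointcut, advice):
    """Return a new activity list with the advice applied where the pointcut matches."""
    woven = []
    for name, activity in process:
        if pointcut(name):                                # does this join point match?
            woven.append((name, advice(name, activity)))  # weave in the advice
        else:
            woven.append((name, activity))
    return woven

trace = []
process = [("pay", lambda: trace.append("pay")),
           ("ship", lambda: trace.append("ship"))]

def logging_advice(name, activity):
    """Before-advice: record the activity name, then run the original activity."""
    def wrapped():
        trace.append(f"log:{name}")
        activity()
    return wrapped

# Weave the logging advice into the 'ship' join point only, then enact.
for _, act in weave(process, pointcut=lambda n: n == "ship", advice=logging_advice):
    act()

print(trace)   # ['pay', 'log:ship', 'ship']
```

The core process stays unchanged; the crosscutting behaviour is modularised in the advice, which is precisely the separation the surveyed approaches exploit.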
The aspect weaving technique is used in AO4BPEL [68] to integrate business rules with business processes. In this approach the complete service composition is broken into two main parts, a core process and rules, as shown in Figure 3-9. First, a core process, which defines the flow, represents the fixed part. Second, a set of well-modularised business rules that can exist and evolve independently represents the dynamic parts. A service composition is formed by weaving the business rules into the core business process (BPEL) via pointcuts as per AOP principles. In AO4BPEL, each BPEL activity is considered a potential join point. This approach is motivated by the argument that the knowledge represented by rules tends to change more often than the core process [68]. This argument may be valid for businesses that can predict the core process and the pointcuts in advance. However, in certain situations, the core
process can also change in an unpredictable manner, and there can also be changes to the pointcuts. Furthermore, the way the aspects are woven into BPEL code can cause problems for a business user who might not be familiar with the low-level BPEL code.
Figure 3-9. Hybrid Composition of Business Rules and BPEL (Cf. [190])
This limitation has been addressed by MoDAR [149] using aspects along with Model-Driven Development. The MoDAR approach distinguishes between a volatile part (variable model) and a fixed part (base model) of a business process. Again, the volatile part is represented as rules while the fixed part is represented by a business process. At the composition level, the rules are woven into the core process via a weave model that defines the location (before, around or after an activity) where the rules should be woven. The key advantage of this approach, compared to AO4BPEL, is its model-driven approach to generating the lower-level executable code, i.e., Drools rules and BPEL scripts. It follows that the aspect weaving is done at a higher level, without being limited to a single execution platform. Such higher-level aspect weaving benefits business users, who are not familiar with lower-level (XML-based executable) languages such as BPEL. However, a major drawback is that if there is a change requirement for the pointcuts, the process needs to be remodelled, and the code needs to be regenerated and redeployed. Therefore, similar to AO4BPEL, this approach provides adaptability only for application scenarios where it is possible to identify a base model and a variable model in advance.
VxBPEL [67] uses aspect weaving at the modelling language level by introducing a few extensions to the BPEL language constructs. The approach facilitates different variations or configurations of a BPEL process. Variation points, introduced in [196], are a key concept used to express the variability in software product lines. One of the key differences of
VxBPEL compared to the rest is that the variability information is located right inside the business process. The advantages are clear visibility of variation points and efficient parsing. Although this requires modifying the BPEL language and the enactment engines, it is a helpful capability for capturing the variations and commonalities in service orchestration. However, the points of variation still need to be pre-defined. A configuration file determines the current or most suitable configuration for given variation points. Furthermore, no attention is paid to analysing the impacts of making choices over different configurations or variations, or to protecting the integrity of the composition.
Padus [193] also uses AOP concepts to dynamically adapt the service orchestration. One improvement of this approach is that the pointcuts are less fragile to changes in the core business process. The reason is the use of higher-level pointcuts, in contrast to lower-level pointcuts such as the XPath expressions used in [68] and [191]. Karastoyanova et al. [66] also use AOP concepts. One of the key advantages of their approach is that it is not limited to BPEL and hence, in contrast to [67], requires no modification to the BPEL language or the BPEL engine. However, again, the requirement of pre-identifying the core and volatile parts limits the amount of adaptability catered for by the runtime environment.
In addition to the process definition level, aspect orientation has been used at the service orchestration engine level as well [191]. Advices are written in Java and woven into processes via XPath pointcuts. Aspects can be plugged in or unplugged at runtime. However, the aspects do not dynamically evolve as in AO4BPEL and MoDAR. In addition, the engine is required to check the aspect registry constantly during execution.
VieDAME [192], which is an adaptation extension to the ActiveBPEL engine [197], also uses AOP concepts in its adaptation layer. The intention of using AOP principles in VieDAME is in fact to achieve a clear separation between the engine (ActiveBPEL) and the adaptation extension (VieDAME). Therefore, they successfully avoid tight coupling between the adaptation layer and the engine. Nonetheless, the adaptation in this approach is limited to service replacement, i.e., adaptations of partner services. The approach cannot support adaptations of the control-flow. Similarly, AdaptiveBPEL [194] uses aspect weaving at the middleware level to enforce negotiated collaboration policies [198, 199]. In both approaches the objective is to modularise the crosscutting concerns at the middleware (system) level rather than at the process definition (language) level.
Overall, aspect orientation has brought many advantages to business process modelling. It avoids the code scattering and tangling [200, 201] caused by crosscutting concerns, which can otherwise increase the complexity of a service orchestration. This is especially true when aspect orientation is carried out at the process definition or schema level as in [67, 68]. However,
the flexibility is still limited to the points where the pointcuts are defined. Irrespective of the level at which the aspects are woven, this remains a major limitation in achieving adaptability of service orchestrations at runtime. In particular, the requirement of identifying the fixed and volatile parts of a business process may not be feasible in environments where the collaborators play a major role. In reflecting tenants’ unpredictable business requirements, for example, it may be impractical to distinguish between a fixed and a volatile part of the business process.
Nevertheless, with respect to the requirement of capturing commonalities and allowing variations, the aspect orientation technique has a major advantage. The commonalities are captured in a core process and the variations are captured via the advices. For example, the variation points in VxBPEL [67] and the rules in AO4BPEL [68] capture variations, while the core BPEL process captures the commonality. The same goes for MoDAR [149], where the volatile part captures the variations while the fixed part captures the commonality. However, again, the commonalities and variations captured via AOP are limited by the way the pointcuts are specified. There is always a requirement to foresee these pointcuts at design time. This is a limitation common to the aspect orientation technique in general.
With respect to the requirement of achieving controlled changes, aspect orientation achieves control by pre-specifying the points where adaptation can happen. The join point model determines what can be changed and what cannot.
In addition, the inherent capability of AOP for achieving separation of concerns has contributed to better managing the complexity that could be introduced during a runtime process change. The complexity of crosscutting concerns is handled in advices, e.g., business rules in AO4BPEL [68] and MoDAR [149]. This helps maintain the core process without increasing its complexity amidst modifications at runtime.
3.4.5 Template Customisation
Template customisation is another technique used to address the adaptability concerns of service orchestration. In template customisation, a master process is customised using available templates to dynamically create a business process. This category of adaptation approaches has its roots in Generative Programming [75, 202]. A template library or database is used by a controller to select the templates to generate a process definition that is optimised for the business requirement. In this section we will look at how the template customisation technique has been used to achieve adaptability in service orchestration.
A typical approach to template customisation is proposed by Geebelen et al. [203], who apply concepts from dynamic web pages [204] to web service orchestration. The authors use the Model-View-Controller (MVC) pattern to achieve adaptable BPEL processes. Templates that
consist of modules, i.e., clusters of BPEL activities, are dynamically integrated to generate a process according to a pre-defined master process, as shown in Figure 3-10. This selection of modules for the generation of a new process is carried out based on several parameters, such as the response times of services. The use of techniques such as meta-level master process construction and an interpreted language (Ruby) has made the approach more adaptive. The modules and the master process capture the commonalities, while the way the modules are used in the master process can define variations. Nonetheless, the approach has two major limitations. First, packaging and deployment are required each time a change is realised. Second, the approach cannot be used to achieve runtime process instance level adaptation.
Figure 3-10. A Template Controller Generates an Executable Process (Cf. [203])
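The master-process idea above can be sketched schematically: a template with named placeholders is resolved against a module library to generate an executable process definition. The module names and the selection criterion are invented for illustration; real approaches select modules on parameters such as service response time.

```python
# A master process template: fixed steps plus named placeholders in braces.
MASTER = ["receive_order", "{payment}", "{delivery}", "close_order"]

# A module library offering alternative activity clusters per placeholder.
MODULE_LIBRARY = {
    "payment":  {"fast": ["card_payment"], "cheap": ["invoice_payment"]},
    "delivery": {"fast": ["courier"],      "cheap": ["postal_service"]},
}

def generate_process(master, library, preference):
    """Resolve each placeholder against the library for the given preference."""
    process = []
    for step in master:
        if step.startswith("{") and step.endswith("}"):
            process.extend(library[step[1:-1]][preference])  # fill the placeholder
        else:
            process.append(step)                             # keep the fixed step
    return process

print(generate_process(MASTER, MODULE_LIBRARY, "fast"))
```

Note that, as with the surveyed approaches, the variation is bounded by the placeholders: a new kind of variation requires editing the master template and regenerating the process.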
These limitations are also evident in the use of reference processes [205]. The reference process approach has been used as an extension to the WebSphere Integration Developer [206] to allow high-level business process design in web service compositions. Here, a reference process is basically a designed process that captures and maintains a set of best practices specific to the business domain. In other words, the reference process acts as a template. Such a reference process is subjected to a series of customisations until the customer requirements are satisfied. The customisations involve adding, removing and modifying process elements, including activities, control and data-flow connectors. The approach defines a formal model for customisation with detailed rules that are independent of the underlying process modelling language or notation. In addition, the approach proposes an annotation language that can describe the possible customisations and constraints; the reference process is annotated with these constraints. In this way, the approach also captures the commonalities via the reference process, and newly generated processes can be seen as variations. However, this way of capturing commonalities and allowing variations leads to code duplication. When an upgrade needs to be made, all the newly created customisations need to be upgraded separately, which can lead to inconsistencies. The approach also provides limited support for controlling the adaptations. The annotated constraints prevent
invalid adaptations to a certain extent. Nonetheless, there is no direct support for analysing change impacts and behavioural correctness.
Mietzner et al. [42] propose customising a process template into a customer-specific deployable SaaS [45] application solution. The SaaS application domain requires highly customer/tenant-specific solutions, so different variations in customer requirements need to be supported. To do so, the approach uses variability points, i.e., points of an abstract process template that are customisable. These variability points are filled with concrete values during the customisation process. At the end of the customisation process, a highly optimised and customer-specific solution, e.g., a WS-BPEL process model, is generated. Note that the approach is actually independent of WS-BPEL and can be applied to other process languages. In other work, the same authors extend the SCA [207] architecture using variability descriptors to package and deploy multi-tenant-aware configurable composite SaaS applications [92]. In general, the core principle is that the process template captures the commonalities while the variability descriptors and the customisation process identify and capture the required variations. As such, the potential alternatives or changes are treated as first-class entities. A collection of such variations addressing a particular business need forms a configuration. From the business process adaptability point of view, such a configuration can lead to an adapted process [208]. However, the approach assumes that such variations and customisations can be predicted at design time, which is problematic because it is not possible to predict all the points in a process template where variations will be required by future tenants.
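The variability-point idea can be sketched as follows. This is an invented illustration in the spirit of the variability descriptors discussed above, not the authors' actual notation: a template declares which steps are variable and which alternatives are allowed, and a tenant's bindings resolve them into a tenant-specific configuration.

```python
# A process template with one declared variability point and its allowed values.
TEMPLATE = {
    "steps": ["check_stock", "VP:approval", "dispatch"],
    "variability_points": {"VP:approval": ["auto_approve", "manual_approve"]},
}

def customise(template, bindings):
    """Replace each variability point with the tenant's chosen alternative."""
    resolved = []
    for step in template["steps"]:
        if step in template["variability_points"]:
            choice = bindings[step]
            if choice not in template["variability_points"][step]:
                raise ValueError(f"{choice} is not an allowed value for {step}")
            resolved.append(choice)      # bind the variability point
        else:
            resolved.append(step)        # common step shared by all tenants
    return resolved

print(customise(TEMPLATE, {"VP:approval": "manual_approve"}))
```

The allowed-values list is what makes the customisation controlled; equally, it is why a variation not foreseen in `variability_points` cannot be expressed without changing the template, which is the limitation noted above.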
PAWS [209] is a framework developed to improve the adaptability of web service processes in terms of service retrieval, optimisation and supervision. The framework allows a process designer to first design the main business process as a BPEL process that describes the activity dependencies. This process captures the main business requirements without considering the actual services that perform the tasks. This initial design can thus be seen as a process template. The tasks of the designed process definition are then annotated with constraints that define domain-dependent non-functional properties for each task, for example, the allowed time or cost of performing each task. The annotated business process is then enacted, leaving the problem of service selection to runtime. The framework has its own advanced-service-retrieval module that selects the services based on the annotated constraints. As such, service selection becomes an optimisation problem that considers several parameters such as user preferences and the runtime context. While PAWS provides better runtime adaptability via exception handling and several self-healing features upon failures of task execution and service selection, it provides only limited support for adapting the core process. The core process always needs to be designed and annotated at design time.
This limitation can also be seen in Discorso [210], which can be seen as a successor to PAWS [209]. Discorso is also a framework for designing and enacting dynamic and flexible business processes. In contrast to PAWS, Discorso provides richer support and a better user experience in service retrieval, optimisation and supervision. Discorso couples QoS-based service selection with the automated generation of user interfaces and runtime supervision, which were not available in PAWS. Discorso also supports both human and automated activities in service orchestration. However, Discorso follows in the footsteps of PAWS: it annotates a process template and uses late binding techniques to facilitate runtime process optimisation and healing. Hence, runtime adaptations are still limited to service selection and error recovery. The core process represented by the annotated template cannot be changed at runtime.
Another template-based approach has recently been proposed by Zan et al. [211]. In this approach the placeholders of a process template are replaced by process fragments. Here, a process fragment is a small process that can be integrated into a larger process by filling a placeholder of a process template. Multiple fragments of the same fragment type can be developed as possible candidates to replace a placeholder, and fragments are kept in a fragment library. Declarative fragment constraints [212] preserve the relationships among fragments, i.e., how the fragments should be used in the placeholders. Adaptation policies, which are extensions to WS-Policy [22], define how the fragments should be selected. Depending on the business-specific requirements, fragments are selected based on adaptation policies and used as allowed by the fragment constraints. Compared to the previously presented approaches, this approach [211] provides greater flexibility, as the execution order of fragments is not predefined. The adaptations are carried out by adding/removing fragments and modifying the relationships among these fragments. Another significant characteristic is the clear separation of the adaptation logic from the business logic. Along with the modularity provided by fragments, such separation of concerns assists in managing the complexity. Furthermore, there is support for capturing commonalities and allowing variations: the fragments can capture the commonalities, and multiple variants of the same fragment can coexist. However, even a slight variation can lead to duplication, because a completely new fragment needs to be designed to model the variation. In addition to this limitation, the approach has the following limitations in terms of adaptation. First, the placeholders are pre-defined and cannot be changed; only the fragments that fill the placeholders can be changed. In fact, this is a common weakness of any template-based approach. Second, the approach does not support process instance adaptation. Once the process is enacted there is no support to change the process instance.
Overall, the template customisation approaches commonly facilitate adaptability by delaying the formation of the eventual process to runtime. For example, to generate a process that fits customer needs, the MVC-based approach proposed by Geebelen et al. [203] integrates BPEL clusters according to a master process, and Mietzner et al. [42] use a customisation procedure to address different variation requirements. This allows delivering a highly customised process to a particular business environment. The templates provide agility by allowing the specification of ‘half-done’ processes.
The requirement of capturing commonalities and allowing variations is also supported to a certain extent. The templates capture the commonalities across the potential business environments in advance, whilst the variations are captured during the customisation process to produce a final customised process. However, if a new variation is required, the process needs to be completely re-generated. Hence, the ability to allow variations is limited to the template customisation process.
Similar to the approaches adopting aspect orientation, these template-based approaches also use the rigidity provided by the template to govern the controllability of the adaptations. The customisation process is controlled by the amount of flexibility allowed in the templates. These approaches do not explicitly consider change impact analysis, as only the allowed templates can be used to generate the final process and these processes are not shared among multiple users/tenants.
In comparison to aspect orientation, template-based customisation does not, in general, specifically support either separation of concerns or handling of runtime complexity. One reason for this is that the purpose of these approaches is different, and less attention is paid to the process enactment aspect. Consequently, these approaches do not intend to perform runtime modifications but rather concentrate on generating a customised solution. For example, once the customised solution (e.g., a WS-BPEL process) is generated, an enactment environment (e.g., a BPEL engine) may execute this process in a way similar to executing any other (WS-BPEL) process, without specific support for adapting the running process.
3.4.6 Constraint Satisfaction
As mentioned in Section 3.1, constraint satisfaction is a technique where a solution is found by assigning values to variables over their domains such that all constraints are satisfied [107]. This technique has been applied to service orchestration, and to business process modelling in general, to improve adaptability. The adaptability is achieved by not explicitly specifying the flow of activities. Instead, the exact flow of activities, i.e., the solution, is determined by satisfying a number of explicitly specified constraints.
Mangan et al. [213] propose an approach that uses constraint satisfaction to define customisable business processes. Although this approach does not directly relate to service orchestration, many of its basic principles of business process modelling can be used to improve the flexibility of service orchestration. The approach targets how to successfully build a customised process definition out of the many process fragments and tasks available in a pool. The success of a customised process definition is defined as the satisfaction of domain-specific constraints that reflect the business goals.
The constraints are defined at three different specification levels, i.e., selection, build and termination [214]. Selection constraints define how the process fragments are selected, whilst the build constraints define what needs to be respected whilst the customised instance is gradually built. The termination constraints are the constraints that need to be satisfied by the completely built process definition or workflow. The approach supports adaptability by allowing a full process definition to be built at runtime. As such, this approach has many similarities to generative programming, which we have seen in the template customisation approaches presented earlier. However, the overall problem of building a valid workflow schema, through a customisation process that satisfies the defined constraints, is treated as a constraint satisfaction problem. The most influential underpinning technique used here is constraint satisfaction to achieve a customisable process.
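The three constraint levels can be sketched schematically as predicates consulted at different stages of process construction. This is an invented illustration of the idea, not the authors' formalism; the fragment names and predicates are hypothetical.

```python
def build_process(pool, selection, build, termination):
    """Greedily add fragments that pass the selection and build checks,
    then verify the finished definition against the termination constraints."""
    process = []
    for fragment in pool:
        if not selection(fragment):          # selection: may this fragment be chosen?
            continue
        candidate = process + [fragment]
        if build(candidate):                 # build: is the partial process still valid?
            process = candidate
    if not termination(process):             # termination: is the complete process acceptable?
        raise ValueError("termination constraints violated")
    return process

pool = ["quote", "audit", "invoice", "archive"]
result = build_process(
    pool,
    selection=lambda f: f != "audit",        # domain rule: this customer skips audits
    build=lambda p: len(p) <= 3,             # keep the process bounded while building
    termination=lambda p: "invoice" in p,    # a finished process must invoice
)
print(result)
```

A real solver would search over orderings rather than scan greedily, but the division of labour among the three constraint levels is the point being illustrated.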
The approach successfully tackles the problem of defining business processes in environments where the execution paths are unpredictable or the number of alternative execution paths is extremely high. This is a clear improvement over template customisation, where the execution paths are pre-defined in the templates. However, the flexibility is again limited to the design or customisation phase. No special flexibility is provided once the process is enacted. Instead, the enacted process instance needs to follow the exact schema that was built in the design/customisation phase. The underlying reason for this limitation is that the constraints are used in the customisation phase to build a workflow, which is procedural in nature. Such procedural specifications afford little support for runtime adaptation due to the strict ordering of activities once enacted.
This limitation is also evident in the approach proposed by Rychkova et al. [215]. The approach uses a declarative specification that captures the business invariants [76] to model an imperative process design or a domain-specific customisation. By formalising the concepts of the SEAM modelling language [216] with the support of the Alloy specification language [217], the approach attempts to ensure system integrity when business processes are redesigned or customised. The business goals and strategies are captured in a SEAM specification, which is declarative. SEAM supports a hierarchical composition of objects which represents the system.
SEAM properties represent the states of the working objects, whilst actions change these properties [216]. The approach specifies these properties in first-order logic to allow mapping into Alloy [217], which provides formal verification support. The actions may modify the states of working objects as long as they lead to states where the invariants are protected. The formalism and the verification support of the Alloy Analyser tool [218] help the engineers to confidently redesign business processes that suit the changing high-level strategic goals [108] of the business. Yet, the final result (i.e., the built process) is again an imperative process, constructed by mapping the declarative SEAM specification to an imperative BPMN design. As such, the adaptability of the approach is limited to the customisation or process redesign phase, as the resulting process is imperative and cannot be changed at runtime.
In order to avoid such limitations, more declarative specifications and, perhaps most
importantly, enactment environments that support such declarative specifications, are needed.
The Declarative Service Flow Language (DecSerFlow) [71] is a language proposed to
declaratively specify, enact and monitor service flows. The DecSerFlow language, similar to its
sister-language Condec [73], envisions both process modelling and enactment as a constraint
satisfaction problem. This is a clear difference from, and an advantage over, the previously
discussed approaches [213, 215] that use constraint satisfaction only to (re)design or customise
process definitions. A
DecSerFlow model consists of activities that need to be executed and a set of constraints that
describe the temporal properties among them. An example DecSerFlow model is shown in
Figure 3-11 (a). The corresponding automaton is given in Figure 3-11 (b). The given example
consists of three activities (curse, pray and bless) and one constraint (response) between curse
and pray, which specifies that every time a curse activity occurs, a pray activity should
eventually follow it. These (graphical) constraints that represent the execution constraints among activities
are translated to Linear-Temporal-Logic (LTL) expressions [150]. A process instance is
executed in a way that constraints are not violated. A process is considered to be complete when
all the constraints are satisfied.
The constraints in DecSerFlow are specified in terms of existence, relation and negation
formulas. For example, an existence formula may specify that an activity must be executed. A
relation formula may specify that an activity should precede/succeed another. A negation
formula may specify that an activity should not be executed (e.g., because another alternative
activity has been executed). The approach provides greater benefits in terms of adaptability as it
completely avoids having explicit activity execution alternatives, which is evident in the
previously introduced approaches [213, 214] and also in the mainstream standards such as WS-
BPEL. When the number of alternatives is very large and a procedural language is used to
specify those alternatives according to flexibility-by-design [63], the process definition can be
very complex. In contrast, with DecSerFlow [71], the activities do not directly relate to each
other and can be executed in any order as long as the constraints are not violated. The same
statement applies to the Condec [73] language. Process execution and completion is a problem
of satisfying all the constraints for a given model. Furthermore, the constraints can be added or
removed depending on the business requirements [219].
Figure 3-11. A Simple DecSerFlow Model (Cf. [71])
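The constraint types above can be made concrete with a small sketch. The following Python fragment is our own illustration, not DecSerFlow's actual LTL/automaton machinery: each formula is checked directly over an execution trace, and an instance may complete only when every constraint holds.

```python
# A minimal sketch of constraint-based execution in the style of
# DecSerFlow/Condec. Real engines translate the graphical constraints to
# LTL and build automata; here each constraint is simply a predicate over
# the trace. All function names are illustrative, not part of the language.

def existence(activity):
    """Existence formula: the activity must occur at least once."""
    return lambda trace: activity in trace

def response(a, b):
    """Relation formula: every occurrence of a is eventually followed by b."""
    def check(trace):
        return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)
    return check

def not_coexistent(a, b):
    """Negation formula: a and b must not both occur in the same instance."""
    return lambda trace: not (a in trace and b in trace)

def is_complete(trace, constraints):
    """An instance may complete only when all constraints are satisfied."""
    return all(c(trace) for c in constraints)

constraints = [response("curse", "pray")]
print(is_complete(["curse"], constraints))          # False: pray still pending
print(is_complete(["curse", "pray"], constraints))  # True
print(is_complete(["bless"], constraints))          # True: no curse occurred
```

Note how the activities never reference each other directly; any ordering that satisfies the constraints is a valid execution, which is the source of the flexibility discussed in this section.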
Guenther et al. [69] propose to improve the flexibility of process modelling via case handling.
In case handling, the data of a product/case is used as the principal driver of the process rather
than the completion of activities [70, 115]. Although case-handling is a completely different
paradigm for business process modelling (Section 3.1), the underpinning technique that
provides the flexibility has a lot in common with constraint satisfaction. In case handling, the
next possible activities are only constrained by the state of the case; there is no explicit ordering
of activities; the process specification specifies what needs to be done rather than how that
should be done based on the case. This type of flexibility is similar to that achieved by
constraint-based approaches as they all attempt to avoid the need for explicit change at runtime.
The flexibility is given to the runtime to select the path of execution. Such flexibility is better
suited to domains where there is a strong case-metaphor such as the medical field. A case study
can be found in [220] for more details.
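The case-handling idea of state-driven activity enablement can be sketched as follows. This is our own simplified illustration (the activity names and rules are hypothetical); a real case-handling system derives the enabled activities from the states of data objects rather than from hard-coded conditions.

```python
# A sketch of case handling: the state of the case, not an explicit
# activity ordering, determines what may happen next.

def enabled_activities(case):
    """Return the activities enabled by the current case data (illustrative)."""
    if "patient_id" not in case:
        return ["identify_patient"]
    if "diagnosis" not in case:
        return ["consult"]
    # Once a diagnosis exists, several activities become possible at once.
    return ["treat", "issue_medicine"]

print(enabled_activities({}))                                    # ['identify_patient']
print(enabled_activities({"patient_id": 7}))                     # ['consult']
print(enabled_activities({"patient_id": 7, "diagnosis": "flu"})) # ['treat', 'issue_medicine']
```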
Overall, the flexibility provided by the constraint-based declarative approaches fundamentally
relies on their ability to specify what needs to be done rather than how. Pesic in her thesis
[150] calls this the outside-to-inside approach, in contrast to the inside-to-outside way of process
modelling in imperative approaches. The term inside-to-outside refers to the fact that all alternatives are
explicitly specified in a procedural way and new alternatives need to be explicitly added. Later,
Fahland [221] attributes the improved flexibility of declarative process modelling to the sort
of information the definitions deliver. According to Fahland, declarative process
definitions deliver circumstantial information, in contrast to the sequential information delivered
by imperative or procedural process definitions. Consequently, such declarative process
modelling approaches have made an important step towards process adaptability. In particular,
approaches like DecSerFlow [71] extend the adaptability support to the enactment environment.
Another important contribution of the constraint-based approaches is their ability to specify
the system invariants. Regev points out that “Business processes are often the result of a fit
between the needs and capabilities of the internal stakeholders of the business and the
opportunities and threats the business identifies in its environment” [142]. Changes to the
business process are required in order to maintain this fit. It is therefore necessary to identify the
properties that ensure this fit, and thereby the survival of the business. In service
composites, the goals of the collaborators such as third party service providers and consumers
also need to be safeguarded as much as possible, in addition to those of the aggregator. While
constraint-based process modelling contributes to this requirement, the monolithic
representation of constraints in a service orchestration may still lead to unnecessary complexity
and even restrict possible modifications.
Nonetheless, the constraint-based approaches are criticised for their lack of support for
process comprehension by human users. This problem is somewhat tackled by DecSerFlow [71]
by providing a graphical notation to design the constraint model. Yet, the graphical notation
does not provide sufficient support for understanding the possible flows of activities. For
service orchestration systems, this can cause problems, as multiple service invocations often have
tight dependencies and it is easier to say what needs to be done, rather than what should
not be done, to define the control flow. Therefore, writing a constraint specification is not
always practical. It is important that a service orchestration approach leverages
the advantages of constraint-based approaches (such as ensuring system integrity amidst
flexibility) while providing stronger support for process comprehension.
3.5 Summary and Observations
This section summarises the presented approaches and evaluates them against the requirements of a
service orchestration approach described in Chapter 2. Section 3.5.1 presents this
summary and evaluation. In addition, a number of observations and lessons that can be
learnt are presented in Section 3.5.2.
3.5.1 Summary and Evaluation
A high-level abstract view on the presented groups of approaches to improving adaptability in
business process modelling is given in Figure 3-12. The figure illustrates how each group
provides adaptability through different techniques. The figure only captures a generalised view by
highlighting the distinct features of each group, and should not be taken as a true representation
of every approach within a group.
Figure 3-12. An Illustration of Different Techniques Used to Improve the Adaptability
In proxy-based approaches, the adaptations are carried out in the proxy, which separates the
error handling and optimisation from the core service orchestration. However, the proxy design
and the extent of its capabilities vary from approach to approach within the group. In the
approaches that provide dynamic changes, the adaptations are dynamically carried out in the
core service orchestration itself. In contrast, the approaches based on rule integration use
business rules to achieve the required adaptability, mainly due to the high agility provided by
the business rules techniques.
Bringing the advantages of aspect-oriented programming (AOP) [222] into the BPM domain,
the aspect-oriented approaches use pointcuts and advices to continuously optimise the service
orchestration. In contrast, the template-based approaches use
generative programming [75, 202] principles to dynamically generate the service orchestration
out of templates that suit a particular business condition/environment. The approaches that use
the constraint satisfaction technique employ constraints as an explicit concept in business
process modelling. These approaches provide a rather different perspective on adaptability. They
provide greater flexibility by specifying only the restricting boundaries/conditions of process
modelling/execution. In other words, what cannot be done is explicit. However, the actual path
of execution is not explicitly represented and is determined only by the specified constraints. Any
path that does not violate the constraints is deemed valid.
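The pointcut/advice mechanism recalled above can be sketched as follows; this is a simplified illustration of the weaving idea, not the actual AO4BPEL machinery, and all names are hypothetical.

```python
# A minimal sketch of aspect-oriented process adaptation: a pointcut
# selects activities in the core process, and an advice is woven in
# before each match, leaving the core definition untouched.

def with_advice(process, pointcut, advice):
    """Return a new process with the advice woven before matched activities."""
    woven = []
    for activity in process:
        if pointcut(activity):
            woven.append(advice)  # e.g. a logging step, rule check or retry
        woven.append(activity)
    return woven

base = ["receive_order", "charge_card", "ship"]
print(with_advice(base, lambda a: a == "charge_card", "fraud_check"))
# ['receive_order', 'fraud_check', 'charge_card', 'ship']
```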
[Figure 3-12 depicts six panels, one per group: (3.4.1) Proxy-based, (3.4.2) Dynamic Changes,
(3.4.3) Rule Integration, (3.4.4) Aspect Orientation, (3.4.5) Template Customization and
(3.4.6) Constraint Satisfaction.]
A summary evaluation of the examined approaches in this chapter against the adaptation
requirements for service orchestration from Chapter 2 is given in Table 3-2.
Table 3-2. The Summary of Evaluation of Adaptive Service Orchestration Approaches

Columns: (1) Improved Flexibility, (2) Capturing Commonalities, (3) Allowing Variations,
(4) Controlled Changes, (5) Predictable Change Impacts, (6) Managing Complexity.

Group                      Approach                   (1)  (2)  (3)  (4)  (5)  (6)
Proxy-based                Robust-BPEL [223]           +    -    -    -    -    -
                           TRAP-BPEL [140]             +    -    ~    -    -    -
                           Robust-BPEL2 [65]           +    -    -    -    -    -
                           eFlow [156]                 +    -    -    +    -    -
                           Canfora et al. [157]        +    -    ~    -    -    -
Dynamic Explicit Changes   WASA [159, 160]             +    -    -    -    -    -
                           Chautauqua [163]            +    -    -    -    -    -
                           WIDE [164, 165]             +    -    -    -    -    -
                           ADAPTflex [167]             +    -    -    +    -    -
                           Fang et al. [57]            +    -    -    -    -    -
                           SML4BPEL [170]              +    -    -    -    -    -
                           Yonglin et al. [141]        +    -    -    -    -    -
Business Rule Integration  Rosenberg [179]             +    -    -    -    -    -
                           Graml et al. [181]          +    ~    +    -    -    -
                           rBPMN [184]                 +    ~    ~    +    -    -
Aspect Orientation         AO4BPEL [68]                +    ~    ~    ~    -    +
                           MoDAR [149]                 +    ~    ~    ~    -    +
                           Courbis [191]               +    -    -    ~    -    -
                           VieDAME [192]               +    -    -    ~    -    ~
                           AdaptiveBPEL [194]          +    -    -    ~    -    ~
                           VxBPEL [67]                 +    ~    ~    ~    -    +
                           Karastoyanova et al. [66]   +    ~    ~    -    -    +
                           Padus [193]                 +    ~    ~    -    -    +
Template Customisation     Geeblan 2008 [203]          +    ~    ~    -    -    -
                           Lazovic et al. [205]        +    ~    ~    ~    ~    -
                           Mietzner [42]               +    ~    ~    -    -    -
                           PAWS 2007 [209]             +    -    -    -    -    -
                           Discorso 2011 [210]         +    -    -    -    -    -
                           Zan et al. [211]            +    +    +    ~    ~    ~
Constraint Satisfaction    Mangan et al. [213]         +    -    -    +    ~    -
                           Rychkova et al. [215]       +    -    -    +    ~    -
                           DecSerFlow [71]             +    ~    ~    +    -    -
                           Condec [73]                 +    ~    ~    +    -    -
                           Guenther [69]               +    -    -    +    -    -

+ Explicitly Supported, ~ Supported to a Certain Extent, - Not Supported
3.5.2 Observations and Lessons Learnt
Based on our study of the existing literature, a number of observations have been made, and they
are discussed below.
Observation 1. The proxy-based approaches are primarily useful to address the
complexities of partner service failures. The partner service endpoints are decoupled
from the core service orchestration. The concerns associated with partner service
selection are separated from the core service orchestration. However, the adaptability
is limited to partner service selection. There is very little support to change the core
service orchestration.
Lesson: While it is important to tackle the volatile environment that is inherent in
SOA or distributed computing in general, the service orchestration is also business
driven. The core service orchestration is defined to address a business requirement.
Hence, as business requirements change, the core service orchestration also
needs to be changed. A service orchestration approach should be capable of providing
adaptability beyond partner service failures or changes.
Observation 2. The approaches offering dynamic explicit changes were proposed to handle
unforeseen requirements, as a remedy for the inherent limitations of
the process modelling standards and enactment environments already
in use in practice. Nevertheless, the underlying standards and supporting enactment
environments lack the required support for flexibility, limiting the amount of
adaptability that can be achieved.
Lesson: The adaptability cannot be built on top of rigid standards and enactment
environments. Rather, the standards and enactment environments should be
designed with adaptability in mind.
Observation 3. Both rules-based and aspect-oriented approaches attempt to separate the
concerns of adaptation. This is an important step towards improving adaptability in
business processes. Rather than providing a monolithic representation of a business
process, the core process has been separated from the rest of the business logic that
needs to be adapted. The rule-based approaches use business rules technology to
specify and realise the adaptable business logic. In aspect-oriented approaches, the
already established aspect-oriented concepts have been exploited to separate the
adaptability concerns via pointcuts and advices.
Lesson: Separation of concerns is an important principle in software engineering, which
is equally valid for service orchestrations. In an adaptive service orchestration, the
separation of concerns needs to be carried out in multiple dimensions. This includes
separation of the core process from business rules/policies, of control-flow
from data-flow, and of adaptation concerns from management concerns.
Observation 4. Both aspect-oriented and template customisation approaches are better
suited to business domains where a process must be quickly customised to suit
a particular business environment. The aspect-woven base process definitions and
the templates provide partially-completed processes; depending on the factors posed
by the runtime environment, the remaining steps of completing the processes are carried
out. The aspect-oriented approaches weave advices into a process via
pointcuts. Depending on the factors posed by the runtime environment, the advices
make the process adapt to the environment in a tunable manner.
Lesson: The amount of adaptability is limited by the templates or the pointcuts. The
agility offered by the already completed process parts can help quickly address the
consumer needs. In SaaS applications this provides a significant advantage as the
tenants can be offered such partially-completed processes as features, and the final
process can be assembled from them. Furthermore, the aspects can be used to provide
the required tunability. Therefore, a service orchestration approach should provide
the ability of partial specification for increased agility.
Observation 5. Constraint-based approaches provide flexibility without requiring an
explicit change during runtime. A large number of possible execution alternatives are
captured in the specification with a small number of constraints. These approaches
provide better flexibility and integrity protection, although there are no explicit
adaptations as such. The path of execution is implicit and controlled by the defined
constraints. However, these approaches do not provide a clear view of the path of
execution. Hence, they are not applicable to domains where the path of execution
must be explicitly represented.
Lesson: BPM aligns business and IT. The ability to visualise a process from start to
the end is one of the requirements of BPM to provide the required abstraction for a
business user. Therefore, while constraints are used to ensure the integrity, there
should be a way to visually represent the path of execution. In addition, in business
scenarios where it is easier to specify what should be done, rather than what should
not be done, the constraint-driven approaches do not provide a sufficient solution.
Therefore, in such cases the process modelling can be cumbersome and possibly lead
to undetected errors. Overall, the two fields of process modelling and constraint
specification/satisfaction should be used in a complementary manner, carefully
exploiting their respective capabilities.
Observation 6. Less attention has been paid to capturing commonalities and allowing
variations. Even when commonalities and variations are considered, they must be
determined before runtime. For instance, VxBPEL [67] requires the variation points
to be pre-specified. Likewise, AO4BPEL [68] requires the joinpoints to be
pre-specified. Such prior knowledge may not always be available.
Lesson: For a service orchestration approach to be used in a highly volatile
environment, such pre-specification of commonalities and variations may not be the
desirable option. In SaaS applications, for example, it is extremely difficult to predict
all the requirements of tenants, and notably the tenants usually join in during runtime.
Therefore, there is a requirement to capture commonalities and allow variations
without such early determination.
Observation 7. For a highly adaptable system, it is likely that the runtime complexity is
increased with ongoing adaptations. However, less attention has been paid to handling
this increased complexity at runtime. The modern use of service orchestration
requires better attention to this aspect. From the BPM perspective, multiple variants
of the same process may exist. From the SaaS perspective, the same application is
used to facilitate the different requirements of multiple tenants (in higher maturity
models). From the SOA perspective, a business application may be wrongly designed
as a large, monolithic service composite that needs to be broken down into
smaller, manageable composites. Usually, ad-hoc solutions are used to address the
runtime complexity. However, such solutions may lead to the complete collapse of
the service orchestration and thereby the reliant business.
Lesson: A service orchestration approach should provide a sound architecture that is
capable of handling the increased complexity, instead of ad-hoc solutions. For small
service compositions this may look like overkill. But the service compositions should
be able to grow with the business, accumulating more capabilities and seizing more
business opportunities. Therefore, it is necessary to enforce proper service
orchestration design principles to handle the runtime complexity in a consistent and
more predictable manner.
Observation 8. Current service composition approaches view service composition as a
process and underlying services as functions that can be invoked to carry out a
specific task. Perceiving services as functions is valid to a certain extent as one of the
objectives of SOA is to expose the distributed applications as reusable execution
entities. However, the philosophy of viewing service composition as a process is
detrimental in representing the business goals of a service composition, because a
process is just a way to achieve a business goal. Compared to the business goal
expected from the composite, a process (or the way) is short lived. Processes can be
replaced over and over again to optimise the way of achieving the same goal.
Lesson: Rather than defining service composition based on a process, it should be
defined based on a particular business requirement or a goal. More specifically, rather
than defining a service composition as a process, it should be viewed as a
representation of the business requirement that satisfies the goals of both the
aggregator as well as the collaborators (i.e., partner services and consumers). To
optimise the operations to achieve the goals, processes meeting these goals can be
defined and changed during runtime, and the services can be bound, replaced or
unbound during runtime. But, the goals expected by the business organisations that
define the service composition should remain and be represented. Furthermore, when
the goals of a business change, the corresponding changes to processes and services are
required to be carefully managed.
3.6 Towards an Adaptive Service Orchestration Framework
The above observations and consequential lessons call for improvements in the way service
orchestrations are designed and enacted. While process modelling is very important, the service
composition and orchestration cannot simply be considered as a mere process modelling and
enactment problem. Instead, the problem needs to be tackled from the ground up by addressing how the
services should be organised and represented.
This includes an explicit representation of what the required services are and how they are
connected to each other from the aggregator’s point of view. Such a collection of services and
their connections form the composition structure. Very little attention has been paid to
representing this structure in existing approaches. The advantage of having the representation of
such a structure is that it provides the required abstraction and stability to define multiple
business processes and continuously optimise them as required. Moreover, both the structure
and processes need to be adaptable. Such adaptability is required to host the service composite
and orchestrate the services in highly volatile business environments. In the next chapter we will
present a novel approach to model service orchestration to address these needs, including a
number of design principles, a meta-model and a service orchestration language.
While the design principles play a major role in designing adaptive service orchestrations,
there should be equally capable middleware support for process enactment. The middleware
allows running the service orchestration and, most importantly, needs to be adaptation-aware. As
such, the service orchestration enactment software (i.e. the middleware framework) needs to be
designed in a manner that it is possible to continuously adapt the already running processes. The
adaptability needs to be built-in rather than built-on-top-of. If there is no built-in support in the
enactment middleware, any application built-on-top would inherit the same limitations. As
shown earlier, the use of existing middleware such as BPEL engines hinders the runtime
adaptation efforts due to their lack of built-in support for adaptability. Therefore, this thesis also
presents a service orchestration runtime (or an enactment platform), which provides built-in
support for runtime adaptability for already enacted service orchestrations. The related
discussion can be found in Chapter 5.
In an adaptive BPMS, a mere service orchestration runtime that supports adaptability is not
sufficient. There should be proper mechanisms to manage the adaptations to the service
orchestration. The adaptations need to be carried out in a consistent and tractable manner,
ensuring the integrity of the service composition and running process instances. There are
certain challenges to overcome. Chapter 6 presents these challenges and the corresponding
mechanisms to address them, achieving adaptation management in adaptive service
orchestration.
In a nutshell, there are three essential aspects towards an adaptable service orchestration, i.e.,
flexible modelling, an adapt-ready enactment environment and an adaptation management
mechanism as shown in Figure 3-13. Each aspect has its own challenges to address. These
challenges and corresponding solutions are discussed in Chapters 4, 5 and 6 respectively.
Figure 3-13. Towards an Adaptive Service Orchestration
[Figure 3-13 panels: Flexible Modelling (Chapter 4), Adapt-ready Enactment (Chapter 5),
Adaptation Management (Chapter 6).]
3.7 Summary
In this chapter we examined the related literature to identify the past attempts to improve the
adaptability of service orchestrations. Firstly, an overview of BPM was given, and different
categories of process modelling approaches were presented. Secondly, the complementary
nature of BPM and SOA and how that led to the advancement of service orchestration
techniques have been explained. Thirdly, classifications of process adaptability have been
introduced.
Based on the understanding gained from the first three sections, the approaches used to
improve the adaptability in BPM have been discussed. These approaches have been classified into
six different groups based on the underpinning techniques used to achieve adaptability in
service orchestration and business process support in general. This includes proxy-based
adaptation, dynamic explicit changes, business rules integration, aspect orientation, template
customisation and constraint satisfaction. The strengths and weaknesses of individual
approaches as well as the strengths and weaknesses of underpinning adaptability techniques
have been discussed.
The presented approaches have been evaluated against the requirements introduced in
Chapter 2. Based on the evaluation results, a number of observations have been made. These
observations lead to a set of valuable lessons that can be used in devising better support for
adaptable service orchestration, which is the objective of this thesis.
Part II
Chapter 4
Orchestration as Organisation
According to Luftman [224], the term Business-IT alignment refers to “applying the
Information Technology in an appropriate and timely way, in harmony with business strategies,
goals and needs”. This explanation highlights two important properties of aligning Information
Technology with business strategies, goals and needs.
1. Appropriateness: How appropriate the selected IT support is to capture business
strategies, goals and needs.
2. Timeliness: How quickly the IT support can adapt when the business strategies, goals and
needs evolve.
Serendip views a service orchestration as an organisation that can be adapted upon unforeseen
changes. The organisation represents a unit of entities brought together to achieve a business
goal. The relationships among entities are explicitly represented, forming the organisational
structure over which multiple processes can be defined. This view is different from the traditional
service orchestration approaches, where a service orchestration is defined by wiring a set of
services together via a process. In contrast, Serendip identifies the organisation, which
represents a business requirement, as the fundamental concept for defining a service
orchestration. Typically, compared to business requirements, the ways (processes) of achieving
them are short-lived. On the one hand, multiple ways of achieving the
same business requirement may co-exist (variations). On the other hand, the ways of achieving
a business requirement can be subject to frequent changes (evolutions). While identifying the
organisation as the fundamental concept for defining a service orchestration, Serendip allows
multiple processes to co-exist in the organisation and to evolve continuously. It follows that
perceiving a service orchestration as an organisation better aligns the BPM (IT) support
with the business requirements of SaaS vendors and Service Brokers, who need to define, in a
timely manner, multiple processes over the same application instance to reuse and repurpose it
in order to exploit new market opportunities.
This chapter presents the core concepts of the Serendip approach. Firstly, Section 4.1
introduces the concept of ‘the organisation’, explaining the two aspects of the structure and
processes. The following sections explain the underpinning core concepts of the Serendip
approach, and gradually build up the Serendip meta-model. Section 4.2 discusses how the task
dependencies are specified and its benefits. Section 4.3 explains the behaviour modelling in the
organisation on which processes are defined. How the organisation processes protect the
integrity amidst modifications and yet without unnecessarily restricting the modifications is
discussed in Section 4.4. Then, Section 4.5 discusses how to allow variations of organisational
behaviour and yet without increasing redundancy. The concept of interaction membrane is
explained in Section 4.6, clarifying how the data-flow concerns are addressed.
Section 4.7 discusses how an orchestration designed as an organisation supports
adaptability, while Section 4.8 explains the concepts that help manage the complexity of an
adaptable organisation. Finally, the Serendip meta-model that formally captures all the core
concepts of Serendip is presented in Section 4.9.
All the sections are provided with suitable examples from the business scenario presented in
Chapter 2.
4.1 The Organisation
The term organisation has many definitions. One of the earliest is given by Barnard
[225], who defines an organisation as “a system of consciously coordinated activities or forces
of two or more persons”. According to Daft [226] “organisations are social entities that are
goal directed, deliberately structured, and coordinated activity systems, linked to the
environment”. This definition is broader compared to Barnard’s as it specifies that there is a
purpose of organising. A similar view is taken by Rollison [227], who says “organisations are
artifacts, that are goal directed social entities with structured activities”. In conclusion, we can
identify the following basic characteristics of an organisation.
A goal-directed unity of multiple entities linked to its environment.
Deliberately structured relationships among entities, interactions and their activities.
Therefore, there is always a goal that defines the purpose of the act of organising, relative to the
environment. In addition, the formation of the structure and the execution of activities are not
chaotic or random. Instead, the structure and the activities are deliberately organised to
achieve the goal.
The structure defines how the organisation is composed, i.e., what units it consists of and what
are the relationships or the connections between these units. The activities or tasks of the
organisation are organised as processes. For example, a hospital is a human organisation formed
with the goal of treating patients. The hospital organisation consists of a well-defined structure
of roles (e.g., Doctor, Nurse, and Patient) and the relationships among these roles (e.g., Doctor-
Nurse, Nurse-Patient). The roles are filled with role-players such as patients who expect
treatments; qualified doctors and nurses who recommend and provide treatments. The
organisation defines the mutual obligations and allowed interactions among these roles. In
addition, processes define how the activities such as patient-identification, consultation,
specimen collection, issue medicine, conducting tests and treatment are carried out. Potentially,
multiple processes can be realised over the same well-defined organisational structure. In the
case of a hospital, there are many ways to treat patients. Subsequently, multiple processes are
defined to treat inpatients, to treat outpatients and to conduct maternity clinics.
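The organisation view described here can be sketched in a few lines. The sketch below is our own illustration, not Serendip's actual meta-model or notation; it merely shows a stable structure of roles and relationships, players bound to roles, and multiple processes defined over the same structure.

```python
# An illustrative model of an organisation: the structure (roles and
# relationships) is stable, while players and processes can change.
from dataclasses import dataclass, field

@dataclass
class Organisation:
    goal: str
    roles: set = field(default_factory=set)
    relationships: set = field(default_factory=set)  # pairs of related roles
    players: dict = field(default_factory=dict)      # role -> bound player
    processes: dict = field(default_factory=dict)    # name -> activity list

    def bind(self, role, player):
        if role not in self.roles:
            raise ValueError(f"unknown role: {role}")
        self.players[role] = player  # rebinding leaves processes untouched

hospital = Organisation(goal="treat patients")
hospital.roles = {"Doctor", "Nurse", "Patient"}
hospital.relationships = {("Doctor", "Nurse"), ("Nurse", "Patient")}
# Multiple processes co-exist over the same organisational structure.
hospital.processes["inpatient"] = ["identify", "consult", "admit", "treat"]
hospital.processes["outpatient"] = ["identify", "consult", "issue medicine"]
hospital.bind("Doctor", "Dr. A")
```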
Similarly, in a service orchestration, services collaborate with each other to attain a business
goal. For example, in the RoSAS scenario introduced in Chapter 2, the service providers such as
Garages, Tow-Trucks, Taxis and Case-Officers collaborate with each other to fulfil the goal of
providing roadside assistance. RoSAS as the service aggregator needs to design both the
structure and processes, leading to the achievement of its goal. Potentially, multiple processes
need to be defined to meet the diverse requirements of the different consumers of RoSAS, but
upon the same set of aggregated services to maximise the reuse and deliver organisational
efficiency.
In pursuing its goals, the organisation has to ensure its survival in the presence of changes in its environment by adapting to these changes suitably. For instance, a hospital
organisation may adapt its processes and structure to changes such as high-demand for
treatments, advances in the techniques of treatments, unexpected epidemic conditions, industrial
actions or riots. To maintain an uninterrupted and sufficient service (the goal) amidst such
changing conditions, the organisation needs to reconfigure and regulate its structure and
processes (adaptation). For example, the hospital may temporarily skip certain procedures that
slow down the intake of patients, allocate more staff or update the responsibilities of existing
staff so that it is possible to treat more patients.
Similarly, a service orchestration also needs the adaptability to survive in its changing
environment. The business requirements of RoSAS can vary over time. The service-service
interactions and the performance and compliance levels are subject to continuous and
unforeseen changes. New service providers such as Garages with better quality of services
(QoS) emerge, while existing service providers disappear [5]. The customers expect better
roadside assistance services and perhaps have some unpredictable variations in expectations
from RoSAS. Business competitors may emerge, forcing RoSAS to optimise its structure and
processes continuously to provide a better service in a cost-effective manner.
Due to these similarities in the characteristics of organisations and service orchestrations, we
model a service orchestration as an adaptable organisation that can adapt both its structure and
processes to suit changing environments [81, 228]. The next two sections clarify these two aspects of an organisation, i.e., the structure and the processes, in detail.
4.1.1 Structure
The current service composition methodologies are mostly process-driven. Generally, a set of
services are invoked according to a process. While processes are important in composing
services, the structure also plays a significant role, providing the stability to define and execute
the processes. A close inspection of current standards for service orchestration reveals that
little attention has been paid to representing the mutual relationships among services in
collaboration. In terms of the roles and their relationships, the organisational structure provides
an abstraction of the underlying services in collaboration. While the underlying service
providers appear and disappear, from the aggregator’s point of view, the goals expected by the
act of composing services need to be preserved. This provides the stability and abstraction in
contrast to grounding processes directly on concrete services. By grounding processes on a
well-defined structure, which provides the stability and abstraction, the processes become more
resilient to the turbulences or changes in the underlying concrete services layer.
It should be noted that the organisation structure should not be confused with the process
structures as described in popular service orchestration standards. For example, XLANG [229]
and WS-BPEL [104] use a block-structured [136] way of defining service orchestrations. WSFL
[36] and XPDL [35, 105] have a flow-structure. All these structures are process-based structures
that explain how the activities are ordered or structured. They do not provide sufficient details
on how the underlying services are organised and related to each other. In contrast to process
structures, the organisation structure captures the aspects of how a service relates to another,
what is allowed and not-allowed in terms of interactions, and what are the mutual obligations
among the services.
Although the above organisation-based service orchestration may sound like service choreography, there is a difference between the organisational structure and service choreography: they address two completely different concerns. To elaborate, let us consider WS-CDL [230], a popular standard that captures the choreography among services. It allows defining a global view of collaborative processes involving multiple services, where the interactions between these services are seen from a global perspective [24]. This
viewpoint is still a single projection of individual orchestrations of collaborating services.
However, the purpose of the organisational structure in this thesis is not to project the
interactions of individual services or to provide the observable behaviours of a set of services.
In contrast, the organisational structure provides the stability and abstraction over the
underlying services, by explicitly capturing and representing the structural aspects (i.e., roles
and their relationships) expected of the underlying services, rather than the process or
coordination aspects. In this sense, service choreography addresses a process level concern,
whereas the organisational structure addresses the structure level concerns of a service
composition.
The key difference between the process-structure and the organisation-structure and their
usage is illustrated in Figure 4-1. As shown, these structures are orthogonal to each other and
serve different purposes. While process structures capture how the activities are organised or the
activity relationships, the organisational structure captures how the roles of an organisation are
organised or the role-role relationships. The fundamental purposes of the organisational
structure are to provide the abstraction and stability for the organisation or service
collaboration. The abstraction represents the underlying services and their relationships, and
underpins the modelling of the business processes. The stability afforded by the organisational
structure makes the business processes (defined on top of the organisation structure) less vulnerable to the turbulence of the underlying services.
Figure 4-1. Process Structure and Organisational Structure
In an organisation structure, a Role is a position description within the organisation. For example, in a human organisation such as a hospital, ‘Doctor’ is a role that is obliged to treat patients on behalf of the hospital. In essence, a role represents a functional requirement of the organisation that needs to be fulfilled. The actual entities that occupy or play these roles, called role-players, may change during the organisation’s operation. But the role that represents the requirements and the organisation that represents the goals remain unchanged irrespective of the presence/absence of a particular role-player.
Similarly, within a service composition there are positions to be filled or functional
requirements that need to be fulfilled. In the RoSAS service composition, for example, the user
complaints/requests need to be accepted and then processed; vehicles need to be towed and then
repaired; motorists may need alternative transport to continue their journey or perhaps some accommodation to stay overnight. For these functional requirements, corresponding roles need to be created, such as Case-Officer, Motorist, Tow-Truck, Garage, Taxi-Providers and Hotels. The players of these roles can be bound/unbound at runtime, but the roles
representing the functional requirements continue to exist. For example, the role Garage may be
played by a Web service provided by a garage chain Tom’s-Repair, which accepts car repair
bookings. However, if the service is unsatisfactory, this can be replaced by another Web service
offered by another garage chain BetterRepairs.
A set of roles by itself cannot describe an organisation structure. In fact, a role on its own cannot explain the purpose of its existence in the organisation. It is the relationships among the roles that describe how roles connect with each other in the organisation, defining their
purpose of existence. These connections capture the mutual obligations among different roles
and the allowed interactions between roles. The organisation description should capture these
relationships. The relationships that a role maintains with the rest of the roles in the organisation
collectively define or describe the role itself. For example, the allowed interactions and
obligations between the roles Doctor and Patient and between Patient and Nurse are clearly
described within the hospital organisation. Most importantly, the role Doctor is defined by its
obligations and interactions with other roles, i.e., with Nurse and with Patient. The role Doctor
cannot exist alone.
Likewise, there are relationships among the roles of a service composition as well. For example, there is an obligation that a Motorist informs the Case-Officer of a breakdown, an obligation that the Case-Officer notifies the Tow-Truck of the towing requirement, an obligation that a Garage tells the Tow-Truck the towing destination, etc. These interactions are carried out at runtime as defined by the service aggregator. Given the requirement for roles Ri and Rj to interact, the aggregator explicitly specifies the terms of interaction in the Ri-Rj relationship, treating such relationships as first-class entities.
Role Oriented Adaptive Design (ROAD) [81] introduces such an organisation structure to
define self-managed software composites. In this thesis, we use ROAD and further extend it to
provide a process definition language to define adaptable service orchestrations. ROAD defines
roles or positions of a composite. Players (software entities or humans) can come and play these
roles [81]. The key characteristic of ROAD relating to this work is its contract-oriented view of a service composite. A ROAD composite defines the contracts or connections between different roles, and thereby between the services that form the composition. The meta-model in Figure 4-2 presents these key concepts of an organisation-based ROAD composite.
Figure 4-2. Meta-model: Organisation, Contracts, Roles and Players
For example, the relationship between the Garage (GR) and Tow-Truck (TT) is captured via the GR_TT contract as shown in Figure 4-3. The GR_TT contract defines the mutual obligations between the roles irrespective of who plays the roles GR and TT. The players, i.e., the Web services provided by the Tom’s-Repair garage chain and the Fast-Tow towing provider, may play the roles GR and TT respectively, interacting with each other through the interactions defined in the contract. These interactions are evaluated according to the current state and the
defined rules of the contract. The current state of the contract is captured by a number of facts
(parameters) maintained by the contract. Rules may use these facts to evaluate the interactions
as well as update them due to ongoing interactions. The corresponding meta-model is shown in
Figure 4-4.
ROAD supports reconfiguration and regulation of the organisational composite to maintain a
homeostatic relationship with its environment [48]. As such, the contracts and the roles can be
added, modified and removed depending on the changing environment.
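The kind of reconfiguration described above can be sketched as follows. This is a minimal illustrative sketch in Python, not the ROAD implementation; the class and method names (Organisation, add_role, bind_player, etc.) are assumptions made purely for this example.

```python
# Illustrative sketch (not the ROAD implementation): an organisational
# composite whose roles, contracts and player bindings can be changed at
# runtime. All names here are hypothetical.
class Organisation:
    def __init__(self, name):
        self.name = name
        self.roles = {}      # role id -> currently bound player (or None)
        self.contracts = {}  # contract id -> (role_a, role_b)

    def add_role(self, role_id):
        # A role exists independently of any particular player.
        self.roles[role_id] = None

    def add_contract(self, contract_id, role_a, role_b):
        # A contract may only connect roles that exist in the organisation.
        if role_a not in self.roles or role_b not in self.roles:
            raise ValueError("contract refers to an unknown role")
        self.contracts[contract_id] = (role_a, role_b)

    def remove_contract(self, contract_id):
        self.contracts.pop(contract_id, None)

    def bind_player(self, role_id, player):
        # Rebinding replaces the current player; the role itself is unchanged.
        self.roles[role_id] = player


rosas = Organisation("RoSAS")
for role in ("CO", "TT", "GR"):
    rosas.add_role(role)
rosas.add_contract("CO_TT", "CO", "TT")
rosas.bind_player("GR", "Tom's-Repair")
rosas.bind_player("GR", "BetterRepairs")  # replace an unsatisfactory player
```

Note how replacing the player bound to GR leaves the role and its contracts untouched, mirroring the stability argument above.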
Figure 4-5 shows graphically how the RoSAS structure is modelled according to Role
Oriented Adaptive Design [81]. The structure consists of roles and the identified contracts
between them. The identified roles are Case-Officer (CO), Tow-Truck (TT), Garage (GR),
Motorist (MM) and Taxi (TX). These roles will be played by the services such as Web services
provided by garage chains and Web services deployed in motorists’ mobiles, etc. The identified
contracts are CO_MM, GR_TT, CO_GR, CO_TT and CO_TX. The existence of a contract between two roles indicates that these two roles need to interact with each other. For example, the CO_TT contract indicates that the CO and TT roles (and their role-players) need to interact with each other. The corresponding description of RoSAS is given in Listing 4-1 in SerendipLang. SerendipLang (Appendix A) is designed and used in this thesis to support the Serendip meta-model and to present the examples. The RoSAS description lists all the roles and
contracts, and specifies the specific players (service endpoints) bound to roles.
Figure 4-3. A Contract Captures the Relationship among Two Roles
Figure 4-4. Meta-model: Contracts, Facts and Rules
Figure 4-5. Roles and Contracts of the RoSAS Composite
Listing 4-1. High Level RoSAS Description in SerendipLang
Organisation RoSAS {
    Role CO {…}
    Role MM {…}
    Role TT {…}
    Role GR {…}
    Role TX {…}
    Contract CO_TT {…}
    Contract GR_TT {…}
    Contract CO_TX {…}
    Contract CO_MM {…}
    Contract CO_GR {…}
    PlayerBinding copb "http://www.rosas.com.../CaseOfficerService" is a CO;
    PlayerBinding ttpb "http://www.fasttow.com.../TowCarService" is a TT;
    PlayerBinding grpb "http://www.tomsrepair.com.../GarageService" is a GR;
    PlayerBinding txpb "http://www.quickcab.com.../TaxiService" is a TX;
}
A contract specifies the terms of interaction (interaction terms) between the players concerned. These interaction terms define the mutual responsibilities of the two roles bound by the contract. The players of these two roles can interact with each other only according to these
interaction terms. An interaction term captures the direction of the interaction, its synchrony and the message signature.
A sample contract is shown in Listing 4-2. The contract CO_TT specifies the connection
between the roles CO and TT. It specifies two interaction terms (ITerm). The direction of the
interaction orderTow is from CO to TT (as indicated by AtoB, where A is CO and B is TT), meaning the CO has to initiate the interaction. It also describes the signature of messages in terms of parameters (String:pickupInfo) and return types (String:ackTowing). The presence of a return type indicates that the communication is a request-response interaction. In contrast, the
interaction term payTow does not specify a return type, meaning there is no response expected.
Both the roles and the contracts that capture the interactions of a composite are runtime entities. To evaluate and monitor these interactions, contractual rules are used. These rules, declarative in nature, monitor and evaluate the performance of interactions and detect possible violations of contracts. The rules (indicated by the RuleFile clause) associated with the CO_TT contract may enforce certain conditions to ensure the interactions are valid. For example, a rule evaluating payTow may check whether the payment exceeds a certain unusually large amount, to prevent suspicious payments.
Listing 4-2. A Sample Contract
Contract CO_TT {
    A is CO, B is TT;
    ITerm orderTow (String:pickupInfo) withResponse (String:ackTowing) from AtoB;
    ITerm payTow (String:paymentInfo) from AtoB;
    RuleFile "CO_TT.drl";
}
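To make the role of such a rule concrete, the following Python sketch approximates a contractual rule as a predicate over the facts maintained by the contract. This is purely illustrative: in the thesis the rules live in a Drools rule file (CO_TT.drl), and the function name and payment threshold below are assumptions of this sketch.

```python
# Hypothetical sketch of a contractual rule: evaluate a payTow interaction
# against the contract's facts and reject suspiciously large payments.
SUSPICIOUS_LIMIT = 1000.0  # assumed threshold, for illustration only

def evaluate_pay_tow(facts, payment_amount):
    """Return (allowed, updated_facts) for a payTow interaction."""
    if payment_amount > SUSPICIOUS_LIMIT:
        # Rule violated: reject the interaction and leave the facts untouched.
        return False, facts
    # Rules may also update the facts maintained by the contract.
    updated = dict(facts)
    updated["totalPaid"] = facts.get("totalPaid", 0.0) + payment_amount
    return True, updated


facts = {"totalPaid": 0.0}
allowed, facts = evaluate_pay_tow(facts, 250.0)     # normal payment
rejected, facts = evaluate_pay_tow(facts, 50000.0)  # suspicious payment
```

The second call is rejected and leaves the contract's facts unchanged, illustrating how rules both guard interactions and maintain contract state.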
Contracts in the organisation capture an explicit connection between two services. The requirement for such explicit connections among components is highlighted in the literature [231-235]. For example, Confract [234, 236] is a framework to build hierarchical software components; its authors highlighted the importance of interface and composition contracts. In order to ensure conformance among off-the-shelf software components, a contract-based service composition theory was proposed by Bernardi et al. [235]. This requirement also exists
in service orchestration. A service orchestration solution should not be treated as a mere
collection of services wired together in terms of a process structure/flow. The relationships
among the services in terms of responsibilities, permissions and obligations need to be properly
represented. The processes need to be grounded upon such a well-described structure that
represents the service relationships. Otherwise, the runtime adaptations can be error-prone,
leading to non-compliant processes. Furthermore, the runtime interactions may be detrimental to
meeting the goals as expected by the service aggregator. Therefore, it is important that a service
orchestration approach treats these service relationships as first class entities.
While ROAD provides a system theoretic approach to defining an adaptable organisational
structure, there is a lack of process support to coordinate the activities within the organisation.
Business process support is significant for an organisation to function and achieve its business
goals in an efficient manner. In particular, if ROAD concepts are to be applied in a service
orchestration context, support for business process modelling and enactment is crucial.
Therefore, in this thesis we further extend ROAD to allow the definition of business processes over the adaptable organisational structure.
4.1.2 Processes
Processes represent a way to achieve a business goal. In a service composition, business goals
are defined by the aggregator mainly relative to the demands of its service consumers. For
example, RoSAS defines business processes to efficiently provide roadside assistance relative to
the demands of its customers. Furthermore, such processes are used to coordinate the offerings
of service providers. For example, towing, repairing, taxi-ing and case-handling need to be
coordinated using business processes.
These business processes need to be adaptable to cater for runtime error handling,
optimisation and change requirements. In a service oriented environment where a service
execution is distributed and autonomous, the adaptability is a must for the survival of the
aggregator’s business. The adaptability needs to be improved by carefully designing the service
orchestration. The specific properties of service oriented computing (SOC), such as the
autonomy of collaborating services and the loose-coupling between services, need to be
reflected in the process support.
To make the proposed approach suitable for modern uses of service orchestration, the
challenges mentioned in Chapter 2 need to be addressed in defining processes. For example,
SaaS requires the support for native multi-tenancy, where a single application instance has to
serve multiple tenants [45, 47, 89]. The idea of grounding processes upon an organisational
structure is complementary to these requirements. The common basis provided by the
organisational structure can be used to define multiple and varying business processes at
runtime to meet the varied requirements of different consumer groups. Apart from the
adaptability of processes, the structure of the composition is also subject to change. Therefore,
the adaptability provided by the organisational structure is also equally important. In addition,
the modularity provided by the explicit representation of service relationships allows breaking
down the complete set of service interactions of a service orchestration into smaller manageable
units (Section 4.8.3). Such a modular specification limits the complexity that can be potentially
introduced by the many runtime modifications to business processes.
This thesis introduces several new constructs to the ROAD meta-model [81] to provide
process support. These new constructs are added with two main objectives:
1. To make the ROAD organisational structure better suited to defining adaptable
service orchestrations. Consequently, the necessary refinements to the existing
ROAD meta-model have been made.
2. To provide process support and address the requirements of service orchestration.
Consequently, new concepts and methodologies for service orchestration modelling
and enactment have been introduced.
We identify multiple layers of a service orchestration’s design, as shown in Figure 4-6. The
top three layers, i.e., Process, Behaviour and Task, are the new additions while the bottom three
layers, i.e., Contracts, Roles and Players, already exist in the existing ROAD framework and are
used with refinements in this work to make them better support the top three layers.
Starting from the bottom, the Players layer represents a set of concrete services that are ready
to participate in the business collaboration. For example, this includes the Web services
provided by Garage and Tow-Truck chains, and the Web service client applications installed on Motorists’ mobile phones or cars’ on-board emergency alert systems. The Roles layer captures the roles of the organisation that need to be filled at runtime by the above
mentioned players. On top of the Roles layer, the relationships among roles are captured via the
Contracts layer. The Contracts layer and Roles layer together form the organisational structure
as discussed above. A contract between two roles defines the interactions and mutual
obligations. The Task layer is grounded upon these interactions defined in contracts. A task
captures an outsourced activity that should be performed by a role-player. Necessary
messages/data resulting from contractual interactions are used to perform tasks. Furthermore,
task executions can result in more contractual interactions. The coordination or the ordering
among tasks is specified in the Behaviour layer, which defines units of behaviour (i.e., behaviour units) in the organisation. The Behaviour layer captures the common behaviours as
behaviour units and defines them in a reusable manner. The Process layer shares and reuses
these behaviour units to define multiple business processes to achieve different business goals
or to address the different requirements of customer groups.
Figure 4-6. Layers of a Serendip Orchestration Design
In the following sections, we will describe these layers in detail in terms of the underlying concepts, i.e., Loosely-coupled Tasks, Behaviour-based Processes, Two-tier Constraints and Interaction Membrane. Understanding these concepts is fundamental to comprehending the Serendip service orchestration approach.
4.2 Loosely-coupled Tasks
In an organisation, tasks are defined in roles. A task is a description of an atomic activity that
should be performed on behalf of the organisation. One or more tasks could be defined in a role.
If bound to the organisation, a role-player is obliged to perform the tasks defined in its
corresponding role. Therefore, the task is defined by the organisation and yet the execution is
always external to the organisation5. This relationship between the Role, Player and Task is
captured by the meta-model presented in Figure 4-7. Example roles and tasks of the case study are the Case-Officer analysing roadside assistance requests, the Tow-Truck performing the task ‘tow’, the Garage performing the task ‘repair’ and so on. Listing 4-3 shows the task definitions of the role CO (Case-Officer). Any player bound to the role CO should be able to perform all the tasks shown6. The clause playedBy refers to an identifier of the player binding of a case officer player, as shown in Listing 4-1.
5 Here the organisation refers to the composite, rather than to a business organisation. A business
organisation may have internal services (players) deployed to play the roles of a composite. But from the composite (i.e. the organisation) point of view these players and their executions are external.
6 A player may be another service composite that decomposes the tasks among multiple roles; see Section 4.8.1 for more information.
Figure 4-7. Meta-model: Tasks, Roles and Players
Listing 4-3. Task Definitions of Role Case-Officer
Role CO is a 'Case Officer' playedBy copb {
    Task tAnalyze { … }  //To analyse the assistance requests
    Task tPayGR  { … }  //To pay the garage
    Task tPayTT  { … }  //To pay the tow-truck
}
4.2.1 Task Dependencies
The processes in an organisation coordinate the execution of tasks. This coordination is required to enforce the dependencies among tasks, and to determine the sequence of task executions. For example, the tPayTT task may not be initiated until the tTow task is complete. Therefore, the tPayTT and tTow tasks exhibit a dependency as required by the RoSAS business.
Most of the existing orchestration approaches over-specify task dependencies, resulting in
tightly coupled tasks in a business process [73]. For example, in WS-BPEL, the task/activity
ordering is block-structured. Tasks are directly related to each other, which leads to static and brittle service composites [106]. Such tightly coupled, rigid structures can cause unnecessary
restrictions if the task dependencies are to be modified at runtime, especially for already
running process instances. Software engineers have to deal with the tight couplings among tasks
to introduce new dependencies and to modify the existing dependencies. It should be noted that
SOA is inherently a loosely-coupled architecture [41]. The distributed software systems or
services that perform these tasks are connected to each other in a loosely coupled manner. In
this context, tightly coupled task dependencies in a service orchestration prevent delivering the true benefits of such a loosely-coupled architecture.
The tasks are executed by the concrete services (role-players). Sufficient dependencies need
to be captured among these execution steps. For example, the two tasks tTow and tPayTT should be carried out by the Tow-Truck and the Case-Officer respectively. In this example, what prevents the
task tPayTT from being carried out is the fact that towing has not been done. When this
scenario is realised with imperative languages such as WS-BPEL, the two tasks tPayTT and
tTow are placed in a sequence, where tPayTT succeeds tTow as shown in Figure 4-8(a). This
seems to be correct as tPayTT should not be initiated before tTow is carried out. Nevertheless,
such direct task linkage creates tight-coupling between the two tasks during the execution. Such
tight-coupling can unnecessarily restrict runtime modifications that require task dependency
changes. For example, an ad-hoc deviation of a process may require that tPayTT is carried out due to a special order from the Case-Officer, rather than strictly after the execution of task tTow.
Figure 4-8. The Loose-coupling Between Tasks
In fact, what holds back the execution of the tPayTT task is the non-fulfilment of its pre-
conditions. In this case, the knowledge that the towing has been successfully done is the pre-
condition that needs to be fulfilled in order to execute the task tPayTT, not the completion of
task tTow. The pre-conditions of task tPayTT and the completion of task tTow are two separate things, and the process modelling approach should not force such misinterpretations. The
practice of tight coupling among tasks as exhibited in workflows and block structured processes
can lead to such misinterpretations.
As a solution, instead of having such a direct linkage, it is important to have an indirect
linkage between the tasks as shown in Figure 4-8(b). In this example, the post-condition of tTow
may act as the pre-condition of tPayTT. Alternatively, a post-condition of a different task may
trigger the required pre-condition initiating the tPayTT. For example, a bonus payment could be
made without requiring a tTow activity at all. The pre-condition of a task is a property of the task itself. Consequently, tPayTT has the flexibility to continuously adapt its pre-conditions locally.
What’s more, the task dependencies in a service orchestration are not always simple. There
can be complex dependencies as depicted by Figure 4-9. The imperative approaches have used
gateways or decision points [15, 237] to formulate these complex dependencies to support
various workflow patterns [16, 29]. However, again, such connectors are also directly linked to
the tasks. A runtime modification to those connectors can also become tedious in the same way
as is the case for directly linked tasks. Obviously, orchestration modelling should allow these complex dependencies to be adequately expressed, but without over-specification. Let us use the term minimum task dependency to refer to task dependency details that provide adequate information about dependencies without over-specification.
Figure 4-9. Complex Dependencies among Tasks
4.2.2 Events and Event Patterns
In order to capture the minimum task dependency, avoiding rigidity while still allowing complex task dependencies to be captured, we use event patterns to specify the pre- and post-
conditions of tasks. An event here is a passive element [28], which is triggered as a result of the
execution of a task. This makes the event a soft-link between tasks.
The usefulness of events has been extensively explored in the past, in areas ranging from database systems to enterprise application design [238-247]. Hence the concept of an event is well understood. A task may be dynamically subscribed to an event or a pattern of events. Furthermore, tasks can be unsubscribed from events at runtime. The post-conditions of a task may be dynamically modified to generate new events or to remove unwanted events. The manner in which tasks are executed, such as the quality of execution and timeliness, may determine the patterns of events that are triggered. For example, a successful towing may lead to triggering the event eTowSuccess, while a failure may lead to triggering the event eTowFailed.
Tasks can be seen as both subscribers and publishers of events. Tasks are subscribers because they listen to events. Tasks are publishers because they trigger events upon task execution. For example, the pre-event pattern of task tTow is (eTowReqd * eDestinationKnown)7,8. This means the triggering of the events eTowReqd and eDestinationKnown will lead to the execution of task tTow as defined in role TT. Similarly, the post-events can also be specified as an event pattern. For example, tTow has the post-event pattern (eTowSuccess ^ eTowFailed), which specifies that the execution of task tTow leads to the triggering of either the eTowSuccess or the eTowFailed event. The dependencies of task tTow, defined in role TT, are specified in SerendipLang (see Appendix A) as shown in Listing 4-4.
7 As a naming convention, the prefixes e, m, t, b, p will be used for events, messages, tasks, behaviour
units and process definitions respectively. 8 Event patterns use the following notation: * = AND, | = OR, ^ = XOR.
Listing 4-4. Task tTow of Role TT
TaskRef TT.tTow {
    InitOn "eTowReqd * eDestinationKnown";
    Triggers "eTowSuccess ^ eTowFailed";
}
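The way such a pattern gates task execution can be illustrated by evaluating the pattern against the set of events triggered so far. The Python sketch below is a simplified stand-in for Serendip's own pattern evaluation; mapping * to Python's & (with | and ^ used as-is) is an assumption of this sketch.

```python
import re

# Illustrative sketch: a pre-event pattern such as "eTowReqd * eDestinationKnown"
# holds once the required events have been triggered (* = AND, | = OR, ^ = XOR).
def pattern_holds(pattern, triggered_events):
    # Replace each event name with True/False depending on whether it has
    # been triggered, then evaluate the resulting boolean expression.
    expr = re.sub(r"\be\w+\b",
                  lambda m: str(m.group(0) in triggered_events),
                  pattern).replace("*", "&")
    return bool(eval(expr))


# tTow may be initiated only after both of its pre-condition events occur.
ready = pattern_holds("eTowReqd * eDestinationKnown",
                      {"eTowReqd", "eDestinationKnown"})
not_ready = pattern_holds("eTowReqd * eDestinationKnown", {"eTowReqd"})
```

The same function also covers post-event patterns such as (eTowSuccess ^ eTowFailed), where exactly one of the two events is expected to have been triggered.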
Another task, such as tPayTT (to make a payment to the Tow-Truck), can subscribe to the event eTowSuccess. In other words, the initiating dependency is specified via the event eTowSuccess. The description of tPayTT is given in Listing 4-5. It shows that the actual dependency for carrying out the payment is the knowledge that towing has been successfully completed. This knowledge is represented within the composition by eTowSuccess, triggered by the tTow task. Note that this knowledge can also originate from another task (if that is the business requirement). For example, the eTowSuccess event could be triggered by a task executed by the Garage when the car is towed to the Garage. Irrespective of the source, events represent certain knowledge acquired by the organisation. It is the knowledge available to the organisation that initiates further execution of tasks. In this sense, events are independent of the tasks that trigger them and the tasks that refer to them, as illustrated by Figure 4-10. Events are related to tasks via the pre- and post-conditions specified in tasks, as shown in the meta-model presented in Figure 4-11.
Listing 4-5. Task tPayTT of Role CO
TaskRef CO.tPayTT {
    InitOn "eTowSuccess";
    Triggers "eTTPaid";
}
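The semantics of these event patterns can be illustrated with a small evaluator. The following Python sketch (illustrative only, not part of Serendip) evaluates a pattern string against the set of events that have occurred so far, using the notation of footnote 8 (* = AND, | = OR, ^ = XOR), assuming equal precedence and left associativity:

```python
import re

def evaluate_pattern(pattern, occurred):
    """Evaluate an event pattern such as "eTowReqd * eDestinationKnown"
    against the set of events that have occurred so far.
    Operators: '*' = AND, '|' = OR, '^' = XOR (equal precedence,
    left-associative); parentheses are supported."""
    tokens = re.findall(r"[()*|^]|\w+", pattern)
    pos = 0

    def parse_atom():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1                     # skip '('
            value = parse_expr()
            pos += 1                     # skip ')'
            return value
        name = tokens[pos]               # an event name
        pos += 1
        return name in occurred

    def parse_expr():
        nonlocal pos
        value = parse_atom()
        while pos < len(tokens) and tokens[pos] in "*|^":
            op = tokens[pos]
            pos += 1
            rhs = parse_atom()
            if op == "*":
                value = value and rhs    # AND
            elif op == "|":
                value = value or rhs     # OR
            else:
                value = value != rhs     # XOR
        return value

    return parse_expr()
```

For instance, the pre-event pattern of tTow holds only once both eTowReqd and eDestinationKnown have been triggered, while its post-event pattern (eTowSuccess ^ eTowFailed) holds when exactly one of the two outcome events has occurred.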
Figure 4-10. Events Independent of Where They Originated and Referenced
Figure 4-11. Meta-model: Events and Tasks
It is important to note that in this work, the concept of the event has been used with two
important properties, i.e., events are both Publisher-independent and Subscriber-independent.
1. Publisher-independent: An event represents knowledge or a fact irrespective of its
source, e.g., the knowledge “car towing is successful” denoted by eTowSuccess is
independent of which task triggers that event.
2. Subscriber-independent: An event is not intended for a particular subscriber. It can
be subscribed to by any interested entity, e.g., event eTowSuccess is not published
specifically for the task tPayTT. Any task can subscribe to this event by specifying the
event as part of its pre-conditions.
4.2.3 Support for Dynamic Modifications
The publisher-independent and subscriber-independent usage of events improves the loosely-
coupled nature of tasks. Such event-based loose-coupling brings the following main
advantages for process instance level adaptations.
1. Dynamic Task Insertion/Deletion: New tasks can be dynamically inserted and subscribed to
existing events, as shown in Figure 4-12(a). This is an advantage because there is no need to
modify the existing tasks; only the new task needs to specify the event pattern to which it
subscribes. Such agility is important for runtime modifications. For example, RoSAS might
insert a new task tNofityTowFailureToCustomer, which subscribes to event eTowFailed to
handle the exception and send a notification to the member. In addition, many other
exception handling tasks can simultaneously subscribe to the same event eTowFailed.
Conversely, when no longer required, task subscriptions can be removed and tasks can be
deleted without much effect on the rest of the system, as shown in Figure 4-12(b). If
tNofityTowFailureToCustomer is no longer required, for example, the task along with its
subscription can be deleted on the fly.
2. Dynamic Event Insertion/Deletion: Events can be dynamically included in the pre/post-event
patterns of existing tasks, as shown in Figure 4-12(c). New events are usually included
when the existing events are semantically insufficient to initiate the task execution. For
example, upon completion of task tPayTT the event eTTPaid is triggered to mark the fact
that the Tow-Truck has been paid. Suppose that a task tRecordCreditPay needs to be
initiated only when the payment has been made using credit; subscribing to the existing
event eTTPaid is not semantically sufficient to initiate tRecordCreditPay. Thus, a new event
eTTPaidByCredit, which is triggered only when the payment is made using credit, needs to
be inserted into the existing post-event pattern of the task tPayTT. Furthermore, existing
and obsolete events can be deleted by modifying the pre/post-event patterns, as shown in
Figure 4-12(d). For instance, if credit-based payments are no longer supported then the
event eTTPaidByCredit can be removed (at runtime) from the post-event pattern of the task
tPayTT.
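Both modification types can be illustrated with a minimal, hypothetical publish/subscribe registry (a sketch only; the actual Serendip runtime support is described in Chapter 5). Pre-event patterns are simplified here to AND-sets of events:

```python
class EventDrivenOrchestrator:
    """Tasks subscribe to events via their pre-conditions; inserting or
    deleting a task never requires modifying the existing tasks."""

    def __init__(self):
        self.tasks = {}        # task name -> set of pre-events (AND pattern)
        self.occurred = set()  # events triggered so far

    def add_task(self, name, init_on):
        """Dynamic task insertion: only the new task's subscription is added."""
        self.tasks[name] = set(init_on)

    def remove_task(self, name):
        """Dynamic task deletion: the subscription is removed on the fly."""
        self.tasks.pop(name, None)

    def trigger(self, event):
        """Record an event and return the tasks whose pre-conditions now hold."""
        self.occurred.add(event)
        return sorted(t for t, pre in self.tasks.items() if pre <= self.occurred)

orchestrator = EventDrivenOrchestrator()
orchestrator.add_task("tTow", ["eTowReqd", "eDestinationKnown"])
# Runtime insertion of an exception-handling task; tTow is left untouched.
orchestrator.add_task("tNofityTowFailureToCustomer", ["eTowFailed"])
enabled = orchestrator.trigger("eTowFailed")
```

Inserting tNofityTowFailureToCustomer with the pre-event eTowFailed enables it as soon as eTowFailed is triggered, without touching the definition of tTow.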
Figure 4-12. Advantages of Loose-coupling
Naturally, there must be proper support from the execution engine/runtime to carry out
such alterations on already running instances (Chapter 5). However, apart from the runtime
capabilities, the modelling language should provide ready support for such modifications by
allowing loose-coupling among tasks. The process modelling should not unnecessarily impose
rigidity. As shown above, the loose-coupling achieved by having events as soft-links among
tasks avoids such rigidity.
Soft-linking via patterns of events also helps to limit the number of tasks that are modified
upon a change requirement. This is especially important when carrying out modifications on
running process instances, where the states of the tasks determine the legitimacy of a
modification. Understandably, modifying an already completed task is meaningless. Tasks that
are already under execution may allow modifications depending on the properties that are
modified (Section 6.5). However, it is possible to modify a future task, or the post-conditions
of tasks that are still under execution, in a process instance. Therefore, if the number of tasks
that need to be altered for a given change is high (due to limitations of the language such as
direct linkage among tasks), there is a higher probability that the change becomes invalid due
to the involvement of an already completed task or a task under execution. Such an inability to
carry out otherwise possible modifications would hinder the flexibility and thereby the business
agility. The publisher- and subscriber-independent properties of events help to avoid direct
linkage among tasks and isolate the modifications as much as possible.
To elaborate, consider the scenario where a new task tNofityTowFailureToCustomer needs to
be dynamically inserted into a running process instance to handle an exception. This task has a
dependency on the existing task tTow. Suppose that tTow is already under execution or
completed for this process instance. Due to the use of events as soft-links and the publisher-
independent nature of events, the new task tNofityTowFailureToCustomer can be dynamically
inserted without requiring any modifications to tTow. All that needs to be done is to let the task
tNofityTowFailureToCustomer subscribe to the event eTowFailed, which is both publisher- and
subscriber-independent. In contrast, if the above two tasks were directly linked, it might be
necessary to modify tTow to specify what the next possible task is, which could be problematic
as tTow is already completed or under execution.
In conclusion, loosely-coupled tasks improve the flexibility required for modifying running
process instances. The loose-coupling has been achieved by using events as soft-links among
tasks. The publisher- and subscriber-independent nature of events makes dynamic
insertion/deletion of tasks and events possible at the process instance level. In Serendip, the
flexibility offered by such loosely-coupled tasks is used to describe adaptable organisational
behaviours. The Serendip processes are in turn defined using these adaptable behaviours.
4.3 Behaviour-based Processes
Service aggregators generate value by bringing service providers and consumers closer,
creating a service eco-system [5]. While the service providers and consumers benefit from such
a service eco-system, an aggregator tries to increase its revenue through various custodianships.
Aggregators need to address the varying requirements of different service consumer groups to
exploit new business opportunities and thereby increase their revenue. Given an already
aggregated composition of services, the aggregator always attempts to maximise profit by
increasing the reuse of its system, including the partner services. For example, roadside
assistance can be delivered as multiple packages with different features using the same
composition of services, as mentioned in Chapter 2.
Modular programming is a concept whereby a program is segregated into discrete units or
modules, each of which maintains responsibility for specific functions [248]. One of the main
purposes of modularity is to improve code reuse. Current service orchestration approaches
provide little support for increased reuse [249]. Usually a process is defined by wiring the
available services to suit the business requirements of a customer or customer group.
Redefining separate service orchestrations from the ground up to suit the requirements of
different consumer groups can be time consuming and costly. Such inefficiencies may
negatively impact the prospect of quickly exploiting a market opportunity, which may appear
and disappear in a short period of time. Reduced time-to-market is important in online
marketplaces. Hence, the ability to reuse existing services as well as already proven service
orchestrations can increase business agility [250].
Modularisation offers a better approach for increasing reusability in service orchestration.
Rather than specifying lengthy and monolithic coordination, it is advisable to define small and
reusable units of coordination. Larger, more meaningful and complete business processes can
later be constructed from such small units to address the demands of different service consumer
groups.
This idea of modularisation has been explored in the past. Khalaf [251] attempts to create
multiple fragments of a BPEL process while maintaining the operational semantics. However,
this work does not actually promote the reuse of already proven service orchestrations; rather,
it attempts to create several fragments. In contrast, Adams et al. [146] attempt to improve the
reuse of a fragmented process by defining a self-contained repertoire of sub-processes to handle
exceptions in workflows via Worklets. A Worklet is a self-contained (YAWL [103]) process
fragment, which is selected and executed as a replacement for a particular task of a workflow.
The worklets (process fragments) can be reused across multiple distinct tasks [143]. Although
the use of process fragments is primarily for substituting selected tasks of a workflow, it has
made an important contribution in terms of promoting process modularisation and reuse. This
leads to the idea of defining processes that are composed of process fragments. Instead of being
used as replacements, the well-proven coordination logic of a service orchestration can be
defined as multiple reusable, self-contained coordination units, and new processes can be
defined out of them. Such an arrangement can provide the much required agility and abstraction
for a service orchestration.
4.3.1 Organisational Behaviour
In Serendip, the complete behaviour of the organisation is represented as a collection of
relatively independent units called behaviour units. A behaviour unit provides the modularity of
grouping related tasks together. For example, towing, repairing and taxi ordering are behaviour
units of the RoSAS organisation that can be reused to fulfil the demands of multiple consumer
groups.
As shown in Listing 4-6, the behaviour bTowing consists of tasks tTow and tPayTT. The
pre- and post-event patterns of these tasks define their dependencies, as described in
Section 4.2. As many behaviour units as required can be defined on top of the organisation
structure, during both design-time and runtime.
Listing 4-6. A Sample Behaviour Description, bTowing
Behavior bTowing {
    TaskRef TT.tTow {
        InitOn "eTowReqd * eDestinationKnown";
        Triggers "eTowSuccess ^ eTowFailed";
    }
    TaskRef CO.tPayTT {
        InitOn "eTowSuccess";
        Triggers "eTTPaid";
    }
}
The organisation acts as a repository for defining and maintaining a pool of such behaviour
units that will be referenced or used by different business process definitions. These behaviour
units define how the underlying service infrastructure should be coordinated. A business process
is a collection of behaviour units tailored towards fulfilling a business goal of a consumer or a
consumer group. These business goals may be formulated to satisfy the business requirements
of a particular service consumer group within the aggregator’s business model. For example,
some users of RoSAS may prefer to have additional services such as taxi-drop-offs,
accommodation and paramedic services for an extra cost, while others may prefer to have basic
towing and repairing services for a lower cost. Subsequently, depending on the requirements,
behaviour units can be reused across multiple business processes.
Figure 4-13. Organisational Behaviours are Reused across Process Definitions
This means that multiple business processes can be defined to achieve multiple business goals
upon the same organisational structure, as shown in Figure 4-13. This brings the opportunity of
addressing the requirements of different consumer groups in a time- and cost-effective manner,
compared to building different service orchestrations from the ground up. The following
properties of a behaviour unit underlie the above-mentioned advantages:
1. A behaviour unit defines an accepted organisational behaviour by grouping related tasks in
a declarative manner. The tasks of a behaviour unit are executed without any direct and
imperative ordering among them, but solely based on the satisfaction of their pre-
conditions.
2. A behaviour unit is reusable. A single behaviour unit may be used across multiple process
definitions.
3. A behaviour unit is self-contained and can express a complete well-identified behaviour of
an organisation independent of the processes that refer to it. The behaviour captures both the
tasks (Section 4.2) and the constraints (Section 4.4) among them.
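Property 1 (declarative, pre-condition-driven execution) can be sketched as a simple fixed-point loop. The representation below is a simplification (each task has an AND-set of pre-events and a single post-event), not the Serendip engine itself:

```python
def run_behaviour(tasks, occurred):
    """Execute a behaviour unit declaratively: there is no imperative
    ordering; a task runs as soon as all events in its pre-condition
    have occurred, and then triggers its post-event.
    tasks: {task name: (set of pre-events, post-event)}"""
    done, progressed = [], True
    while progressed:
        progressed = False
        for name, (pre, post) in tasks.items():
            if name not in done and pre <= occurred:
                done.append(name)      # "execute" the task
                occurred.add(post)     # trigger its post-event
                progressed = True
    return done

# bTowing, simplified: the ordering tTow -> tPayTT emerges purely from
# the event dependencies, not from any explicit control flow.
order = run_behaviour(
    {"tTow": ({"eTowReqd", "eDestinationKnown"}, "eTowSuccess"),
     "tPayTT": ({"eTowSuccess"}, "eTTPaid")},
    {"eTowReqd", "eDestinationKnown"})
```

Note how tPayTT runs second only because its pre-event eTowSuccess is triggered by tTow, mirroring Listing 4-6.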
4.3.2 Process Definitions
In Serendip, a Process Definition consists of a collection of references to existing behaviour
units of the organisation. As mentioned above, behaviour units are declarative collections of
tasks and do not have a notion of start and end. Nevertheless, a process definition should have
a start and an end so that process instances can be initiated and terminated. Therefore, apart
from a collection of references to behaviour units, a process definition should also specify the
start and end conditions of the process. This is important for two reasons.
1. It gives the designer the ability to specify different start/end conditions for different process
definitions although they may share the same behaviour units.
2. The enactment engine can allocate and de-allocate resources for process instances
depending on the fulfilment of these start and end conditions.
As shown in Listing 4-7, the sample process definition pdGold specifies the Condition of
Start (CoS) and Condition of Termination (CoT) along with a number of references
(BehaviorRef) to existing behaviour units. According to the definition a new process will start
when a complaint or request is received as indicated by the event eComplainRcvdGold. The
process will continue as specified by the referenced behaviour units, bComplaining, bTowing,
bRepairing and bTaxiProviding. The process will terminate when the member is notified of the
completion of handling the case as indicated by the event eMMNotifDone.
Listing 4-7. A Sample Process Definition, pdGold
ProcessDefinition pdGold {
    CoS "eComplainRcvdGold";
    CoT "eMMNotifDone";
    BehaviorRef bComplaining;
    BehaviorRef bTowing;
    BehaviorRef bRepairing;
    BehaviorRef bTaxiProviding;
}
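The structure of Listing 4-7 can be mirrored with a few data types to make the reuse relationship explicit. In the following Python sketch, names such as eComplainRcvdSilv are hypothetical; the point is that two process definitions reference the very same bTowing object rather than duplicating it:

```python
from dataclasses import dataclass

@dataclass
class TaskRef:
    role: str
    name: str
    init_on: str   # pre-event pattern
    triggers: str  # post-event pattern

@dataclass
class BehaviourUnit:
    name: str
    tasks: list

@dataclass
class ProcessDefinition:
    name: str
    cos: str             # Condition of Start
    cot: str             # Condition of Termination
    behaviour_refs: list

# Behaviour units are defined once on the organisation...
b_towing = BehaviourUnit("bTowing", [
    TaskRef("TT", "tTow", "eTowReqd * eDestinationKnown",
            "eTowSuccess ^ eTowFailed"),
    TaskRef("CO", "tPayTT", "eTowSuccess", "eTTPaid"),
])

# ...and referenced (not copied) by several process definitions.
pd_gold = ProcessDefinition("pdGold", "eComplainRcvdGold", "eMMNotifDone",
                            [b_towing])
pd_silv = ProcessDefinition("pdSilv", "eComplainRcvdSilv", "eMMNotifDone",
                            [b_towing])
```

Because both definitions share one object, a patch to bTowing is automatically visible to every process definition that references it.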
Similarly, another process definition (see Listing 4-8) can be defined, sharing (some of) the
existing behaviour units. However, if the defined behaviours are not sufficient to achieve the
goal, new behaviour units can be defined on the organisation and referenced accordingly by the
new process definitions.
Listing 4-8. A Sample Process Definition, pdPlat
ProcessDefinition pdPlat {
    CoS "eComplainRcvdPlat";
    CoT "eMMNotifDone";
    BehaviorRef bComplaining;
    BehaviorRef bTowing;
    BehaviorRef bRepairing;
    BehaviorRef bTaxiProviding;
    BehaviorRef bAccommodationProviding;
}
The relationship between Process Definition, Behaviour Unit and Task is captured in the
meta-model shown in Figure 4-14. As shown, a behaviour unit captures a unit of
organisational behaviour by collecting a set of related tasks together. Process definitions refer
to and reuse a set of behaviour units to achieve a specific goal.
Figure 4-14. Meta-model: Process Definitions, Behaviour Units and Tasks
In summary, behaviour-based process modelling provides the following advantages.
1. Abstraction: Behaviour units provide an abstraction of underlying task complexities. A
behaviour unit may encapsulate one or more tasks and the dependencies among them.
Such an abstraction is important for realising high-level business requirements as a
feature-based customisation of a service composition, where a behaviour unit corresponds
to a feature. As such, features can be added to or removed from certain process definitions,
depending on user requirements, by adding and removing behaviour units.
2. Capturing Commonalities: Processes defined upon the same application or service
organisation can show many commonalities. Behaviour units facilitate the capture of
these commonalities by allowing many process definitions to reuse or share a single
behaviour unit.
3. Agile Modifications: Process definitions reference behaviour units. To change a feature
that appears in multiple process definitions, only a single behaviour unit needs to be
modified, instead of updating multiple process definitions repetitively. For example, if
the towing protocol needs to be patched, there is no need to repeatedly modify an array of
business process definitions that use towing. An update to the behaviour unit bTowing
automatically becomes available to all the process definitions that reference it.
Nevertheless, it should be noted that there are also some drawbacks associated with behaviour
reuse and sharing. For example, suppose the aggregator patches/modifies a particular behaviour
unit to facilitate a business requirement of a particular process definition. Since the same
behaviour is reused in different process definitions, such an upgrade or patch might potentially
harm the objectives of the other sharing process definitions. This could challenge the integrity
of the service composition and thereby the business. In addition, certain changes might be
specific to a particular process definition (for a particular consumer group). Since the same
behaviour unit is reused, such specific changes might not be possible. Therefore, despite the
benefits, the reuse of behaviours has some inherent drawbacks. In the next two sections (i.e.,
Sections 4.4 and 4.5), we discuss the solutions to these drawbacks of behaviour reuse.
4.4 Two-tier Constraints
A service orchestration is a coherent environment, where the service consumers and service
providers as business entities achieve their respective goals as integrated by a service
aggregator. However, changes are inevitable. Therefore, at runtime, the organisational
behaviours, as described by behaviour units in Serendip, need to be altered to facilitate various
change requirements. Irrespective of the source, the runtime changes need to be carried out
both at the process definition level and at the process instance level [77, 252]. It is a challenge
for a service orchestration to maintain its integrity amidst such changes. Therefore, it is
important that the process modelling approach provides suitable measures to ensure that
changes are carried out within a safe boundary.
In this section we discuss the importance of such a boundary for safe modification and how
to define one without unnecessarily restricting the possible modifications.
4.4.1 The Boundary for a Safe Modification
A clearly defined boundary for safe modification not only safeguards the integrity of the
organisation or composition, but also helps to increase the confidence in applying changes.
Such confidence allows a service aggregator to further optimise its business processes without
being held back by possible violations of business constraints. In a service composition,
optimisations need to be carried out to satisfy the business requirements of the service
consumers and service providers.
For example, some service consumers may need rapid completion of towing, repairing and
other assistance. Certain assistance processes might need to deviate from the originally
specified time period depending on the severity of the accident and unforeseen customer
requirements. Fulfilling such requirements can increase the reputation of RoSAS as the service
aggregator, while an inability to cater for customer requirements may damage it. On the other
hand, the providers of bound services are also autonomous businesses and have their own
limits in delivering services. Furthermore, the collaborations need to maintain certain temporal
and causal dependencies between tasks. An adaptation from a service consumer's point of view
may break certain dependencies, hindering the collaborations and impacting the service
delivery in the long run.
Similarly, an adaptation requested by one of the collaborating services might also impact the
customer goals. Thus the service aggregator needs to support adaptability in a careful manner.
The possible adaptations need to be facilitated, but without compromising the integrity of the
composite from the perspectives of the consumers, partner services and aggregator.
Consequently, the problem of supporting adaptations should be considered as finding a solution
to meet customer demands within a safe boundary, as symbolised in Figure 4-15. Therefore, we
define the boundary for safe modification as “a set of constraints formulated from the business
requirements of the aggregator as well as the consumer and partner services of a service
orchestration to avoid invalid modifications”.
Figure 4-15. Boundary for Safe Modification
The use of constraints in business process modelling has been extensively explored in the
past. Condec [73, 150] is a constraint-based process modelling language that uses constraints
for business process modelling. In Condec, there is no specific flow of activities; instead, the
activities are carried out as long as the execution adheres to the defined constraint model. The
constraint model defines the constraints that should not be violated at any given time. This
ensures integrity while providing increased flexibility for the runtime execution. Regev et
al. [76] similarly see flexibility as the ability to change without losing identity. The identity of
a business is defined via a set of ‘norms’ and ‘beliefs’, where a norm is a feature that remains
relatively stable and a belief is a point of view of a particular observer [253].
We also agree that defining such constraints in modelling business processes is essential to
ensure their integrity. However, this thesis moves a step further in considering how constraints
should be defined in an orchestration to achieve maximum flexibility without unnecessarily
restricting possible runtime changes. We propose defining constraints at two different levels of
a shared service composition and thereby identifying the minimal set of constraints applicable
to a given modification.
4.4.2 The Minimal Set of Constraints
A common practice is to globally associate constraints with a process model [76, 110, 139]. A
process model is valid as long as a set of constraints is not violated. However, this thesis
discourages setting up a global set of constraints in a service orchestration, especially in shared
online marketplaces where multiple business requirements are achieved using the same shared
application infrastructure. Instead, the constraints should be specified within a particular scope.
We use the modularity provided by behaviour units and process definitions as the basis for
specifying constraints. When a modification is proposed, only the applicable minimal set of
constraints is considered to check the integrity. As shown later in this section, the minimal set
of constraints is determined by considering the linkage between behaviour units and process
definitions.
Unlike a global set of constraints, which is the same for all modifications, a well-scoped
constraint specification only uses the minimal set of relevant constraints. It follows that,
compared to a global specification of constraints, the probability of restricting an otherwise
possible modification is lower with well-scoped constraints, due to the smaller number of
constraints considered. With this intention we define constraints at two different levels in a
Serendip orchestration, i.e., the Behaviour level and the Process level.
Behaviour-level constraints are defined in behaviour units, e.g., in bTowing, to
safeguard the collaboration aspects expected of towing. A collaboration represented
by a behaviour unit can have certain properties or constraints that should not be
violated by runtime modifications to the behaviour unit. A sample constraint
(bTowing_c1) is shown in Listing 4-9, which specifies that every car towing should
eventually be followed by a payment. In other words, if event eTowSuccess has occurred
it should eventually be followed by the event eTTPaid. Consequently, each process
definition that refers to bTowing should respect this constraint. Note that the constraints
are specified in the TPN-TCTL [254, 255] language, which is a language to specify
TCTL properties of Time Petri-Nets (TPN). Further details about the constraint
language used can be found in [256, 257]. This language has been adopted due to its
expressiveness in specifying causal relationships between two events, and the
available tool support [258].
Process-level constraints are defined to safeguard the goals represented by a process
definition. A process definition addresses a requirement of a customer or a customer
group. Therefore, these constraints reflect the goals or norms expected by the customer.
Process constraints are defined within a process definition. A sample process-level
constraint is shown in Listing 4-10, which specifies that when a complaint is received,
it should eventually be followed by a notification to the customer. Since the constraint
is defined in pdGold, it is applicable only to Gold customers.
Figure 4-16 shows the relationship among constraints and the constituent process modelling
concepts, process definitions and behaviour units.
Figure 4-16. Meta-model: Types of Constraints
The relevancy of an applicable minimal set of constraints is determined by the linkage among
behaviour units and process definitions within the organisation. For a change in a behaviour unit
B, which is shared by n process definitions PDi (i=1, 2,3...n), the applicable minimal set of
constraints (CSmsc) is,
CSmsc = CSB ∪ (⋃i=1..n CSPDi)
Where,
CSB is the set of constraints defined in behaviour unit B.
CSPDi (i= 1,2,3…n) is the set of constraints defined in the ith process definition PDi that refers
to behaviour unit B.
Any applicable constraint set (CSmsc) is always a subset of the global set of constraints
(CSglobal), which comprises all the constraints defined over all the m behaviour units and k
process definitions, i.e., CSmsc ⊆ CSglobal.
CSglobal = (⋃i=1..k CSPDi) ∪ (⋃j=1..m CSBj)
Where,
CSPDi (i= 1,2,3…k) is the set of constraints defined in the ith process definition PDi.
CSBj (j= 1,2,3…m) is the set of constraints defined in the jth behaviour unit Bj.
Listing 4-9. Behaviour-level Constraints
Behavior bTowing {
    TaskRef ...;
    Constraint bTowing_c1: "(eTowSuccess>0) -> (eTTPaid>0)";
}
Listing 4-10. Process-level Constraints
ProcessDefinition pdGold {
    CoS "eComplainRcvdGold";
    CoT "eMMNotifDone";
    BehaviorRef ...;
    Constraint pdGold_c1: "(eComplainRcvdGold>0) -> (eMMNotifDone>0)";
}
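The intuition behind these leads-to constraints can be illustrated at the level of a completed event trace. Serendip itself verifies such properties by TPN-TCTL model checking (Section 6.4); the following is only a trace-level sketch:

```python
def eventually_follows(trace, cause, effect):
    """Check a leads-to property such as "(eTowSuccess>0) -> (eTTPaid>0)"
    over a completed event trace: every occurrence of `cause` must
    eventually be followed by an occurrence of `effect`."""
    for i, event in enumerate(trace):
        if event == cause and effect not in trace[i + 1:]:
            return False
    return True
```

A trace in which eTowSuccess occurs without a later eTTPaid violates bTowing_c1; a trace in which eTowSuccess never occurs satisfies the constraint vacuously.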
4.4.3 Benefits of Two-tier Constraints
To elaborate the benefits of having two-tier constraints, consider six constraints (c1, c2, …, c6)
defined within the scope of two process definitions (pdGold and pdPlat) and three behaviour
units (bRepairing, bTaxiProviding and bAccommodationProviding), as shown in Figure 4-17.
Assume a modification is proposed to pdGold due to a change requirement from the Gold
service consumer group to change the way the taxi assistance is provided. Consequently, the
behaviour unit bTaxiProviding needs to be modified. However, another process definition,
pdPlat, also uses the behaviour unit bTaxiProviding.
Figure 4-17. Process-collaboration Linkage in Constraint Specification
Therefore, the change in pdGold has an impact on the objectives of pdPlat. Consequently,
the applicable minimal set of constraints that needs to be considered in identifying the impact of
the modification includes all the constraints defined in bTaxiProviding, pdGold and pdPlat.
Therefore,
CSmsc = {c2,c5,c6}
CSglobal = {c1,c2,c3,c4,c5,c6}
There is no requirement to consider the other constraints, i.e., c1, c3 and c4. Hence, such a
scoping of constraints, and the explicit linkage between process definitions and behaviour units,
avoids unnecessary restrictions and considerations in modifications, in contrast to a global set
of constraints. Only the applicable minimal set of constraints (CSmsc) is considered, which is
always a subset of the global set of constraints (CSglobal).
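The computation of CSmsc follows directly from the formula in Section 4.4.2. The sketch below uses the scenario of Figure 4-17; the exact assignment of c1, c3 and c4 to the other behaviour units is assumed for illustration:

```python
def minimal_constraint_set(behaviour, process_defs):
    """CSmsc = CS_B united with the CS_PDi of every process definition
    PDi that references the modified behaviour unit B."""
    msc = set(behaviour["constraints"])
    for pd in process_defs:
        if behaviour["name"] in pd["behaviours"]:
            msc |= pd["constraints"]
    return msc

# bTaxiProviding is the behaviour unit being modified; it carries c2.
b_taxi = {"name": "bTaxiProviding", "constraints": {"c2"}}

process_defs = [
    {"name": "pdGold", "behaviours": {"bRepairing", "bTaxiProviding"},
     "constraints": {"c5"}},
    {"name": "pdPlat", "behaviours": {"bRepairing", "bTaxiProviding",
                                      "bAccommodationProviding"},
     "constraints": {"c6"}},
]

# Constraints of the other behaviour units (e.g. c1, c3, c4) are never
# consulted, because they do not belong to bTaxiProviding or to a
# process definition that references it.
cs_msc = minimal_constraint_set(b_taxi, process_defs)
```

The result matches the worked example: only c2, c5 and c6 need to be validated.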
The use of two-tier constraints also helps to identify the affected process definitions and
behaviour units, which aids the change impact analysis process. In the above example, the
process definitions pdPlat and pdGold and the behaviour unit bTaxiProviding are affected. The
other process definition pdSilv and the rest of the behaviour units are not affected. When a
violation is identified by impact analysis, a software engineer can pinpoint and act upon only
the exact sections that are affected in a large service orchestration. Possible actions would be to
discard the change or relax some constraints. Such a capability is possible due to the
consideration of the explicit linkage between process definitions and behaviour units in the
two-tier constraint validation process (Section 6.4).
4.5 Behaviour Specialisation
The concept of organisational behaviour modelling introduced in Section 4.3 can be used for
capturing the commonalities among business processes that are defined upon the same service
composition. Many processes can reuse the same behaviour unit to avoid redundancy of
specification. It also helps to reduce the effort of ground-up modelling of new processes in
addressing different consumer groups.
4.5.1 Variations in Organisational Behaviour
In general, we cannot assume that a defined behaviour unit can fully satisfy the expectations of
all consumer groups. In reality, there are likely to be slight variations in the organisational
behaviour expected of a behaviour unit by different customers or customer groups.
Furthermore, such variations can be unforeseen. Not reflecting these variations can make IT
processes incorrect or insufficient in capturing real-world business processes. For example,
although the towing behaviour is used across multiple processes, there can be some variations
in the way the actual towing dependencies should be specified to cater for the requirements of
different consumer groups. For platinum members (pdPlat), the tTow task may need to be
delayed until the Taxi picks the Motorist up, whereas for Silver members (pdSilv) there is no
such dependency as they are not offered the taxi service (bTaxiProviding) at all (see
Figure 4-13). Introducing the event-dependency eTaxiProvided for tTow in a common
behaviour unit bTowing would cause problems in executing a pdSilv process, as the event
eTaxiProvided would never be triggered. On the other hand, not introducing such a dependency
would not capture the true dependencies required by the process pdPlat.
One naive solution to this issue is to model two separate behaviour units (bTowingSilv, bTowingPlat) targeting these two business processes. The two behaviour units duplicate all the tasks associated with the towing behaviour, but with tTow having different pre-conditions, as shown in Figure 4-18. bTowingPlat specifies the pre-condition for tTow as “eTowReqd * eDestinationKnown * eTaxiProvided”, whereas bTowingSilv specifies “eTowReqd * eDestinationKnown”. The two process definitions pdPlat and pdSilv can then refer to the respective behaviour units.
Figure 4-18. Variations In Separate Behaviour Units (a) bTowingSilv (b) bTowingPlat
This solution, however, requires duplicating all the other tasks and dependencies in both behaviour units, merely because of a slight deviation in the pre-condition of a single task. Such duplication hinders the efficiency of performing modifications: both behaviour units now need to be modified upon a patch or an upgrade to the towing behaviour. Assuming that there are similar slight variations in other processes, such as pdGold, the number of duplicated behaviours can grow significantly each time a variation is required in a shared behaviour, undermining the ability to capture commonalities. Therefore, an improved solution is required that avoids unnecessary duplication while allowing such variations.
Figure 4-19. Meta-Model: Behaviour Specialisation
We further improve the behaviour-based process modelling (Section 4.3) with the Behaviour Specialisation feature, which supports such variations. This feature allows a behaviour unit (the child) to specialise another behaviour unit (the parent) by specifying additional tasks or overriding existing properties of tasks. This addition to the
meta-model is shown in Figure 4-19. This feature makes it possible to form a behaviour
hierarchy within the organisation. The new behaviour is called a child of the already defined
parent. Therefore, for the given example, the behaviour variation required by the pdPlat can be
addressed by extending or specialising the behaviour unit bTowing as shown in Listing 4-11.
Note that the InitOn property of the task tTow (of the child) is re-specified to override the parent’s InitOn property. As the child, bTowing2 inherits all the other non-specified tasks and properties from its parent, bTowing. The process definition pdPlat, which requires this specialised version of towing, can change its reference from bTowing to bTowing2, whilst the other process definitions (pdSilv and pdGold) can still refer to bTowing, as shown in Figure 4-20.
Listing 4-11. A Specialised Behaviour Unit Overriding a Property of a Task
Behaviour bTowing2 extends bTowing {
  TaskRef TT.tTow {
    InitOn "ePickupLocKnown * eDestinationKnown * eTaxiProvided"; //Overrides
  }
}
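The effect of such an overridden pre-condition can be illustrated with a minimal sketch. We assume here, as the thesis examples suggest, that “*” denotes conjunction over triggered events; the evaluator and the event sets below are illustrative only, not part of the Serendip implementation.

```python
# Illustrative sketch: evaluating an InitOn pre-condition against the set of
# events triggered so far, assuming '*' means conjunction (AND).

def init_on_satisfied(expression: str, triggered_events: set) -> bool:
    """Return True when every event named in a '*'-conjunction has fired."""
    required = [token.strip() for token in expression.split("*")]
    return all(event in triggered_events for event in required)

# The specialised child additionally waits for eTaxiProvided.
child_init_on = "ePickupLocKnown * eDestinationKnown * eTaxiProvided"

# In a silver-member run, eTaxiProvided is never triggered, so the overridden
# pre-condition blocks tTow, while the non-specialised one does not.
events = {"ePickupLocKnown", "eDestinationKnown"}
print(init_on_satisfied("ePickupLocKnown * eDestinationKnown", events))  # True
print(init_on_satisfied(child_init_on, events))                          # False
```

This makes concrete why the specialised pre-condition must live in a separate (child) behaviour unit: placed in the shared parent, it would block the processes that never trigger eTaxiProvided.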
Figure 4-20. Process Definitions Can Refer to Specialised Behaviour Units
Apart from overriding properties of tasks, new tasks can also be introduced. For example, if a
gold member is required to be alerted when the towing is complete, a new task tAlertTowDone
can be introduced by specialising the behaviour unit as shown in Listing 4-12.
Listing 4-12. A Specialised Behaviour Unit Introducing a New Task
Behavior bTowing3 extends bTowing {
  TaskRef CO.tAlertTowDone {
    InitOn "eTowSuccess";
    Triggers "eMemberTowAlerted";
  }
}
A behaviour unit can be either concrete or abstract. The above-mentioned behaviour units are concrete behaviour units, which fully specify their behaviour. In contrast, an abstract behaviour unit may not specify a complete behaviour, due to a lack of knowledge at design time. Hence, abstract behaviour units cannot be directly referenced by a process definition. The purpose of abstract definitions is
to act as a parent that captures the commonalities among a set of variations expected by the child behaviour units. For example, when a taxi is provided, a payment needs to be made. This payment can be made in various forms, e.g., using credit or debit. Assume that, at the time of defining the behaviour unit, the payment protocol is unknown. Tasks such as placing the taxi order and providing the taxi are common to all variations. However, a process definition should not refer to such a partially defined behaviour unit, as the payment details are missing. In this case, it is best to define the common tasks in an abstract parent behaviour unit, whilst the specialised payment procedures are specified in child behaviour units. Listing 4-13 shows a single abstract parent behaviour unit bTaxiProviding capturing the commonalities of taxi providing, whilst two child behaviour units extend the parent to specify the variations. Marking the parent behaviour unit abstract disallows a process definition from referring to it.
Listing 4-13. An Abstract Behaviour Unit
//Parent behaviour unit is marked abstract, because how to perform the payment is unknown
abstract Behavior bTaxiProviding {
  TaskRef CO.tPlaceTaxiOrder {
    InitOn "eTaxiReqd";
    Triggers "eTaxiOrderPlaced";
  }
  TaskRef TX.tProvideTaxi {
    InitOn "eTaxiOrderPlaced";
    Triggers "eTaxiProvided";
  }
}

//Child behaviour unit, specialising the parent to specify how to perform credit-based payments
Behavior bTaxiProvidingCreditBased extends bTaxiProviding {
  //Payment is carried out after a credit check
  TaskRef CO.tCreditCheckForTaxiPay {
    InitOn "eTaxiProvided";
    Triggers "eTaxiCreditCheckSuccess ^ eTaxiCreditCheckFailed";
  }
  TaskRef CO.tPayTaxi {
    InitOn "eTaxiCreditCheckSuccess";
    Triggers "eTaxiPaid";
  }
  //Other tasks if any
}

//Child behaviour unit, specialising the parent to specify how to perform debit-based payments
Behavior bTaxiProvidingDebitBased extends bTaxiProviding {
  //Payment is carried out directly via debit
  TaskRef CO.tPayTaxi {
    InitOn "eTaxiProvided";
    Triggers "eTaxiPaid";
  }
}
4.5.2 Specialisation Rules
To ensure consistent and correct behaviour, behaviour specialisation needs to follow a set of
rules. These rules are similar to the specialisation rules used in object-oriented programming
[56-59], but systematically applied to behaviour-based process modelling.
When a process definition pdProcessDef refers to a behaviour unit bChild, which specialises the behaviour unit bParent in a behaviour hierarchy as shown in Figure 4-21, the following rules apply.
Figure 4-21. Rules of Specialisation
Rule 1. All tasks/constraints specified in bParent but not specified in bChild are inherited from bParent. All tasks/constraints specified in bChild but not specified in bParent are recognised as newly specified local tasks/constraints of bChild. The inherited and locally specified tasks/constraints are treated in the same way as directly defined tasks/constraints.
Rule 2. If there are tasks/constraints common to both bChild and bParent, the common tasks/constraints of bParent are said to be overridden by those of bChild. The commonality is determined via the task/constraint identifier, i.e., if both bChild and bParent use the same identifier for a task/constraint, the parent’s task/constraint is overridden by the child’s.
Rule 3. When a task of a child (bChild.tTask) overrides a task of a parent
(bParent.tTask)
a. The attributes specified in bParent.tTask but not specified in
bChild.tTask are inherited by bChild.tTask.
b. If there are attributes common to both bChild.tTask and bParent.tTask, the
attributes of bChild.tTask take precedence.
Rule 4. When a constraint of a child (bChild.cCons) overrides a constraint of a parent (bParent.cCons), the constraint expression of bChild.cCons takes precedence.
Rule 5. All the attributes of the tasks of a concrete behaviour unit need to be fully defined (whether inherited or locally specified). If not, the behaviour unit should be explicitly declared abstract.
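The resolution implied by Rules 1–3 can be sketched as follows. This is an illustrative Python model, not the thesis implementation: behaviour units are reduced to dictionaries mapping task identifiers to attribute dictionaries, and constraints are omitted for brevity.

```python
# Illustrative sketch of behaviour specialisation resolution (Rules 1-3).

def resolve(parent: dict, child: dict) -> dict:
    """Compute the effective task set of a child behaviour unit."""
    effective = {}
    for task_id in set(parent) | set(child):
        if task_id not in child:          # Rule 1: inherited from the parent
            effective[task_id] = dict(parent[task_id])
        elif task_id not in parent:       # Rule 1: newly specified local task
            effective[task_id] = dict(child[task_id])
        else:                             # Rule 2: child overrides parent
            merged = dict(parent[task_id])    # Rule 3a: inherit attributes
            merged.update(child[task_id])     # Rule 3b: child attributes win
            effective[task_id] = merged
    return effective

# Running example: bTowing2 overrides only tTow's InitOn attribute.
bTowing = {"tTow": {"InitOn": "eTowReqd * eDestinationKnown",
                    "Triggers": "eTowSuccess"}}
bTowing2 = {"tTow": {"InitOn": "eTowReqd * eDestinationKnown * eTaxiProvided"}}

resolved = resolve(bTowing, bTowing2)
# tTow's InitOn is overridden, while its Triggers attribute is inherited.
```

Rule 5 would then amount to a final check that every task in the resolved set has all of its required attributes; any gap forces the unit to be declared abstract.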
4.5.3 Support for Unforeseen Variations
Identifying commonalities and allowing variations in business processes has been discussed in
the past. Many approaches have been proposed to address this requirement [42, 66-68, 149, 193,
203]. These approaches help to identify the commonalities and variations to a certain extent.
For example, the VxBPEL language extension for BPEL allows a process to be customised based on variability points [67]. However, the variability points are fixed and only the parameters are selected at runtime. Likewise, the aspects/rules applied via pointcuts [193] in AO4BPEL [68, 190] represent the volatile parts, while the abstract process that defines the joinpoints represents the fixed part. Similarly, in [203], the variability points represent the volatile parts while the provided master process represents the fixed part.
These approaches nonetheless share a common weakness: they assume that the fixed part and the volatile part can be well identified at design time. This might be possible in certain business scenarios. However, such design-time identification cannot be applied to business environments where the commonalities and variations need to be identified or adjusted at runtime. In contrast, we do not rely on such an assumption. Behaviours are defined without an explicit declaration of fixed and volatile parts; all the properties of each behaviour unit are considered potentially volatile. They can be modified or further extended using behaviour specialisation to suit the business requirements. As shown above, the towing behaviour can be extended by overriding its properties to support future variations without an explicit declaration of fixed and volatile parts. This is beneficial for the service aggregator, who might not be able to foresee all the variations upfront.
4.6 Interaction Membranes
In the previous sections, we described how the control-flow of a Serendip service orchestration is specified. In this section, we describe how the data-flow is specified.
In the organisation-based process modelling paradigm, the task execution is always external.
While executing tasks, the role-players interact with each other via the organisation. For
example, when the tTow task needs to be performed, the towing service (a player) needs to be
notified by the Case-Officer (a player). Subsequently, there will be a service request from the
Case-Officer (e.g., a Desktop Web service client application operated by a human) to the
external towing service (e.g., a Web service provided to accept job requests) through the
respective roles of the composition as shown in Figure 4-22. The organisation mediates these
messages according to the contracts defined in the organisation.
Figure 4-22. Role Players Interact via the Organisation
Nevertheless, the message mediation is not always simple and point-to-point in an
organisation. There are challenges that need to be addressed.
Synchronisation: In an organisation, message delivery is well-structured, reflecting the organisational goals, rather than carried out via a fixed mechanism such as First-In-First-Out (FIFO). Messages need to be delivered to players in synchronisation with the well-defined processes. This is applicable to a service orchestration, where service invocations are orchestrated according to a defined flow. For example, the sending of the towing request to the Tow-Truck player should be synchronised with the processes defined in the RoSAS organisation.
Information Content: A message (data) sent by one player to another via the organisation might need to be transformed to change its information content prior to delivery to the designated player. For example, the message from the Case-Officer may be augmented with additional information, such as the time of the tow-request, prior to delivery to the Tow-Truck player.
Complex Interactions: Interactions more complex than point-to-point message delivery may need to be specified. Multiple messages from multiple players may be used to compose a message to a single player. Likewise, a message from a single player may be used to generate multiple messages to be sent to other players. For example, the towing request sent to the Tow-Truck player might need to be composed of information derived from messages sent by both the Case-Officer and Garage players. The response from the Tow-Truck player could be directed to both the Case-Officer and the Garage.
Adaptability: To support adaptability, players may be bound and unbound dynamically at runtime. As the players are considered autonomous, the data and their formats can vary from one implementation to another for the same role. As a result, the external interactions also need to change to ensure compatibility upon player changes. The core organisational behaviour needs to be shielded from these adaptations in the underlying players as much as possible.
Delivery Method: The message delivery method can vary from one player to another. Some players may prefer to be notified, whilst others may prefer to poll the organisation for the required information. For example, a particular tow-truck job acceptance application may prefer to poll the organisation for the next available job rather than being notified, because of frequent unavailability due to a lack of network coverage, or due to its implementation constraints.
In order to address the above challenges, we introduce the concept of an interaction membrane. An interaction membrane is a hypothetical boundary of the composite which isolates the internal core processes and their internal interactions from the external interactions with the players. The membrane is formed of a number of transformation points, where external interactions are transformed into internal interactions, and vice versa, as shown in Figure 4-23. Data coming into and going out of the organisation have to cross the interaction membrane according to the defined transformations. In addition, data going out of the organisation need to be synchronised with the core processes at these transformation points.
Figure 4-23. The Interaction Membrane
In this section, we explain how such a membrane is formed and what its benefits are. Section 4.6.1 describes how the indirection between the processes and the external interactions is achieved in the organisational design. Section 4.6.2 then explains how data are transformed across the interaction membrane. Finally, Section 4.6.3 explains the benefits of this membranous design.
4.6.1 Indirection of Processes and External Interactions
If the internal business processes of a service composition are directly associated with the
external interactions with partner services, the changes in the external interactions can adversely
affect the internal business processes. Frequent changes in the external interactions may cause
frequent changes in business processes that are internal to the composition. Therefore, to
provide the required stability of a composition or organisation, it is better to separate the
external interactions from the internal core processes. The work of this thesis uses the concept
of Task to achieve this indirection between core organisational processes and external
interactions, which involves two aspects.
Firstly, there is an indirection between the external interactions and the internal interactions.
We call the internal interactions among roles role-role interactions (IRR), and the external interactions between a role and its player role-player interactions (IRP), as shown in Figure 4-24.
An external role-player interaction may be initiated based on multiple internal role-role
interactions. Conversely, an external role-player interaction may cause multiple internal role-
role interactions. For example, the tow request from the role TT to the player may use both the
tow request message from CO and the garage location message from GR. The response for the
tow request may result in two independent interactions from role TT to role CO and role TT to
role GR respectively as responses to the previous interactions.
Secondly, there is an indirection between role-role interactions and core processes. This indirection is provided by the tasks defined in the roles, which separate the concerns of when to perform a task and how to perform a task. The behaviour units may be modified to reflect changes in when to perform tasks, or in the synchronisation aspects, as described in Sections 4.2 and 4.3. The references to role-role interactions as defined in tasks may be modified to reflect changes in how to perform tasks. A task description should specify which role-role interactions are used to perform a task, as well as which interactions result when the task is performed. In this sense, tasks act as intermediaries between core processes (or behaviour units) and role-role interactions, providing the points where the synchronisations and transformations occur.
Figure 4-24. The Indirection of Interactions Assisted by Tasks
The use of events helps to achieve the indirection between the core processes and the interactions that occur as part of task execution. While performing tasks, the roles interact with
each other via defined contracts. Subsequently, the contracts interpret these interactions and
trigger events. These events will initiate further tasks according to the dependencies specified in
behaviour units. As part of organisational modelling, the contracts capture the allowed
interactions between the two roles of a composition via interaction terms. The work of this
thesis uses the modelling concepts such as contracts and interaction terms as specified in ROAD
[81] to model the role-role interactions. The meta-model that captures these relationships is
given in Figure 4-25.
Figure 4-25. Meta-model: Tasks, Events and Interactions
For example, contract CO_TT defines the allowed interactions between roles CO and TT. When the message mOrderTow is sent from role CO to role TT, it flows through the contract CO_TT. As the message is received by the contract, it is inspected to validate that it satisfies the contract definition. In addition, the event eTowReqd is triggered based on the interpretation of the message in the contract. This event is used to trigger the task tTow, as specified in the behaviour unit bTowing.
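This event-based indirection can be sketched as a two-stage lookup: a contract-level rule maps an interaction to an event, and the behaviour-level dependencies map the event to the tasks it initiates. The table-driven dispatch below is an assumption for illustration, not the ROAD/Serendip implementation; only the identifiers (CO_TT, mOrderTow, eTowReqd, tTow) come from the running example.

```python
# Illustrative sketch of the contract -> event -> task indirection.

interpretation_rules = {            # contract level: interaction -> event
    ("CO_TT", "mOrderTow"): "eTowReqd",
}
task_dependencies = {               # behaviour level: event -> tasks to initiate
    "eTowReqd": ["tTow"],
}

def on_interaction(contract_id: str, message_id: str) -> list:
    """Interpret a role-role interaction and return the tasks it initiates."""
    event = interpretation_rules.get((contract_id, message_id))
    return task_dependencies.get(event, []) if event else []

print(on_interaction("CO_TT", "mOrderTow"))  # ['tTow']
```

The point of the two tables is that neither side refers to the other directly: the contract knows only which events a message means, and the behaviour unit knows only which events a task waits for.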
4.6.2 Data Transformation
From the organisation’s point of view, the players and the task execution are always outside the organisation boundary. In order to execute tasks, data may need to be derived from the internal interactions and delivered to the players. Upon completion of tasks, data need to be received and may be propagated to other roles by creating further role-role interactions. Therefore, a task definition in a role should capture the following information.
1. To initiate a task:
a. Which role-role interactions provide the required information?
b. How are they composed?
2. Upon completion of the task:
a. Which role-role interactions result?
b. How are they composed?
For example, to perform the task tTow, a request needs to be sent from role TT to its player. This request needs certain data, including the pick-up location and the destination. These data need to be extracted from the interactions mOrderTow and mSendGarageLocation, defined in the CO_TT and GR_TT contracts respectively. When the towing is completed (i.e., the tow task is executed), the bound player will send a response. This response needs to be transformed into the response interactions of the aforementioned mOrderTow and mSendGarageLocation interactions.
Figure 4-26 visualises the described scenario. We use the syntax <contract_id>.<interaction_id>.<req/res> to uniquely identify a role-role interaction. Here, contract_id is the contract identifier and interaction_id is the interaction identifier. The suffix req/res indicates whether the interaction is from the obligated role (request) or from its partner in the contract (response). We also use the syntax <role_id>.<task_id>.<req/res> to uniquely identify a role-player interaction. Here, role_id is the role identifier and task_id is the task identifier. The suffix req/res indicates whether the interaction is from the role to the player (request) or from the player to the role (response).
As shown, there is a transformation (T1) to create the interaction from the TT role to its player. In addition, there are two transformations (T2, T3) to create the role-role interactions from the interaction from the player to the TT role.
TT.tTow.Req= ƒT1 (CO_TT.mOrderTow.Req, GR_TT.mSendGarageLocation.Req);
CO_TT.mOrderTow.Res = ƒT2 (TT.tTow.Res);
GR_TT.mSendGarageLocation.Res = ƒT3 (TT.tTow.Res);
Listing 4-14 shows the task definition of tTow defined in the TT role. The UsingMsgs clause specifies the interactions used to create the request to perform tTow, and the transformation function to apply. The ResultingMsgs clause specifies the interactions that result upon the completion of the task, and the transformations to be used. The current implementation supports XSLT transformations as the default transformations. The details and sample transformation files (XSLT) are available in Section 7.1.3.
Figure 4-26. Tasks Associate Internal and External Interactions
Listing 4-14. Task Definition Specifies the Source and Resulting Messages/Interactions
Role TT playedBy binding1 {
  Task tTow {
    UsingMsgs CO_TT.mOrderTow.Req, GR_TT.mSendGarageLocation.Req trans t1.xsl;
    ResultingMsgs CO_TT.mOrderTow.Res trans t2.xsl, GR_TT.mSendGarageLocation.Res trans t3.xsl;
  }
}
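The transformations T1–T3 can be sketched in Python, with plain dictionaries standing in for the XML messages and the XSLT transformations of the actual implementation. All field names (pickupLocation, garageLocation, status, and so on) are assumptions made for illustration.

```python
# Illustrative sketch of the three transformations around the tTow task.

def f_t1(order_tow_req: dict, garage_loc_req: dict) -> dict:
    """T1: compose the role-player request TT.tTow.Req from two role-role messages."""
    return {
        "pickupLocation": order_tow_req["pickupLocation"],
        "destination": garage_loc_req["garageLocation"],
    }

def f_t2(ttow_res: dict) -> dict:
    """T2: derive CO_TT.mOrderTow.Res from the player's response."""
    return {"towStatus": ttow_res["status"]}

def f_t3(ttow_res: dict) -> dict:
    """T3: derive GR_TT.mSendGarageLocation.Res from the player's response."""
    return {"vehicleIncoming": ttow_res["status"] == "towed"}

# One role-player request is composed from two role-role requests (T1),
# and one role-player response fans out into two role-role responses (T2, T3).
req = f_t1({"pickupLocation": "Main St"}, {"garageLocation": "12 Garage Rd"})
res = {"status": "towed"}
print(req, f_t2(res), f_t3(res))
```

Note how T1 fans in (two internal messages, one external request) while T2/T3 fan out (one external response, two internal responses), mirroring the complex-interactions challenge described earlier.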
4.6.3 Benefits of Membranous Design
A task associates, yet separates, the internal interactions and the external interactions via transformations, as shown in the previous section. Each role defines a number of tasks. Therefore, all the tasks defined in all the roles of the organisation form an Interaction Membrane for the organisation, as illustrated in Figure 4-27. The core processes and information content are maintained within the boundary of the membrane, while concerns such as message formats and delivery mechanisms are handled outside the membrane (yet inside the SMC/organisation boundary).
Figure 4-27. Formation of the Interaction Membrane
Such a membranous design helps to address the challenges mentioned earlier. Message delivery is synchronised with the well-defined processes, which are internal to the organisation and reflect the ways of achieving organisational goals. The information content of interactions can be altered, and complex interactions can be defined to combine and split messages to the participating players. The core processes are separated from the underlying interactions, providing stability upon frequent changes in the underlying players or their interfaces. The message delivery concerns are also separated from the core processes, isolating them from changes in the underlying message delivery mechanism, which is a separate concern (further details on the message delivery mechanism are available in Section 7.2.5).
In conclusion, the organisation maintains its processes and information content separately from the surrounding environment. Obviously, the environment influences how the processes and information content are prepared. Nevertheless, the representation inside the organisation is independent of the environment. This principle is vital for an adaptive service orchestration design. An orchestration of services is not a secluded environment, but consists of a number of partner services that interact with each other. The changes to these partner services are unforeseen. Therefore, a membranous design that isolates the processes and information content of the service orchestration from the surrounding environment is vital to achieve adaptability.
4.7 Support for Adaptability
The concepts presented from Section 4.2 to Section 4.6 are fundamental to designing a service orchestration as an adaptable organisation. In this section, we discuss the adaptability allowed in the organisation (Section 4.7.1), followed by a discussion concerning the importance of separating the control from the functional system (Section 4.7.2).
4.7.1 Adaptability in Layers of the Organisation
In Section 4.1.2, we presented the different organisational layers of a service orchestration in Serendip (Figure 4-6). These layers collectively achieve an adaptive service orchestration. In order to provide a truly adaptable service orchestration, it is important that each aspect of these layers supports adaptability. We discuss below the adaptable aspects of the layers during both design time and runtime. An illustration is presented in Figure 4-28, where an adaptation point indicates the addition, deletion or modification of the properties of an entity.
At the process level, the properties of process definitions such as CoT, CoS and
constraints (indicated by 1.1) and the references to behaviour units (1.2) can be
dynamically modified.
At the behaviour level, the behaviour unit, which specifies when to perform tasks,
can be dynamically modified to change the constraints (2.1) and the task
dependencies (2.2).
At the tasks level, in order to define how to perform a task, the task definition (3.1), the
references to internal interactions (3.2), external interactions (3.3) and the
transformations (3.4) can be modified. The internal message formats (3.5) and
external message formats (3.6) can also be altered.
At the contracts level, the contracts (4.1), interaction terms (4.2) and the
interpretation rules (4.3) are modifiable.
At the roles level, the roles (5.1) can be added or removed. In addition, the player
bindings (5.2) can be changed to dynamically bind or unbind players to roles.
Finally, at the players level, new candidate players can be included and existing
candidate players can be excluded from the organisation (6.1).
Figure 4-28. Adaptability in the Organisation
4.7.2 Separation of Control and Functional Process
In order to carry out adaptations to an organisation-based service orchestration, there should be a decision-making entity, with a decision-making process, that decides which adaptations to perform in the organisation. This decision-making, or control, aspect of the organisation should be separated from the functional processes of the organisation [82, 231, 259].
The work of this thesis applies the concept of organiser [81] to separate the control from the
functional process. The elements in each layer of the organisation are subject to runtime
regulation and reconfiguration only via the organiser. The organiser is a special role that is
responsible for managing the composition. Each ROAD composite has only one organiser role.
Just like the functional roles, the organiser role is played by a player, such as a software system, a human, or a combination of the two. The difference between the organiser role and the functional roles concerns the scope of control [83].
operate remotely in a distributed environment just like any other players playing functional
roles. The relationship between the Organiser and the Organisation is shown in the meta-model
given in Figure 4-29.
Figure 4-29. Meta Model: The Organiser
The work of this thesis does not intend to provide self-adaptive capabilities for service orchestrations. As pointed out by Salehie et al. [260], a self-adaptive software system is a closed-loop system with a feedback loop that aims to adjust itself to changes during its operation. In contrast, our work keeps the loop open at the point of making decisions, and hence does not provide self-adaptive capabilities, although it provides the structural mechanism for self-adaptability through the organiser role and player. We consider that the decision-making capability that closes the loop is external to the system (through the organiser player) and should be implemented in a domain-specific manner.
From the service orchestration point of view, the organiser can be seen as the entity that is responsible for making decisions and adapting the service orchestration. The question of how the process adaptations are managed is exogenous to the organisation [82]. This is vital, as the complexity of adaptation decision making can vary depending on the business domain. The complexity associated with management decision making should be separated from the functional system by design. For example, the organiser player, as shown in Figure 4-30, can be a human or an intelligent software system that makes complex decisions. The organiser itself can
be formed of several sub-systems that interact with each other in making decisions. Rule-based approaches and control-theoretic approaches are examples of approaches that could be used in designing the organiser player as a software system. Factors such as the frequency of process changes, and the criticality and repetitiveness of change decision making, can influence the choice of the organiser player. Furthermore, the organiser player can be changed during runtime. A business that employs a human to make decisions may replace the human with a machine (or may keep both) due to management complexity and workload. For these reasons, it is important to separate the decision-making process from the functional process.
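As a hedged illustration of this separation, a trivial rule-based organiser player might look as follows. The classes, the rule, and the threshold are all hypothetical; the point is only that the decision logic lives in the organiser player, outside the functional composition, and acts on the composition solely through management operations such as rebinding.

```python
# Illustrative sketch: a rule-based organiser player rebinding a role's player.
# All names (Fast-Tow, Swift-Tow, the threshold) are hypothetical.

class Organisation:
    """The functional composition: holds role-to-player bindings."""
    def __init__(self):
        self.bindings = {"TT": "Fast-Tow"}   # role -> bound player

class Organiser:
    """The single management role of a composite; adapts via bindings only."""
    def __init__(self, organisation: Organisation):
        self.org = organisation

    def rebind(self, role: str, player: str) -> None:
        self.org.bindings[role] = player

def organiser_player_rule(organiser: Organiser, failed_tow_jobs: int) -> None:
    # A trivial decision rule standing in for a human or intelligent system:
    # replace the towing provider after too many failed jobs.
    if failed_tow_jobs > 3:
        organiser.rebind("TT", "Swift-Tow")

org = Organisation()
organiser = Organiser(org)
organiser_player_rule(organiser, failed_tow_jobs=5)
print(org.bindings["TT"])  # Swift-Tow
```

Replacing the rule with a human decision, or a control-theoretic controller, changes only the organiser player; the functional composition is untouched, which is precisely the benefit of the separation.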
Figure 4-30. Organiser Player
From the service orchestration point of view, the changes that result from a decision-making process can be either evolutionary modifications or process-instance-specific modifications. The service orchestration modelled as an organisation accepts and realises change requests depending on their feasibility. Feasibility is determined by an impact analysis process, based on the scope of the modification, the current status of processes, and the constraints imposed on the organisation. The interrelatedness of process goals and collaboration goals plays an important role in the impact analysis, as discussed in Section 4.4. If a modification is feasible, the organisation accepts and realises it; otherwise, it is rejected.
4.8 Managing Complexity
Proper management of the complexity of a service orchestration is paramount to sustainable adaptability. If the complexity is not managed properly, the service orchestration can become unusable in the long run: necessary adaptations could be omitted because of a lack of confidence, and the adaptations that are carried out can be time-consuming and error-prone. The design of a service orchestration plays an important role in managing this complexity.
In the previous section we have discussed the importance of separating control and functional
processes in the service orchestration design and how it contributes to managing the complexity.
In addition, there are three further design concepts of the organisational approach that help manage the complexity of a service orchestration, i.e., hierarchical and recursive composition, support for heterogeneity, and explicit representation of service relationships via Contracts.
4.8.1 Hierarchical and Recursive Composition
ROAD supports hierarchical and recursive breakdown of complex systems [83, 261]. Using
this concept, a large and complex service orchestration can be sub-divided into manageable
smaller units of orchestrations as shown in Figure 4-31. A task can be executed by a player that
is designed as another Serendip orchestration. How the task is executed and managed in the
second composite is not a concern of the main composite. The organiser of the sub-composite
can reconfigure and regulate its own (sub-) orchestrations, while the organiser of the main
composite can reconfigure and regulate the main service orchestration. As such, the functional
concerns are also separated.
For example, suppose that coordinating the towing activities is a rather complex process. It is
no longer just an ordering of towing from a bound external player. Apart from the external
towing services, there are tow-coordinating officers from RoSAS itself to handle the towing
operations. In this sense, towing itself is a complex process which could be used by the main
roadside assistance process. Rather than introducing this complexity into the main roadside
assistance process, a sub-process (sub-system) can be introduced to hide that complexity from
the main orchestration as shown in Figure 4-31. RoSAS owns and manages both composites,
but the concerns are now separated into two different systems. The main orchestration manages
the roadside assistance process, while the sub-orchestration handles all the concerns associated with the towing process. Both composites have their own organiser roles. Depending on the
requirement, the organiser roles of both composites could be played by the same player or
different players who interact with each other.
Similarly, the other tasks of the main orchestration can be further expanded and defined as sub-orchestrations as shown in Figure 4-32. The sub-orchestrations can in turn have their own sub-orchestrations if required, forming a hierarchy of service compositions. Figure 4-32 also shows that the boundary of ownership is not the same as the boundary of the orchestration organisation. Ownership is defined by who controls the composite. As such, the same business can define and manage multiple composites in the hierarchy.
Figure 4-31. Reducing a Complex Orchestration into Manageable Sub-Orchestrations
Figure 4-32. A Hierarchy of Service Compositions
4.8.2 Support for Heterogeneity of Task Execution
Integration of heterogeneous distributed systems is one of the promises of SOA. Therefore, the
support for heterogeneity is important in a service orchestration approach. The support for
heterogeneity allows a defined service orchestration to function without requiring specific
knowledge about how the players are implemented and managed. In heterogeneous environments, the players are implemented using various technologies, and their implementations vary over time.
The design of the core service orchestration should be independent of the implementation and behaviour of the participating services, as shown in Figure 4-33. Otherwise, changes in the implementation and behaviour of the participating services can add to the complexity of the core service orchestration.
Figure 4-33. Implementation of Task Execution is Unknown to Core Orchestration
In order to support this requirement, Serendip does not specify any detail about the execution of tasks by the players. The core service orchestration treats the execution of a task as an outsourced activity. Only the outcome of the task is used to determine the course of execution, via events as supported by the Serendip language. This allows any implementation (player) to be used to execute the task, supporting heterogeneity. The task could be executed by a locally deployed Java component/library, or by a remote Web service. The core orchestration can be defined irrespective of the player's implementation.
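To illustrate this separation, the sketch below hides each player's implementation behind a single execute interface, so the core orchestration consumes only the resulting event. This is illustrative Python, not part of Serendip; the player classes and the run_task helper are assumptions.

```python
# Illustrative sketch: the composite hands a task to whatever player is bound
# and only consumes the resulting event name. How the player executes the task
# (local component, remote Web service, ...) is opaque to the orchestration.

from abc import ABC, abstractmethod

class Player(ABC):
    @abstractmethod
    def execute(self, task: str, payload: dict) -> str:
        """Execute the task and return the name of the event to trigger."""

class LocalComponentPlayer(Player):
    def execute(self, task, payload):
        return "eTowSuccess"            # outcome computed locally

class RemoteWebServicePlayer(Player):
    def execute(self, task, payload):
        # In practice this would be a SOAP/REST call; stubbed here.
        return "eTowSuccess"

def run_task(player: Player, task: str, payload: dict) -> str:
    # The core orchestration neither knows nor cares how the player works;
    # it only records the outcome event.
    return player.execute(task, payload)

# Either player can be swapped in without any change to the core orchestration.
assert run_task(LocalComponentPlayer(), "tTow", {}) == run_task(RemoteWebServicePlayer(), "tTow", {})
```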
Apart from the control-flow, the data-flow is highly dependent on the player. Different message exchange protocols, e.g., SOAP [262], REST [263] and XML-RPC [264], could be used by the players. In addition, even within the same protocol, different encodings, e.g., SOAP 1.1 vs. SOAP 1.2, could be used. Moreover, different data formats, e.g., currencies, date-time formats etc., could be used. These concerns are handled separately from the core service orchestration. Further details on how the data transformations are carried out are given in Section 5.4.
4.8.3 Explicit Service Relationships
One of the important aspects of service orchestration is the interactions among the partner
services. These interactions need to be modified to suit the evolving business requirements. Aspects such as the obligations and the evaluation conditions of these interactions change over time. A monolithic representation of these interactions can increase the complexity of the service orchestration. Therefore, it is necessary to specify the interactions of a service orchestration in a modular and coherent fashion.
The introduction of Aspect Oriented Programming [222] concepts can be seen as one of the earlier attempts to provide modularity in service orchestration. These approaches attempt to separate the crosscutting concerns from the core service orchestration. For example, the AO4BPEL [68] and MoDAR [149] approaches use business rules as aspects to be integrated with the core business process, to provide the required modularity. This is beneficial because the business logic that is subject to more frequent modifications is now captured in rules, reducing the complexity of the business processes. In a service orchestration where the
services interact with each other, however, the above approaches provide no support for explicitly representing the service relationships. Yet, to better align IT with the business, a service orchestration approach should support capturing such service relationships in a modular and coherent manner.
The service relationships mentioned in this thesis should not be confused with the well-known Service Level Agreements (SLAs). An SLA is a contractual agreement between two business parties (services), which usually specifies the non-functional properties of service delivery. Obviously, a service aggregator maintains such SLAs with its partner services, e.g., between RoSAS and Fast-Tow. In contrast, a service relationship is an internal representation maintained by the service aggregator, which defines the relationship between two such partner services. This difference is illustrated in Figure 4-34, where the service relationship of two potential services is captured and represented as a ROAD contract. This representation of service relationships is an abstract one. The relationship is not between two specific partner services per se, but between the roles (positions) that partner services can play. The service relationships only capture the aggregator's point of view of the connection between the two roles.
Such an explicit representation of service relationships helps achieve modularity (in addition to its use in underpinning the organisational approach). As such, the connections and interactions between the collaborating services are explicitly captured and represented in the service orchestration. Such an explicit representation is backed by the contract-based design provided by ROAD. The contracts provide the modularity and the coherent structure to capture
and define complex interactions, i.e., how the services should interact and under which
conditions these interactions should be allowed as shown in Figure 4-35. For example, the
allowed interactions, the state captured by contractual parameters and the conditions of
evaluating these interactions between the roles Case Officer and Tow-Truck are all captured in
the CO_TT contract. The same goes for the GR_TT contract which captures the interactions
between the roles Garage and Tow-Truck. This provides the required abstraction and modularity
compared to a monolithic representation of such business logic.
In addition, the contracts are adaptable. With the contract-based design, adaptability is seen as a property of a relationship. As the business relationships evolve, the contracts can also
be changed accordingly. New interaction terms can be added and existing interaction terms can
be deleted or modified. New rules that evaluate interactions can be injected and existing ones
can be deleted or modified, all being done in a well-modularised structure.
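The adaptable, modular character of a contract can be sketched as follows. This is an illustrative Python model, not the ROAD contract implementation; the term name orderTow, the rule shown and the whole API are assumptions, while the contract names follow the CO_TT example above.

```python
# Illustrative sketch: a contract as an adaptable module holding the allowed
# interaction terms between two roles, plus rules that evaluate interactions.

class Contract:
    def __init__(self, name, role_a, role_b):
        self.name, self.roles = name, (role_a, role_b)
        self.terms = {}    # interaction term -> evaluation rule (a callable)
        self.state = {}    # contractual parameters

    def add_term(self, term, rule):
        self.terms[term] = rule            # new terms can be added at runtime

    def remove_term(self, term):
        self.terms.pop(term, None)         # ...and existing ones deleted

    def evaluate(self, term, message) -> bool:
        """An interaction is allowed only if the term exists and its rule holds."""
        rule = self.terms.get(term)
        return rule is not None and rule(self.state, message)

co_tt = Contract("CO_TT", "CaseOfficer", "TowTruck")
co_tt.state["towing_authorised"] = True            # a contractual parameter
co_tt.add_term("orderTow", lambda s, m: s.get("towing_authorised", False))

assert co_tt.evaluate("orderTow", {"location": "Melbourne"})
co_tt.remove_term("orderTow")                      # the relationship evolves
assert not co_tt.evaluate("orderTow", {})
```

Because terms and rules live inside the contract object, a change to one relationship stays local to that module, which is the modularity argument made above.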
Figure 4-34. Service Relationships vs. SLA
Figure 4-35. Contracts Support Explicit Representation of Service Relationships
4.9 The Meta-model
The previous sections of this chapter have presented the fragments of the meta-model capturing
the concepts of the Serendip approach. In this section we bring these fragments together to
present the complete meta-model of the Serendip approach as shown in Figure 4-36. The shaded
concepts are borrowed from ROAD, whilst the rest of the concepts are newly introduced.
Figure 4-36. Serendip Meta-Model
As shown, the organisation defines the contracts, roles, players, behaviour units and process
definitions. The organiser regulates and reconfigures the organisation. The tasks are defined in
roles, while interactions are defined in contracts. As roles interact with each other according to the defined contracts, events are triggered. These events act as pre- and post-conditions of tasks. Roles are played by players who perform the tasks. The behaviour units refer to these tasks to
define an acceptable organisational behaviour, which can be reused across multiple process definitions. A behaviour unit can be specialised to create another behaviour unit. Process constraints and behaviour constraints, defined in process definitions and behaviour units respectively, set boundaries for changes on the service orchestration to protect its integrity.
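The associations just described can be condensed into a small illustrative model. The Python classes below sketch only the meta-model relationships named in the text (tasks with pre/post events, behaviour units referring to tasks and specialising one another, process definitions referring to behaviour units); all attribute names are assumptions.

```python
# Illustrative sketch of a few Serendip meta-model relationships.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    pre_events: List[str] = field(default_factory=list)    # pre-conditions
    post_events: List[str] = field(default_factory=list)   # post-conditions

@dataclass
class BehaviourUnit:
    name: str
    tasks: List[Task] = field(default_factory=list)        # refers to tasks
    parent: Optional["BehaviourUnit"] = None               # specialisation link

@dataclass
class ProcessDefinition:
    name: str
    behaviour_units: List[BehaviourUnit] = field(default_factory=list)

bTowing = BehaviourUnit(
    "bTowing",
    [Task("tTow", ["eTowReqd", "eDestinationKnown"], ["eTowSuccess", "eTowFailed"])],
)
bTowingGold = BehaviourUnit("bTowingGold", parent=bTowing)  # specialised unit
pdGold = ProcessDefinition("pdGold", [bTowingGold])

assert pdGold.behaviour_units[0].parent is bTowing
```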
4.10 Summary
In this chapter we have introduced Serendip, a novel approach to business process modelling, grounded on organisational concepts to achieve adaptive service orchestration. It views a service composition as an adaptive organisation, whose structure provides the required abstraction of the underlying services and their relationships. Upon such an adaptable structure, business processes are modelled.
The structural and process concepts of service orchestration modelled as an organisation are
placed into six layers, providing the logical abstraction for the organisation-based orchestration.
The structural aspect consists of the Players, Roles and Contracts layers. The process aspect
comprises the Task, Behaviour and Process Definition layers. The adaptability at all these layers
in the organisation collectively enables and contributes to the realisation of an adaptive service
orchestration.
In Serendip, the loose coupling among tasks achieved by events improves the runtime adaptability; the behaviour-based process modelling and specialisation capture the behavioural commonalities and variations between processes and enable business agility; the two-tier constraints facilitate the analysis and localisation of change impacts and control the adaptations without unnecessary restrictions; the interaction membrane isolates internal and external data-flow concerns. Based on the basic concepts of the Serendip meta-model, the SerendipLang service orchestration modelling language is designed, supporting and realising the meta-model.
We have also discussed how the Serendip approach can improve the adaptability of a service
orchestration while managing the complexity that the adaptability requirements can introduce.
Chapter 5
Serendip Runtime
In the previous chapter, we have introduced the Serendip meta-model, language and underlying
concepts, which enable adaptive service orchestration. In this chapter, we introduce a novel
service orchestration runtime called Serendip Runtime designed to support the enactment of
Serendip service orchestrations.
Instead of designing a new service orchestration runtime, an existing service orchestration runtime or enactment engine could have been used. For instance, the Serendip language descriptions could be translated into an executable language such as WS-BPEL for enactment purposes. A popular BPEL engine such as Apache ODE [265] or Active-BPEL [197] could then have been used as the runtime platform. However, such an alternative would not provide the basis for fully exploiting the true benefits of the Serendip approach in supporting process adaptation. The BPEL scripts resulting from the translation would not have the adaptability properties exhibited at the Serendip language level, because WS-BPEL was not designed with adaptability in mind. Nor does the BPEL engine support the required adaptability at runtime. The loosely-coupled tasks and event-driven nature of the Serendip language should be supported by a specifically designed runtime engine to facilitate the adaptability features of the language and approach.
A Serendip orchestration is designed as per the Role Oriented Adaptive Design (ROAD). The
newly introduced process layers are a natural extension of ROAD concepts as explained in
Chapter 4. Subsequently, the Serendip runtime is also designed by extending the ROAD
framework to provide the process support required. This chapter presents a detailed view of how
the Serendip runtime is designed and how it achieves runtime orchestration of services
according to the Serendip approach.
Firstly, the design expectations for such an orchestration runtime are stated in Section 5.1.1. Secondly, the core components of the Serendip runtime are introduced in Section 5.1.2 to fulfil these expectations. Thirdly, the details of how the Serendip runtime functions are explained from Section 5.2 to Section 5.5, by clarifying different aspects of the runtime platform, including process life-cycle management, event processing, data synthesis and synchronisation, and dynamic creation of process graphs.
5.1 The Design of an Adaptive Service Orchestration Runtime
This section presents an overview of the Serendip runtime. We first discuss the design expectations, clearly identifying the additional requirements that should be satisfied by an adaptive service orchestration runtime. Then we outline the core components of the Serendip runtime framework and explain how they satisfy the identified design expectations.
5.1.1 Design Expectations
A service orchestration runtime is designed as an enabling technology for a service
orchestration language. For instance, the Active BPEL [2] and Apache ODE [265] runtimes are designed to enact the processes specified according to the WS-BPEL language [22, 104]. According to Ko et al. [95], the relationship between such a runtime and the language standard is a nested one. It follows that a purpose of a service orchestration runtime is to support a specific underlying execution language. In supporting a specific language, a service orchestration runtime should meet a basic set of expectations.
Expectation 1. Maintaining a Representation of Orchestration: An orchestration runtime
should be able to maintain a concise representation of the service orchestration. The
runtime should include a repository to maintain the orchestration descriptions. During execution, these descriptions are used to enact multiple process instances.
Expectation 2. Maintaining Process States: An orchestration runtime should be able to
maintain the status of process instances [266]. As a process instance progresses, the
runtime determines the next activity/activities based on these maintained states. Many
instances can be executed in parallel, thus the runtime needs to maintain process
states in association with the corresponding process instance context.
Expectation 3. Data Mediation: An orchestration runtime is an application integration environment where multiple applications interact with each other. Therefore, data mediation support is important. As part of data mediation, data need to be validated, enriched, transformed, routed and operated upon (delivered) to the participants. These are also known as the VETO/VETRO patterns in the Enterprise Service Bus (ESB) architecture [267].
Apart from these basic expectations of an orchestration runtime, there are the following
additional expectations to support adaptability.
Expectation 4. Anticipation for Adaptation: The design of an orchestration runtime should show anticipation for runtime adaptations. By anticipation we mean that Expectations 1-3 should be supported in a manner such that runtime modifications to the supporting mechanisms are possible. The process representation, the maintenance of process states as well as the message mediation mechanisms should be designed to support runtime adaptation. For example, the mechanisms that support how a process progresses and how messages are validated/transformed should expect late modifications to process instances. To highlight the difference, a runtime designed to support static process definitions, where process instances follow a specific loaded definition, need not show such anticipation.
Expectation 5. Adaptation Channels: An orchestration runtime should open the necessary channels to perform runtime adaptations. An adaptation channel is a well-designed mechanism that exposes a proper interface for an authorised entity to perform adaptations. Instead of allowing arbitrary changes on the service orchestration, an adaptation channel allows modifications to be realised systematically. The channels should guarantee safe and reliable adaptations that do not lead to inconsistent modifications.
Expectation 6. Business Integrity Protection: An orchestration runtime should include a mechanism to ensure business integrity in view of modifications during the runtime. For a static service orchestration, integrity protection ensured at design time is sufficient, as the process executions adhere to the original definitions. But for an adaptive service orchestration, the process definitions as well as the process instances are subject to modifications at runtime, and the runtime should have mechanisms to ensure business integrity as those modifications are applied.
While the Serendip Runtime needs to support the expectations (1-3) as an orchestration
runtime, it also needs to support the additional expectations (4-6) to facilitate adaptability.
5.1.2 Core Components
The Serendip runtime framework consists of four main interrelated core components:
Enactment Engine, Model Provider Factory (MPF), Adaptation Engine and the Validation
Module as shown in Figure 5-1. The figure also shows the connections with the sub-components
such as parsers, deployment containers, message transformation, message interpretation and
change validation plugins. The Enact-Engine and the MPF belong to the functional system,
where the processes are represented and enacted. The Adaptation Engine and the Validation
Module belong to the management system, where management operations are carried out and
validated.
Figure 5-1. Core Components of Serendip Runtime
Model Provider Factory (MPF): Maintains a representation of different elements of
the service orchestration (Expectation 1) in terms of runtime models. These models
represent Process Definitions, Process Instances, Behaviour Units, Tasks, Roles and
Contracts. Furthermore, the relationships among the models are also maintained, e.g.,
the links between process definitions and behaviour units; the parent-child
relationships between behaviour units. Models are instantiated based on the descriptors that are parsed using a parser, e.g., the XML parser. Dynamic modifications may be realised on these models during the runtime via the Adaptation Engine. The MPF also provides support for cloning and deleting models. For example, process instances may be cloned from existing ones during the runtime. In essence, the MPF acts as a repository of loaded models that can be continuously evolved upon change.
Enactment Engine: Provides the enactment support for Serendip Processes. The states
of running process instances are maintained in the Enactment Engine (Expectation 2).
The Enactment Engine also allows parallel execution of processes. It uses the runtime
models maintained in MPF when a process of a certain type needs to be instantiated.
For example, when a pdGold process instance needs to be instantiated, a fresh and
exact copy is created from the most-up-to-date pdGold process model as requested by
the Enactment Engine. Furthermore, the data is mediated from one player to another
to carry out the tasks are also managed by the Enactment Engine (Expectation 3). The
data are validated by defined contracts (Section 5.3.2); the data are routed by roles
(Section 5.4.1); the data transformation mechanism enriches and transforms data to
suite a role player (Section 5.4.2) and the delivery mechanism (Section 7.2.5)
communicates (operates) with the role player. Most importantly, the entire process execution and data mediation support in the Enactment Engine is designed with runtime modifications in mind (Expectation 4).
Adaptation Engine: Provides the adaptation support for both the loaded process models and the running process instances. The Adaptation Engine, which is built on top of the MPF and the Enactment Engine, provides the appropriate channels to carry out the runtime modifications (Expectation 5). The Adaptation Engine ensures that adaptations are carried out in a systematic and reliable manner. Chapter 6 will provide more details about the design of the Adaptation Engine and its mechanisms for realising different types of adaptations.
Validation Module: Performs validations upon modifications to the runtime models maintained in the MPF (Expectation 6). The Validation Module is extensible to support different validation mechanisms and techniques. For example, the current implementation (Chapter 7) of the Validation Module uses a plugin developed for the Romeo model checker [258], which provides Petri-Net-based [99, 254] validation of the process models. Apart from this default Petri-Net-based validation plugin, more domain-specific validation tools can also be integrated as plugins to provide domain-specific process validation. For example, a rule-based validation mechanism may be developed by integrating business rule repositories that capture modification constraints as rules.
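The repository-with-cloning behaviour attributed to the MPF above can be sketched as follows. This illustrative Python class is an assumption about the interface, not the actual MPF API; it shows only how cloning a definition yields an independently modifiable instance model, as the pdGold example describes.

```python
# Illustrative sketch: a repository of runtime models that supports cloning,
# e.g., instantiating a process instance model from a process definition model.

import copy

class ModelProviderFactory:
    def __init__(self):
        self._models = {}              # model id -> runtime model (plain dicts here)

    def load(self, model_id, model):
        self._models[model_id] = model

    def get(self, model_id):
        return self._models[model_id]

    def clone(self, source_id, new_id):
        """Create an independent copy, so later changes do not leak back."""
        self._models[new_id] = copy.deepcopy(self._models[source_id])
        return self._models[new_id]

    def delete(self, model_id):
        self._models.pop(model_id, None)

mpf = ModelProviderFactory()
mpf.load("pdGold", {"behaviour_units": ["bTowing", "bRepairing"]})
instance = mpf.clone("pdGold", "p001")                 # fresh, exact copy
instance["behaviour_units"].append("bTaxiProviding")   # instance-specific change
assert "bTaxiProviding" not in mpf.get("pdGold")["behaviour_units"]  # definition untouched
```

The deep copy is the essential design point: an instance-specific modification must never propagate back to the definition model from which the instance was cloned.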
Apart from these four core components, there are a number of additional components that support the operation of the Serendip runtime. For example, the monitoring and adaptation tools provide process monitoring and adaptation support to software engineers. The monitoring tool helps to identify the process flow, task status and other properties that are important for making adaptation decisions. Once an adaptation decision is made, the editing tools support the manipulation of process definitions and process instances. Moreover, there are other parsers, plugins, adapters, deployment containers and interpreters that support the operation of the Serendip runtime.
The rest of the sections of this chapter provide details on how the Serendip runtime is designed with respect to Expectations 1-4, i.e., the primary functions of the MPF and the Enactment Engine. The mechanisms that satisfy Expectations 5 and 6, i.e., the primary functions of the Adaptation Engine and the Validation Module, will be discussed in more detail in Chapter 6.
5.2 Process Life-cycle
At runtime, many processes are instantiated and terminated in an orchestration runtime to serve consumer requests. Serendip modelling allows multiple process definitions to be defined over the same composite structure. Therefore, at a given moment, multiple process instances may run in parallel, either of the same definition or of different definitions. These process instances have their own process contexts during their lifecycles. Furthermore, a process instance may be modified during its lifecycle to support process-instance-specific modifications. This section clarifies the different stages of a process instance and how a process progresses over its lifecycle.
5.2.1 Stages of a Process Instance
As shown in Figure 5-2, a process instance has three main stages during its lifetime, i.e., (1) Process Instantiation, (2) Process Execution and (3) Process Termination.
Figure 5-2. The Life Cycle of a Serendip Process Instance
Process Instantiation is the stage where a new process instance is created based on a process
definition. The process definitions are maintained in the MPF. A process instance is instantiated
in a service composite to serve a service request. Once instantiated, a process instance maintains
its own model and state. Initially this model is an exact copy of the process definition model.
However, during the runtime, the model can be modified to change the process instance
(Section 6.3). For example, when a Gold member sends a roadside assistance request, a new pdGold instance is created. To facilitate an ad-hoc change requirement, this instance may deviate from the original pdGold definition (Section 8.4.2).
A process definition specifies the Condition of Start (CoS), which will be used by the
enactment engine to instantiate new instances as discussed in Chapter 4. The process instance
maintains its own state-space and has a unique identification (pid). This pid is used for two main
purposes. Firstly, the pid is associated with the events and messages that are related to the
process instance (Section 5.3 and Section 5.4). Secondly, the pid is used to issue modification
commands to a specific process instance (Section 6.2.2).
After the process instantiation, the Process Execution stage begins. During the process execution stage, the tasks belonging to the process instance are executed. As mentioned in Chapter 4, task execution is always considered external to the composite and is expected to be performed by role-players. During the execution stage, tasks become doable when their pre-condition event patterns are met/matched (rather than in response to the completion of another task). For example, when the pre-condition event pattern9 eTowReqd * eDestinationKnown is matched, the task tTow becomes doable. Once the task becomes doable, the obligated role-player has to carry out the task. In service oriented computing this usually results in a service request from the role to the bound Web service, i.e., the player. Section 5.4 provides more details on how the interactions between role and player occur.
Once the task is executed, more events are triggered; e.g., the post-condition pattern eTowSuccess ^ eTowFailed triggers either the event eTowSuccess or the event eTowFailed. These events act as part of the pre-conditions of other tasks, possibly making those tasks doable. Multiple instances of the same or different process definitions can be executed simultaneously. Therefore, these events are all managed and recorded within the scope of a process instance in order to avoid event conflicts among multiple instances. Only the events of the same instance are used to match the event patterns. For example, the eTowReqd and eDestinationKnown events can be triggered for two different process instances pid=p001 and pid=p002 at the same time. The task tTow of process instance p001 should not be initiated due to the triggering of events belonging to p002.
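The instance-scoped matching just described can be sketched as follows. This Python fragment is illustrative only: it supports just AND (*) patterns and is not the Serendip pattern evaluator; the function names are assumptions.

```python
# Illustrative sketch: a task becomes doable only when its pre-condition
# event pattern is satisfied by events carrying the SAME process instance id.

triggered = set()   # set of (event_id, pid) records

def trigger(eid, pid):
    triggered.add((eid, pid))

def is_doable(pattern_and, pid):
    """pattern_and: a list of event ids joined by AND (*)."""
    return all((eid, pid) in triggered for eid in pattern_and)

trigger("eTowReqd", "p001")
trigger("eDestinationKnown", "p002")   # belongs to a different instance

# tTow is not yet doable for p001: eDestinationKnown was triggered for p002.
assert not is_doable(["eTowReqd", "eDestinationKnown"], "p001")

trigger("eDestinationKnown", "p001")
assert is_doable(["eTowReqd", "eDestinationKnown"], "p001")
```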
The stage where the process is naturally completed or prematurely aborted is known as the
Process Termination stage. A process instance can be naturally completed when the Condition
of Termination (CoT) as specified by the process definition is met. For example, when the
eMMNotifDone event is triggered, a pdGold process instance is considered complete. Process
instances derive these CoTs from the original process definitions. During the runtime, the CoT of an individual process instance can be dynamically modified to include additional conditions or to exclude existing ones. On the other hand, a process instance can be aborted if it is no longer required (which is a management decision). For example, the RoSAS organiser may decide to abort a process instance if its execution is no longer required or valid, e.g., upon desertion of a case due to unforeseen conditions. During the process termination stage, the process-instance-specific models are removed from the enactment engine and the allocated resources are released.
5.2.2 Process Progression
As discussed in Chapter 4, a Serendip process definition is a collection of references to
behaviour units. Each behaviour unit specifies the pre- and post-conditions of a set of related
tasks in a declarative and loosely-coupled manner. In contrast to tightly-coupled task dependencies, this brings advantages such as the ease of modification in realising unforeseen business requirements. The Serendip runtime, and the way it progresses a process instance, are designed to exploit these advantages of loosely-coupled task dependencies.
9 For event patterns we use the symbols AND = *, OR = |, XOR = ^
Figure 5-3 shows the stages of process execution. Initially, an internal interaction (e.g., a complaint from the motorist to the Case-Officer) occurs and subsequently the relevant events are generated. Task execution occurs in response to these events. When the pre-condition (as specified by a pattern of events) of a task is met, a message (i.e., an Out Message) is sent from the role to the player (e.g., a Web service request) for processing. The events within the composite are transparent to the player. The player then executes the task as per its own process and sends the In Message (e.g., a Web service response) back to the role. This response can create one or more internal interactions. Based on the evaluation of these interactions, more events are triggered. These events enable the pre-conditions of further tasks, and the cycle continues until the process instance completes as defined by the condition of termination.
Figure 5-3. Process Execution Stage
The above brief description provided a high-level view of how a process progresses by
executing tasks. To have a better understanding of the runtime, the upcoming sections give a
more detailed account of how the runtime is designed to support the process progression as
outlined above.
5.3 Event Processing
The Serendip runtime adheres to the event-driven specification of the process control flow.
Processes are instantiated, executed and terminated based on events. As such, event processing
plays an important role in the Serendip runtime. This section describes how the Serendip runtime
supports such event-driven process enactment. Firstly, Section 5.3.1 describes how events
are captured and managed. Section 5.3.2 then describes how such events are triggered within the
context of a ROAD composite.
5.3.1 The Event Cloud
The Event Cloud is a repository of event records that are generated from many streams of events
[268]. The usefulness of events and the concept of Event Cloud have been explored in the past
[247, 268, 269]. The work of this thesis also uses the concept of Event Cloud with the purpose
of recording the events that are generated within the Serendip orchestration composite. The
events can be triggered due to the progression of different process instances. Therefore, these
process instances can be seen as the streams of event records that flow into the Event Cloud.
In Serendip, an event record is a tuple <eid, pid, timestamp, expiration>, where eid is the event
identifier and pid is the process instance identifier. The timestamp captures when the
event is fired and the expiration captures when the event eventually expires. By default, all
events expire when the process instance (denoted by pid) completes. The Event Cloud consists
of two main collections, i.e., an Event Repository and a Subscriber List, as shown in Figure 5-4.
Figure 5-4. The Event Cloud
Event Record Repository: used to keep the records of events. Adding events to the
repository is known as publishing. Event records are published by the internal
components of the ROAD composite. These components are known as publishers, as
shown in Figure 5-4. A typical example of a publisher is a Contract. A Contract
publishes events by interpreting the role-role interactions (Section 5.3.2). Event
records are garbage-collected when they expire. Table 5-1 lists the different types of
event publishers.
Subscriber List: used to maintain a list of subscribers that are interested in events
(or, to be precise, event patterns). Subscribers need to be maintained for notification
purposes. When a new event is added, the Event Cloud checks the event patterns
these subscribers are interested in and notifies those whose event patterns are matched.
Subscribers can be added or removed at any time. At process enactment time,
for example, the task instances of a new process instance can be dynamically added
to the list of subscribers. When the process instance completes, these subscribers are
removed. Table 5-2 lists the different types of subscribers.
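The event record tuple and the two collections described above can be sketched as follows. This is a minimal Python illustration, not the actual Serendip implementation; method names such as matches() and notify() are assumptions made for the example.

```python
import time

class EventRecord:
    """An event record tuple <eid, pid, timestamp, expiration> (Section 5.3.1)."""
    def __init__(self, eid, pid, timestamp=None, expiration=None):
        self.eid = eid
        self.pid = pid
        self.timestamp = timestamp if timestamp is not None else time.time()
        # None here stands for "expires when the process instance completes".
        self.expiration = expiration

class EventCloud:
    """Keeps an event repository and a subscriber list. Pattern matching is
    delegated to the subscribers themselves, so late pattern changes need
    no update to the cloud."""
    def __init__(self):
        self.repository = []     # published event records
        self.subscribers = []    # objects exposing matches(repository) and notify()

    def publish(self, record):
        self.repository.append(record)
        # Query each subscriber for its *current* event pattern on every publish.
        for s in list(self.subscribers):
            if s.matches(self.repository):
                s.notify()

    def subscribe(self, subscriber):
        self.subscribers.append(subscriber)

    def unsubscribe(self, subscriber):
        self.subscribers.remove(subscriber)

class TaskSubscriber:
    """A task instance that becomes doable when all events of its pre-event
    pattern (here, a simple AND of event ids) have fired for its process instance."""
    def __init__(self, pid, required_eids):
        self.pid = pid
        self.required_eids = set(required_eids)  # mutable: may deviate later
        self.doable = False

    def matches(self, repository):
        fired = {r.eid for r in repository if r.pid == self.pid}
        return self.required_eids <= fired

    def notify(self):
        self.doable = True

cloud = EventCloud()
tTow = TaskSubscriber("pdGold001", ["eTowReqd", "eDestinationKnown"])
cloud.subscribe(tTow)
cloud.publish(EventRecord("eTowReqd", "pdGold001"))
assert not tTow.doable          # pre-event pattern not yet met
cloud.publish(EventRecord("eDestinationKnown", "pdGold001"))
assert tTow.doable              # both events fired for pdGold001
```

The example mirrors the Task subscriber of Table 5-2: tTow becomes doable only once both eTowReqd and eDestinationKnown have fired for its own process instance.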
Table 5-1. Types of Event Publishers
Contract: to record interpreted events (via contractual rules). Example: contract CO_TT interprets message CO_TT.mOrderTow.Req to generate event eTowReqd.
Organiser: to add events to the Event Cloud. Example: the organiser adds event eMMNotifDone to process instance pdGold034 to terminate the instance, because the motorist did not send the notification as obliged.
Table 5-2. Types of Event Subscribers
Task: to identify when a task becomes doable, i.e., to detect if its pre-event pattern is met. Example: task tTow of process instance pdGold001 subscribes to the Event Cloud to match the event pattern eTowReqd * eDestinationKnown.
Enactment Engine: to identify when a new process should be enacted, i.e., to detect when the CoS of each process definition is met. Example: the engine subscribes to the Event Cloud to detect the CoS event patterns of all the process definitions, such as eGoldRequestRcvd.
Process Instance: to identify when a process instance should be terminated, i.e., to detect if the CoT event pattern is met. Example: process instance pdGold001 subscribes to the Event Cloud to detect the CoT event pattern eTTPaid * eGRPaid * eMMConfirmed.
Adaptation Script Scheduler: to identify when to execute a scheduled adaptation script (Section 6.3.3). Example: an adaptation script S1 is dynamically scheduled to change a pre-condition of a task from its original definition if event eTowFailed occurs in process instance pdGold034; the scheduler subscribes to the event eTowFailed of process instance pdGold034.
Organiser: to receive alerts upon events of interest. Example: the organiser (role/player) wishes to be alerted when eTowFailed has occurred for process instance pdGold034.
The use of such a publish-subscribe mechanism in the Event Cloud enables loose coupling
among the tasks to be executed. As described in Section 4.2.1, the tasks of a Serendip process are
loosely coupled, e.g., tTow and tPayTT do not relate to each other directly by design. Changes
can be carried out on tasks to deviate them from the original process definition. As mentioned in
Expectation 4, the runtime should be ready for such late deviations in process instances. Two
important characteristics of the Event Cloud facilitate this requirement.
1. The Event Cloud maintains subscribers, not their subscribed event patterns.
2. Subscribers can be dynamically added to or removed from the Event Cloud.
The Event Cloud does not maintain the event patterns of the subscribers. Instead, only a list of
subscribers is maintained. When a new event is received, the Event Cloud queries each subscriber
for its most up-to-date event pattern. This characteristic allows late changes to
subscribers. For example, the task tPayTT of process instance pdGold001 can modify its pre-
event pattern to deviate from the original event pattern specified in its originating
process definition pdGold. There is no need to explicitly update the event patterns in the
Event Cloud upon such changes.
The subscribers can be dynamically added or removed from the Event Cloud. In this way the
adaptations (e.g., new task insertions and removals) can be dynamically carried out while the
process is running (Section 6.2). For example, consider an ad-hoc change, where a new task
tNofityTowFailureToCustomer is added into the instance pdGold001 to handle the event
eTowFailed. To support this change, a new task instance is first created and added to the
process instance pdGold001. Once the task is added to the instance, the task is subscribed to the
Event Cloud by the process instance.
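The two characteristics can be illustrated together with a minimal sketch. The API is hypothetical (it is not the actual Serendip interface), and the second event, eCustomerKnown, is invented for the example: an ad-hoc task is inserted into a running instance simply by subscribing a new task instance, and its pre-event pattern is later modified without any update to the Event Cloud itself.

```python
class EventCloud:
    """Minimal pub-sub core (illustrative, not the actual Serendip API)."""
    def __init__(self):
        self.events = []             # fired (eid, pid) pairs
        self.subscribers = []        # queried for their pattern on every publish

    def subscribe(self, sub):
        self.subscribers.append(sub)

    def publish(self, eid, pid):
        self.events.append((eid, pid))
        for sub in list(self.subscribers):
            if sub.pattern_met(self.events):   # always the *current* pattern
                sub.on_match()

class TaskInstance:
    def __init__(self, task_id, pid, pre_events):
        self.task_id, self.pid = task_id, pid
        self.pre_events = set(pre_events)      # may be modified at any time
        self.enabled = False

    def pattern_met(self, events):
        return self.pre_events <= {e for e, p in events if p == self.pid}

    def on_match(self):
        self.enabled = True

cloud = EventCloud()

# Characteristic 2: an ad-hoc task is inserted into the running instance
# pdGold001 simply by subscribing a new task instance.
notify_task = TaskInstance("tNofityTowFailureToCustomer", "pdGold001", ["eTowFailed"])
cloud.subscribe(notify_task)

# Characteristic 1: the task later deviates its pre-event pattern; the Event
# Cloud needs no update because it asks the subscriber on each publish.
# (eCustomerKnown is a hypothetical event, used only for illustration.)
notify_task.pre_events = {"eTowFailed", "eCustomerKnown"}

cloud.publish("eTowFailed", "pdGold001")
assert not notify_task.enabled     # deviated pattern not yet met
cloud.publish("eCustomerKnown", "pdGold001")
assert notify_task.enabled
```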
The next section will describe how these events are published in the Event Cloud via rule-
based contracts.
5.3.2 Event Triggering and Business Rules Integration
Business rules technology is gaining popularity due to its expressiveness and inference
capabilities [270]. In the past, business rules have been heavily explored in defining knowledge-based systems and expert systems [176, 270-273]. Business Rule Integration [179, 270] with
BPM has added many advantages. These include better decoupling of the system, ease of
understanding, reusability, manageability by non-technical staff, and improved maintainability
[179, 180]. Section 3.4.3 has presented a number of such approaches that integrate business
rules into processes to improve their adaptability. Apart from the support for adaptability, the
rule-based message interpretation allows the specification of complex message interpretation
logic.
According to the Business Rules Group [274], "A business rule is a statement that defines
or constrains some aspect of the business. It is intended to assert business structure or to control or
influence the behaviour of the business". According to this definition, business rules control the
way a business behaves. As such, the use of business rules is a natural choice for controlling and
interpreting the interactions in a service composite, and thereby controlling the runtime behaviour of the
business model as realised by a service orchestration. Nonetheless, a monolithic representation
of business rules in a service composite can cause problems in terms of modularity and
adaptability. Inappropriate modularity can cause confusion in writing or improving the rules that
represent the assertions and constraints of a business. A lack of support for adaptability can
cause problems when a business evolves and the specified assertions and constraints are no
longer valid. Therefore, business rules need to be specified with proper modularity and
support for adaptability.
In this work we integrate business rules with the Serendip orchestration runtime via ROAD
contracts. Our aim is to clearly separate the process-level concerns from the complex message
interpretation concerns. In a service orchestration, message interpretation needs to be based
on the existing relationships among the partner services. These relationships can change
dynamically at runtime. It follows that the processes running on the composite should also
select their paths of execution in response to these changing service relationships.
A ROAD composite uses contracts to explicitly represent the relationships among the roles in
the composite [81]. A contract represents a well-defined relationship between two roles. The
players bound to the roles, such as Web services, need to interact according to the defined
contracts. Contracts also have a memory at runtime. In this memory a contract maintains
a set of facts that represent the contract state and a set of rules that represent the assertions
of the relationship. The messages passing through the contract are interpreted according to the facts
(i.e., the contractual facts) and the rules (i.e., the contractual rules) maintained in the contract
memory. The facts are dynamically modified by the rules. This work uses these contractual
rule-based interpretations to trigger events, exploiting the following advantages.
Modularity: Contracts provide the modularity to specify the facts and rules for
validating and interpreting the role-role interactions. The end result of this validation
and interpretation step can be used to determine the course of the process flow.
Adaptability: The memory of the contract, represented by the facts and rules, is
dynamic. The facts and rules can be updated at runtime to
reflect changes to the relationship between the two services.
For example, roles CO and TT need to interact according to the interaction terms defined in
the CO_TT contract, and the relevant messages have to go through the CO_TT contract. As
such, the modularity provided by the contract helps to specify all the interactions between CO
and TT, including how to order the tow service and how to pay for it.
In addition, the CO_TT contract captures facts and assertions (rules) that allow these
interactions to be evaluated.
The CO_TT contractual memory can be updated to reflect the changing evaluation conditions
which determine a successful towing. For example, a new rule can be dynamically injected to
calculate whether the towing has been done in a remote or a metropolitan area, which could
change the final outcome of a successful towing. A fact can be updated (by rules) to record the
number of requests from the Case-Officer to Tow-Truck for a given day. This provides the
required adaptability to evolve the contractual rules to comply with the changing business
requirements.
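One way to picture such dynamic rule injection is to model a contract's rule base as a mutable collection of rules that read and update facts. This is an illustrative Python sketch, not the rule language actually used by ROAD; the message names, fact names and rule logic are assumptions made for the example.

```python
class Contract:
    """A hypothetical contract whose rule base is a mutable list of callables,
    so new rules can be injected while the composite is running."""
    def __init__(self):
        self.facts = {}
        self.rules = []            # each rule: (facts, message) -> list of events

    def add_rule(self, rule):      # dynamic rule injection
        self.rules.append(rule)

    def interpret(self, message):
        events = []
        for rule in self.rules:
            events.extend(rule(self.facts, message))
        return events

def count_requests(facts, message):
    # Rule: record the number of tow requests for the day (a fact update).
    if message == "mOrderTow.Req":
        facts["requests_today"] = facts.get("requests_today", 0) + 1
    return []

co_tt = Contract()
co_tt.add_rule(count_requests)
co_tt.interpret("mOrderTow.Req")

# Later, a new rule is injected to classify remote vs metropolitan towing,
# which may change what counts as a successful tow (illustrative logic).
def classify_area(facts, message):
    if message.startswith("mTowDone"):
        facts["area"] = "remote" if "remote" in message else "metro"
        return ["eTowSuccess"]
    return []

co_tt.add_rule(classify_area)
events = co_tt.interpret("mTowDone:remote")
assert events == ["eTowSuccess"]
assert co_tt.facts["area"] == "remote"
```

The point of the sketch is that the rule base and the facts evolve independently of the process definitions that consume the triggered events.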
One of the key advantages of using ROAD is its ability to perform service relationship-driven
interpretation of events. This means that not only the message but also the current context of the
service relationship can influence the final outcome of the interpretation of an interaction. The
current context of the service relationship can be derived from the facts maintained in the contract
memory. For example, suppose that according to the contract between roles CO and TT, a
bonus payment needs to be made for every Nth tow request. In this case, whether the bonus
payment should be made or not is a decision based on the current context of the service
relationship rather than a decision based solely on the passing message. A fact, e.g.,
TowCounter, in the contract memory, continuously updated by the contractual rules, would serve this
purpose. An event eTTBonusAllowed can be triggered if the condition (TowCounter > N) is true. A task
tPayBonus may have the event eTTBonusAllowed as its pre-event pattern to perform the actual
payment. As such, the contractual rules integration allows not only message-based
interpretation but also service relationship-driven interpretation.
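The TowCounter example can be sketched as follows. The threshold N, the message names and the interpretation logic are illustrative assumptions, not the actual contractual rule syntax.

```python
class COTTContract:
    """A hypothetical CO_TT contract holding facts (contract state) and
    rules that both update facts and trigger events in an event cloud."""
    N = 3  # every Nth tow earns a bonus (illustrative threshold)

    def __init__(self, event_cloud):
        self.facts = {"TowCounter": 0}   # contract memory
        self.event_cloud = event_cloud   # (eid, pid) records are appended here

    def interpret(self, message, pid):
        """Rule-based interpretation of a message passing through the contract."""
        if message == "mOrderTow.Req":
            # Rule 1: a tow request updates the TowCounter fact...
            self.facts["TowCounter"] += 1
            # ...and triggers eTowReqd for this process instance.
            self.event_cloud.append(("eTowReqd", pid))
            # Rule 2: a relationship-driven decision, based on contract state
            # rather than on the passing message alone.
            if self.facts["TowCounter"] % self.N == 0:
                self.event_cloud.append(("eTTBonusAllowed", pid))

cloud = []
contract = COTTContract(cloud)
for pid in ("p001", "p002", "p003"):
    contract.interpret("mOrderTow.Req", pid)
# The third request satisfies TowCounter % N == 0 and enables tPayBonus.
assert ("eTTBonusAllowed", "p003") in cloud
```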
Figure 5-5 shows a message exchange between the roles CO and TT via the CO_TT contract.
Figure 5-5(a) shows that the tow request (#1) from role CO to role TT is interpreted (#2)
according to the rules defined in CO_TT contract. Consequently the event eTowReqd is
triggered and recorded in the Event Cloud (#3). Figure 5-5(b) shows that the response from role
TT to role CO is interpreted, again according to the rules defined in the CO_TT contract.
Subsequently, the event eTowSuccess (or eTowFailed otherwise) is triggered and recorded in
the Event Cloud.
Figure 5-5. Message Interpretation
In conclusion, the complexity of the evaluation conditions/assertions is captured in contracts.
The enactment engine is only concerned with the outcomes of the evaluation, i.e., the triggered
events. While benefiting from powerful service relationship-driven message interpretation,
this design decouples the process execution from the complexity of interpreting interactions as
captured in the contracts.
5.4 Data Synthesis of Tasks
In Section 4.6 we described how the data synthesis and synchronisation mechanism is
designed to form a membrane (the Interaction Membrane) around the composite. The
membrane is the border at which transformations between internal messages (due to role-role
interactions) and external messages (role-player interactions) take place. An external
interaction is composed, possibly from several internal interactions, when a service invocation
needs to be formulated. Conversely, an external interaction can cause further internal interactions.
Transformations need to be carried out to propagate data across the membrane. The membrane
isolates the external interactions from the (internal) core processes (and vice versa) to provide
organisational stability. In order to support this concept, the design of the Serendip runtime
needs to address a number of challenges.
1. The request to complete a task needs to be composed from data derived from a single or
possibly multiple internal messages. When multiple messages are used, they
can arrive in any order. The runtime should accommodate such unpredictability in internal
message ordering.
2. Multiple processes can run simultaneously. Multiple internal messages of the same type
(e.g., CO_TT.mOrderTow.Req) but belonging to different process instances (e.g., p001,
p002) can arrive at the role simultaneously. The message synthesis mechanism of the
runtime should be able to deal with such parallelism.
3. Sending requests to the player needs to be synchronised according to the task dependencies
defined in the behaviour units (e.g., bTowing) of the organisation. The message synthesis
mechanism should provide support for such synchronisation.
4. The response from the player may be used to create a single or multiple internal
interactions. The responses (to multiple requests) may arrive in any order irrespective of the
order in which the requests are made. For example, if towing requests are made to the
FastTow service in the order of process instances p001 → p002, the responses may come in
the order of p002 → p001. The runtime should be able to handle the unpredictability in
external message ordering.
5.4.1 The Role Design
In order to address the above challenges, the runtime is designed as shown in Figure 5-6, which
gives a zoomed-in view of an instantiated role's composition at runtime. As shown, a role
instance contains four message containers to store messages and a Message Analyser. The
purpose of the Message Analyser is two-fold: data synthesis and synchronisation.
1. Synthesis: Performs the data transformations across (in and out of) the membrane. The
Message Analyser performs two types of data-transformations. These two transformations
are named as follows from the perspective of the composite.
a. Out-Transformations (Out-T): Transform internal messages to external messages.
We call the used internal messages the Source-Messages and the generated external
message the Out-Message.
b. In-Transformations (In-T): Transform external messages to internal messages. We
call the used external message the In-Message and the generated internal messages,
the Result-Messages.
2. Synchronisation: Synchronises the Out-Transformation with the behaviour layer by respecting
the ordering of task executions and the reception of source messages. Note that the In-
Transformation is not synchronised with the behaviour layer because, as soon as a message
is received from the player, the message is In-Transformed to create the internal Result-
Messages. Nevertheless, the runtime makes sure that the interpretation of Result-Messages via
the contracts (Section 5.3.2) triggers events conforming to the post-event pattern of the given
task.
In order to support the two transformations and the synchronisation, there are four message
containers in a role that the Message Analyser will use to pick and drop messages.
PendingOutBuffer: the internal messages (Source-Messages) flowing from the
adjoining contracts are initially buffered here.
OutQueue: once Out-Transformed, the external messages from the role to the player
(Out-Messages) are placed here.
PendingInQueue: the external messages received from the player (In-Messages) are
initially placed here.
RouterBuffer: once In-Transformed, the internal messages (Result-Messages) ready
to be routed to other roles (across contracts) are placed here.
5.4.2 The Transformation Process
The complete message transformation process is detailed below.
Figure 5-6. Data Synthesis and Synchronisation Design
Synchronisation step:
1. When the behaviour layer has identified that Task Ti of Process Instance PIj is ready for
execution, the Message Analyser of the obligated role is notified.
Out-Transformation steps:
2. The Message Analyser picks the relevant Source-Messages with the same process
identifier PIj from the PendingOutBuffer, as specified in the task description of Ti.
3. The Message Analyser performs the Out-Transformation to generate the Out-Message.
This may include enriching the information of the message and transforming the
message into the expected format. The Out-Message is then marked with the same
process identifier PIj and placed in the OutQueue.
4. The underlying message delivery mechanism (e.g., ROAD4WS for Web services [261])
delivers the message from the OutQueue to the player. Then the player executes task Ti
of PIj.
In-Transformation steps:
5. The underlying message delivery mechanism receives the response (i.e., the In-
Message) from the player and places it in the PendingInQueue. The message delivery
mechanism ensures that the In-Message is marked with the same process identifier PIj
e.g., via a synchronous Web service request.
6. The Message Analyser picks the messages from PendingInQueue and performs the In-
Transformation as specified in task Ti to create Result-Messages.
7. The created Result-Messages are marked with the same process identifier PIj and placed
in the RouterBuffer, to be routed across the adjoining contracts (towards suitable roles).
Then these contracts trigger events by interpreting these messages, as explained in
Section 5.3.2. The runtime (via the Event Observer, Section 7.1.2) makes sure that the
triggered events conform to the post-event pattern of the corresponding task.
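The steps above can be sketched end-to-end as follows. This is a simplified, single-threaded Python illustration of the containers in Figure 5-6; the method names and message payloads are assumptions made for the example, not the actual Serendip API.

```python
from collections import deque

class RoleInstance:
    """Sketch of a role's message containers and Message Analyser.
    Container names follow Figure 5-6; transformation logic is illustrative."""
    def __init__(self):
        self.pending_out = []      # Source-Messages, buffered in arrival order
        self.out_queue = deque()   # Out-Messages to the player
        self.pending_in = deque()  # In-Messages from the player
        self.router_buf = deque()  # Result-Messages routed across contracts

    # Internal messages from adjoining contracts land here in any order (Challenge 1).
    def receive_internal(self, msg_type, pid, body):
        self.pending_out.append((msg_type, pid, body))

    # Synchronisation step: the behaviour layer says task `task_id` of `pid` is doable.
    def task_doable(self, task_id, pid, source_types):
        # Step 2: pick the Source-Messages of this pid (Challenge 2: pid-tagged).
        sources = [m for m in self.pending_out
                   if m[0] in source_types and m[1] == pid]
        self.pending_out = [m for m in self.pending_out if m not in sources]
        # Step 3: Out-Transformation into one Out-Message, marked with the pid.
        out_msg = (f"TT.{task_id}.Req", pid, {t: b for t, _, b in sources})
        self.out_queue.append(out_msg)   # Step 4 would deliver it to the player

    # Step 5: the delivery mechanism places the player's response here.
    def receive_from_player(self, msg_type, pid, body):
        self.pending_in.append((msg_type, pid, body))

    # Steps 6-7: In-Transformation into pid-tagged Result-Messages.
    def in_transform(self, result_types):
        msg_type, pid, body = self.pending_in.popleft()
        for rt in result_types:
            self.router_buf.append((rt, pid, body))

tt = RoleInstance()
tt.receive_internal("CO_TT.mOrderTow.Req", "p001", {"car": "ABC-123"})
tt.receive_internal("GR_TT.mSendGRLocation.Req", "p001", {"loc": "x,y"})
tt.task_doable("tTow", "p001",
               {"CO_TT.mOrderTow.Req", "GR_TT.mSendGRLocation.Req"})
tt.receive_from_player("TT.tTow.Res", "p001", {"towed": True})
tt.in_transform(["CO_TT.mOrderTow.Res", "GR_TT.mSendGRLocation.Res"])
assert [m[0] for m in tt.router_buf] == ["CO_TT.mOrderTow.Res",
                                         "GR_TT.mSendGRLocation.Res"]
```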
Let us consider a specific example scenario to clarify the above steps using the task tTow of
role TT. First, the behaviour layer determines that the task tTow of process instance p001 has
become doable (Step 1). Subsequently, the Source-Messages of the task tTow
(CO_TT.mOrderTow.Req, GR_TT.mSendGRLocation.Req) that belong to the same process
instance (i.e., p001) are picked from the PendingOutBuffer (Step 2). Then these Source-
Messages are transformed (Step 3) to create the Out-Message (TT.tTow.Req), which is the
message delivered to the player (Step 4). When the task is complete (i.e., the car has been towed)
and the player responds, the response message or In-Message (TT.tTow.Res) is placed in the
PendingInQueue (Step 5). The underlying message delivery mechanism makes sure that the
same pid of the request is associated with the response. Then TT.tTow.Res is picked and
transformed to create two Result-Messages, i.e., CO_TT.mOrderTow.Res and
GR_TT.mSendGRLocation.Res (Step 6). These Result-Messages are again associated with the
same pid as the In-Message. Then these Result-Messages are placed in the RouterBuffer (Step 7)
to be routed to the relevant roles, i.e., CO and GR, via the adjoining contracts, i.e., CO_TT and
GR_TT.
The above role and transformation design addresses the aforementioned challenges.
Challenge 1: The PendingOutBuffer provides the storage for messages to be buffered. The
messages are picked based on the synchronisation request rather than the order in which they
arrive. This addresses the challenge of unpredictability in message ordering.
Challenge 2: All the messages are annotated with the process instance identifier. Therefore,
the same type of messages of different process instances can be uniquely identified. This
addresses the challenge posed by the parallelism in process execution.
Challenge 3: The Message Analyser is synchronised with the behaviour layer of the
organisation. The Out-Transformation is carried out only when the task becomes doable. As
such, the message synthesis mechanism enables the synchronisation required.
Challenge 4: The message transformations are carried out in an asynchronous manner within
the composite. The message containers allow a separation between the role and the message
delivery mechanism, which serves the Out/In queues outside the membrane. The role does not
need to wait for the response from the player. The Message Analyser operates in an
asynchronous manner, whereas the message delivery (player service invocation) may operate
in either a synchronous or an asynchronous manner (Section 7.2.5). The message delivery
mechanism may spawn multiple threads to serve the messages in the OutQueue, as well as to insert
messages into the InQueue when the player responds (the player implementation needs to support
multi-threaded serving). In this way, the responses from the player may arrive in an order
different from the request order.
5.5 Dynamic Process Graphs
The Serendip behaviour units capture different organisational behaviours. The Model Provider
Factory (MPF) maintains all the behaviour units. The task dependencies are defined in a
declarative manner. The benefits of having such declarative behaviour-specification were
described in Section 4.2 and Section 4.3. Behaviour units are assembled to specify a process
definition to fulfil a particular customer requirement. For example the pdGold process definition
assembles behaviour units bComplaining, bTowing, bRepairing and bTaxiDropOff for
customers in the Gold category.
However, such declarative descriptions can be difficult to visualise mentally. Process
visualisation helps to identify the possible gaps and optimisation opportunities in the process
definitions. In fact, one main goal of BPM is to visualise processes to understand how processes
are actually performed (lived) and how they should be performed (intended) [275]. Process
graphs as a visual aid are provided for software engineers to understand the complete flow of a
process as illustrated by Figure 5-7.
Figure 5-7. Generating Process Graphs from Declarative Behaviour Units
Apart from the benefit of visualisation, such a graph-based representation of a process helps
to verify the correctness of the defined process. Since a behaviour unit is potentially reused by several
process definitions, there may be reachability issues. Such issues with the correctness of a process
definition can be identified, both visually and formally, if the complete flow of the process is
constructed.
Furthermore, such a graph construction cannot be static. The behaviour units can be changed
during the runtime, affecting all the process definitions that share these behaviour units. For
example, the insertion of a new task tNofityTowFailureToCustomer into the behaviour unit bTowing could
affect all the process definitions that reference it, i.e., pdSilv, pdGold, and pdPlat. In some cases
completely new behaviour units can be linked to the process definitions to add more features
during the runtime. For example, behaviour bHotelBooking can be dynamically added to
process definition pdPlat to provide accommodation to stranded platinum members, if required.
In addition, an individual process instance also can deviate from the original definition to
accommodate ad-hoc changes. These changes lead to changes in the complete flow of a process
definition or a process instance. As the MPF maintains all these process models, it should
support the dynamic re-construction of the process graphs, incorporating the
modifications. This section describes how the process graphs are dynamically constructed.
Due to the event-driven nature of task specifications and dependencies in Serendip, a suitable
notation for visual representation should also support events. The Event-driven Process Chains (EPC)
process modelling notation introduced by Scheer [17] has been chosen in
this thesis as the notation for process visualisation. (EPC is also used in process modelling tools
such as ARIS [276] and SAP R/3 [277].) An EPC graph consists of three main elements, i.e.,
functions, events and connectors (AND, OR, XOR). Directed edges connect these elements
into a directed graph forming a workflow. Serendip defines a process as a collection of
declarative behaviour units. Each behaviour unit contains tasks whose pre- and post-conditions
(event patterns) define the control flow. These declarative behaviour units need to be
transformed into an EPC workflow model in the form of a process graph. Then a suitable
tool can be used to visualise these graphs.
The construction of an EPC process graph is carried out in two-steps.
1. Construct an atomic graph for each task10 of the process.
2. Link all the constructed atomic graphs to form the final graph by mapping events.
Section 5.5.1 describes what an atomic graph is and how to construct pre-trees and post-trees
as part of Step 1. Section 5.5.2 describes how event-based mapping is used to link the atomic
graphs together to construct a complete EPC workflow as part of Step 2.
5.5.1 Atomic Graphs
An atomic graph is a single EPC function with its pre-tree and post-tree constructed. Here,
a pre-tree is a tree constructed of events and connectors, based on the pre-event pattern of a
task. A post-tree is a tree constructed of events and connectors, based on the post-event pattern
of a task. The EPC function corresponds to a Serendip task. An EPC event corresponds to a
Serendip event specified in the pre- and post-event patterns of tasks. The EPC connector symbols
(AND, OR, XOR) are used with their semantics as explained in [278] and summarised in
Table 5-3.
A sample atomic graph is shown in Figure 5-8. As shown, the pre-event pattern eTowReqd *
eDestinationKnown creates one AND connector with two incoming edges for the events
eTowReqd and eDestinationKnown to form the pre-tree of task tTow. Similarly, the post-event
pattern eTowSuccess ^ eTowFailed creates the post-tree, which has an XOR connector with two
outgoing edges for the events eTowSuccess and eTowFailed. Pre- and post-trees can be complex,
depending on the complexity of the specified event patterns.
10 To mark the termination of a process definition, we insert an additional task with the identifier
<Pid>.tTerminate. This additional task's pre-event pattern is the CoT of the process definition and its post-event pattern is an event with the identifier e<Pid>End. This ensures the visualisation captures the termination condition of the process definition.
Table 5-3. EPC Connectors
OR. Join: any incoming branch, or a combination of incoming branches, will initiate the outgoing branch. Split: one or more outgoing branches will be enabled after the incoming branch finishes.
XOR. Join: one, and only one, incoming branch will initiate the outgoing branch. Split: one, and only one, outgoing branch will be enabled after the incoming branch finishes.
AND. Join: the outgoing branch is initiated only after all the incoming branches have been executed. Split: two or more outgoing branches are enabled after the incoming branch finishes.
Figure 5-8. An Atomic EPC Graph
The algorithm that constructs an atomic graph for a given task is listed in Listing 5-1. The
procedure createAtomicGraph() creates an atomic graph for a given task T. It uses
the grammar given in Listing 5-2 to construct an Abstract Syntax Tree (AST). Two ASTs are
generated, one for the pre-event pattern and one for the post-event pattern. Using the constructed
ASTs, the pre- and post-trees are built by the procedure construct(), which recursively calls
itself until all the nodes of the AST are processed.
Listing 5-1. The Algorithm to Construct an Atomic Graph
//The procedure to create an atomic graph.
PROC createAtomicGraph(Task T)
    G = new Graph();           //Create a new graph.
    F = new EPCFunction();     //Create a new EPC function.
    G.addFunction(F);          //Add the function to the graph.
    //Abstract syntax trees are constructed by parsing the pre- and post-event patterns.
    AST preAST = parse(T.getPreEventPattern());
    AST postAST = parse(T.getPostEventPattern());
    //Create the pre- and post-trees.
    construct(preAST, G, F, TRUE);
    construct(postAST, G, F, FALSE);
    RETURN G;
ENDPROC

//The procedure to construct an EPC (sub-)graph G for a given AST.
//This procedure is reused to construct both pre- and post-trees.
PROC construct(AST anAST, Graph G, Node node, BOOL isPreTree)
    EPCElement elem = NULL;
    SWITCH anAST.Type          //Type of the root of the tree.
        CASE 'OR'
            elem = new ORConnector();
            G.addORConnector(elem);
            BREAK;
        CASE 'XOR'
            elem = new XORConnector();
            G.addXORConnector(elem);
            BREAK;
        CASE 'AND'
            elem = new ANDConnector();
            G.addANDConnector(elem);
            BREAK;
        CASE 'EVENT'
            elem = new Event(anAST.getId());
            G.addEvent(elem);
            BREAK;
    ENDSWITCH
    //Add edges: a pre-tree element points to the node; a post-tree node points to the element.
    IF (isPreTree)
        G.addEdge(elem, node);
    ELSE
        G.addEdge(node, elem);
    ENDIF
    //Recursively process all the children.
    FOR (INT i=0 TO anAST.getChildCount())
        construct(anAST.getChild(i), G, elem, isPreTree); //recursive call
    ENDFOR
ENDPROC
Listing 5-2. EBNF Grammar for Event Pattern
eventpattern : xorpattern ;
xorpattern   : orpattern ('^' orpattern)* ;
orpattern    : andpattern ('|' andpattern)* ;
andpattern   : atom ('*' atom)* ;
atom         : ('(' eventpattern ')') | event ;
event        : ('a'..'z' | 'A'..'Z' | '0'..'9')+ ;
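The grammar can be realised as a small recursive-descent parser, one procedure per rule. The Python sketch below is our illustration, not the Serendip parser; the nested-tuple AST encoding is an assumption. Note the precedence implied by the rule nesting: '*' (AND) binds tightest, then '|' (OR), then '^' (XOR).

```python
import re

def parse(pattern):
    """Parse an event pattern into a nested-tuple AST, e.g. ('XOR', a, b)."""
    # Tokens: alphanumeric event identifiers and the operators ( ) ^ | *
    tokens = re.findall(r"[A-Za-z0-9]+|[()^|*]", pattern)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def binary(sub, op_symbol, op_name):
        # Parse sub (op sub)*, collapsing a single operand to itself.
        children = [sub()]
        while peek() == op_symbol:
            take()
            children.append(sub())
        return children[0] if len(children) == 1 else (op_name, *children)

    def eventpattern():
        return xorpattern()

    def xorpattern():
        return binary(orpattern, '^', 'XOR')

    def orpattern():
        return binary(andpattern, '|', 'OR')

    def andpattern():
        return binary(atom, '*', 'AND')

    def atom():
        if peek() == '(':
            take()
            node = eventpattern()
            assert take() == ')', "unbalanced parentheses"
            return node
        return ('EVENT', take())  # an alphanumeric event identifier

    ast = eventpattern()
    assert pos == len(tokens), "trailing input"
    return ast
```

For example, the post-event pattern of tTow, "eTowFailed ^ eTowSuccess", parses to an XOR node with two EVENT children, which construct() then turns into the XOR split of the post-tree.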
5.5.2 Patterns of Event Mapping and Construction of EPC Graphs
For a given process, a task has dependencies on other tasks through events. These dependencies
appear as identical events in the constructed atomic graphs. For example, the tTow and
tPayTT tasks depend on each other via the event eTowSuccess, as shown in Figure 5-9. Within the
scope of a process definition (or process instance), these identical events need to be mapped
and linked to construct the complete EPC graph. However, this mapping needs to be
carried out in a way that ensures the validity of the resulting EPC graph.
In order to carry out the mapping while ensuring the validity of the final EPC graph that represents
a process, we introduce a set of eight event mapping patterns, as shown in Figure 5-10. A given
event may have a predecessor, a successor, or both. When the mapping is carried out, the
predecessors and successors need to be carefully arranged. Depending on their applicability to
predecessors or successors, the patterns are categorised into two sets: predecessor processing
patterns and successor processing patterns. These patterns are used in pairs to
perform a single event mapping, e.g., Pattern A + Pattern G.
The selection of a pair is based on how the event's predecessors and successors are organised
for a given event:
1. A predecessor or a successor of an event can be a connector (AND, OR, XOR), a Task or
a Null/void (Ø).
2. An event cannot have another event as a predecessor or successor in the graph. Events
always relate to each other via a Connector or a Task.
Column 1 of Figure 5-10 provides four patterns to process the predecessors of a mapping
event, whereas Column 2 provides four patterns to process its successors. The
objective is to create one graph out of two graphs that share an identical event. For
example, eTowSuccess in Figure 5-9 is the identical event in both atomic graphs. Let us call
these two events Event 1 and Event 2 to uniquely identify them: Event 1 is the eTowSuccess in
Graph 1 and Event 2 is the same event in Graph 2. Each pattern specifies what needs to be done
to the two graphs (Graph 1 and Graph 2) when Event 1 and Event 2 are mapped or linked
into one, as explained in Table 5-4.
Figure 5-9. Mapping Identical Events
Figure 5-10. Predecessor and Successor Processing for Merging Events
[Figure 5-9 content: Graph 1 is the atomic graph of TT.tTow (eTowReqd and eDestinationKnown joined by an AND connector into TT.tTow, which splits via an XOR connector into eTowFailed and eTowSuccess); Graph 2 is the atomic graph of CO.tPayTT (eTowSuccess leading into CO.tPayTT, which produces eTTPaid). The identical event eTowSuccess appears as Event 1 in Graph 1 and as Event 2 in Graph 2.]
[Figure 5-10 content: eight diagrammatic patterns for merging two graphs at an identical event. Column 1 shows the predecessor processing patterns A, B, C and D; Column 2 shows the successor processing patterns E, F, G and H. In the legend, Ø denotes a missing (NULL) predecessor or successor, plain shapes denote tasks or connectors, arrows denote edges, and crossed-out elements denote deleted edges, events, tasks or connectors.]
Table 5-4. The Mapping Patterns and the Actions
Predecessor Processing Patterns:

Pattern A: Both Event 1 and Event 2 have a predecessor.
Actions: The predecessors of the identical events of Graph 1 and Graph 2 are kept by combining them via an OR connector. New edges, i.e., predecessor 1→OR, predecessor 2→OR and OR→Event 1, are added. Event 2 and its edge from predecessor 2 are discarded. The edge of Event 1 from its predecessor is also discarded.

Pattern B: Event 1 does not have a predecessor, but Event 2 has a predecessor.
Actions: The only predecessor of Graph 2 is connected to Event 1 via a new edge. Event 2 and its edge from predecessor 2 are discarded.

Pattern C: Event 1 has a predecessor, but Event 2 does not have a predecessor.
Actions: Event 2 is discarded.

Pattern D: Neither Event 1 nor Event 2 has a predecessor.
Actions: Event 2 is discarded.

Successor Processing Patterns:

Pattern E: Both Event 1 and Event 2 have a successor.
Actions: The successors of the identical events of Graph 1 and Graph 2 are kept by combining them via an AND connector. New edges Event 1→AND, AND→successor 1 and AND→successor 2 are added. Event 2 and its edge to successor 2 are discarded. The edge from Event 1 to successor 1 is also discarded.

Pattern F: Event 1 does not have a successor, but Event 2 has a successor.
Actions: The only successor of Graph 2 is connected to Event 1 via a new edge. Event 2 and its edge to successor 2 are discarded.

Pattern G: Event 1 has a successor, but Event 2 does not have a successor.
Actions: Event 2 is discarded.

Pattern H: Neither Event 1 nor Event 2 has a successor.
Actions: Event 2 is discarded.
A mapping is done by selecting a Pattern-Pair: one pattern from Column 1 and one from Column 2.
A Pattern-Pair ∈ {A, B, C, D} × {E, F, G, H}
The truth table for the pattern-pair selection is given in Table 5-5. Depending on the presence
of predecessors and successors, a pair of patterns is selected according to the truth table. To
elaborate how the pattern-pair selection works, let us take the mapping shown in Figure 5-9,
where the two graphs have an identical event identified by eTowSuccess. As mentioned,
let us call the eTowSuccess in Graph 1 Event 1 and the eTowSuccess in Graph 2 Event 2.
Event 1 has a predecessor (the XOR connector) but does not have a successor.
Event 2 does not have a predecessor but has a successor (the task tPayTT).
According to the truth table given in Table 5-5, the corresponding pair of patterns that should
be applied is C+F, where the predecessor pattern is C and the successor pattern is F. Applying
pattern C deletes Event 2 and keeps the rest intact. Applying pattern F
creates a new edge from Event 1 to the task tPayTT, as shown in Figure 5-11.
Figure 5-11. A Linked Graph
[Figure 5-11 content: the linked graph; eTowReqd and eDestinationKnown join via an AND connector into TT.tTow, which splits via an XOR connector into eTowFailed and eTowSuccess, and eTowSuccess now leads into CO.tPayTT.]
Table 5-5. The Truth Table for Pattern Pair Selection
Event 1 (Graph1) Event 2 (Graph2) Pattern Pair
Predecessor? Successor? Predecessor? Successor?
1 1 1 1 A+E
1 1 1 0 A+G
1 1 0 1 C+E
1 1 0 0 C+G
1 0 1 1 A+F
1 0 1 0 A+H
1 0 0 1 C+F
1 0 0 0 C+H
0 1 1 1 B+E
0 1 1 0 B+G
0 1 0 1 D+E
0 1 0 0 D+G
0 0 1 1 B+F
0 0 1 0 B+H
0 0 0 1 D+F
0 0 0 0 D+H
1=Present, 0=Not Present
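Table 5-5 factorises into two independent lookups: the predecessor pattern depends only on whether Event 1 and Event 2 have predecessors, and the successor pattern only on whether they have successors. A sketch of the selection, with illustrative names:

```python
# Predecessor pattern keyed by (Event 1 has predecessor, Event 2 has predecessor).
PRED_PATTERN = {(True, True): "A", (False, True): "B",
                (True, False): "C", (False, False): "D"}

# Successor pattern keyed by (Event 1 has successor, Event 2 has successor).
SUCC_PATTERN = {(True, True): "E", (False, True): "F",
                (True, False): "G", (False, False): "H"}

def select_pattern_pair(pred1, succ1, pred2, succ2):
    """Return the pattern pair of Table 5-5, e.g. ('C', 'F')."""
    return PRED_PATTERN[(pred1, pred2)], SUCC_PATTERN[(succ1, succ2)]
```

For the eTowSuccess example, Event 1 has a predecessor but no successor and Event 2 has a successor but no predecessor, so select_pattern_pair(True, False, False, True) yields the pair C+F, matching Figure 5-11.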
For a process definition or a process instance, all the identical events of all the graphs are
iteratively linked to construct a complete EPC graph. The algorithm for this construction
process is given in Listing 5-3. The EPC graphs (G1…n) are the atomic graphs that correspond to
the process tasks and are constructed as described in the previous section. The procedure
linkGraphs links the graphs one by one until a single graph is constructed. It uses the procedure
linkTwoGraphs to link only two graphs. The procedure linkTwoEvents uses the mapping
patterns (Figure 5-10) to modify the graphs by mapping identical events as described above. A
constructed EPC graph is shown in Figure 5-12.
Finally, to ensure the correctness of the constructed EPC graph, it is validated using ProM (as
indicated by the verify() function call), which has an EPC validator plugin [279, 280]. The
plugin uses a set of reduction rules and a Petri-Net-based analysis to determine the syntactical
correctness [281].
Listing 5-3. Algorithm to Link EPC Graphs
//1. Iteratively link assembled EPCs.
PROC linkGraphs(Graph [] {G1, G2, G3, … Gi, … Gn})
    Graph linkedGraph = G1;       //Initialise with the first graph of the array.
    FOR (INT i=2 TO n)
        linkedGraph = linkTwoGraphs(linkedGraph, Gi);
    ENDFOR
    IF (verify(linkedGraph))      //Check for correctness.
        RETURN linkedGraph;
    ELSE
        RETURN NULL;
    ENDIF
ENDPROC

//2. Link two graphs.
PROC linkTwoGraphs(Graph Gx, Graph Gy)
    Graph Gnew = new Graph();     //Create an empty graph.
    COPY all the functions, events, connectors and edges of Gx, Gy to Gnew;
    INT Ne = Number of events of the graph Gnew;
    FOR (INT i=0 TO Ne)
        Event eCur = Ei;          //Current event is the ith event.
        FOR (INT j=i+1 TO Ne)
            Event eTemp = Ej;     //Temporary event is the jth event.
            IF (eTemp.identifier == eCur.identifier) THEN
                //We have a match. Link the two events.
                Gnew = linkTwoEvents(eCur, eTemp, Gnew);
            ENDIF
        ENDFOR
    ENDFOR
    RETURN Gnew;
ENDPROC

//3. Link two events.
PROC linkTwoEvents(Event Event1, Event Event2, Graph Gmap)
    //Select a pair of mapping patterns as indicated in Table 5-5.
    //Link according to the mapping patterns introduced in Figure 5-10, modifying the graph Gmap.
    RETURN Gmap;
ENDPROC
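The linking loop can be sketched as follows. This Python sketch is illustrative: the dictionary graph representation and the injected link_two_events and verify callables are assumptions, standing in for the pattern-based merge of Figure 5-10 and the ProM validation respectively.

```python
# Sketch of Listing 5-3: fold the atomic graphs into one EPC graph.

def link_graphs(graphs, link_two_events, verify):
    """Iteratively link all graphs; return None if the result fails verification."""
    linked = graphs[0]
    for g in graphs[1:]:
        linked = link_two_graphs(linked, g, link_two_events)
    return linked if verify(linked) else None

def link_two_graphs(gx, gy, link_two_events):
    """Copy both graphs into a new one and merge every pair of identical events."""
    gnew = {"events": list(gx["events"]) + list(gy["events"]),
            "edges": list(gx["edges"]) + list(gy["edges"])}
    i = 0
    while i < len(gnew["events"]):
        j = i + 1
        while j < len(gnew["events"]):
            if gnew["events"][j] == gnew["events"][i]:
                # A match: apply the pattern pair selected from Table 5-5.
                gnew = link_two_events(gnew["events"][i], gnew["events"][j], gnew)
            j += 1
        i += 1
    return gnew
```

In the thesis, link_two_events applies the predecessor and successor patterns to Event 1 and Event 2, and verify corresponds to the ProM-based correctness check.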
Figure 5-12. A Dynamically Constructed EPC Graph for pdSilv Process
5.6 Summary
In this chapter, we have presented the key design details of the Serendip runtime. The
motivation for designing such a runtime is to provide the required runtime adaptability for a
service orchestration defined using the Serendip concepts.
We first presented the expectations for an adaptable service orchestration runtime. Then,
the core components of the Serendip runtime were introduced in order to fulfil these expectations.
Later, the main mechanisms of the runtime were explained. These include how a Serendip
process progresses, how events are triggered based on service relationships, how data are
synthesised and synchronised, and how process graphs are constructed. These mechanisms
are designed with the motive of supporting the adaptability required to realise Serendip
processes. The publish-subscribe mechanism supports late changes of task dependencies.
Subscribers such as task instances can make late local modifications to their event patterns to
allow ad-hoc deviations. The progression of processes is based on events that are triggered by
dynamically changing contracts. The design of the roles as well as the message transformation
process addresses a number of challenges, including message ordering and process
correlation issues. The dynamic process graph generation provides an up-to-date graph-based
representation for visualisation as well as validation.
Chapter 6
Adaptation Management
Modern organisations are required to respond to changing business requirements and
exceptional situations more effectively and efficiently than in the past [282]. Process-Aware
Information Systems (PAIS) [283] facilitate this requirement to a certain extent by leveraging
IT support to design and redesign business processes. Unexpected changes to requirements
emerge when the business processes are already instantiated or in operation. Providing business
process support for well-identified exceptional situations is somewhat less challenging, since
such situations can be captured in the initial process design. Alternative process flows may be
pre-included in the process definitions, which is known as flexibility-by-design in the literature [63].
However, unexpected exceptional situations need to be handled at runtime, while the
system is in operation and the processes are running. This is more difficult than applying
changes to static process definitions.
Two types of changes that need to be supported by a PAIS are identified by van der Aalst [55,
84, 141, 252].
1. Ad-hoc Changes: These changes are carried out on a case-by-case basis. Usually,
they are applied to provide a case or process instance specific solution.
2. Evolutionary Changes: These changes are carried out in a broader and relatively
more permanent manner. All future cases or process instances reflect the changes.
Heinl et al. [59], in their terminology, identify these two categories of changes as Instance
Adaptation and Type Adaptation respectively. In addition, a few other works have also
highlighted the difference between, and the importance of supporting, these two types of changes in a
PAIS [110, 169, 284, 285]. It follows that a service orchestration needs to be designed in a manner
that provides support for both ad-hoc and evolutionary changes.
According to Regev et al. [76], flexibility is the ability to continuously balance between
change and stability without losing identity. From this perspective, changes need to be
carried out in a manner that does not lead to erroneous situations. The composition, both
prior to and after the adaptations, needs to be stable and ensure that the expected goals are achieved.
This is applicable to both evolutionary and ad-hoc changes: even an ad-hoc change
should not violate organisational goals.
Adapting running process instances is more challenging than adapting static process
definitions. Running process instances can be in different states at a given time. Adaptations
on running process instances need to be carried out only when the instances are in suitable
states. Which states are suitable also varies depending on the kind of adaptation (i.e., what is
adapted).
For the above reasons, runtime changes to service orchestrations need to be carefully carried
out. Proper management (IT) support is essential. We refer to this support as adaptation
management and an IT system that provides such support as an adaptation management system.
This chapter presents how the adaptation management system of the Serendip Framework is
designed. This work adopts and further extends the ROAD architecture [81] to manage the
adaptations in a service orchestration. The ROAD architecture provides guidelines for
architecting and managing a software composite structure, including changes to structural
aspects of the composite such as contracts, roles and players, and has been extensively
discussed in previous work. The concerns associated with managing such changes to the composite
structure are beyond the scope of this thesis; we refer the reader to [82, 83] for more details. This
chapter focuses only on managing process adaptations in a service composition.
Firstly, a detailed discussion of process management and the different adaptation phases is
presented. Secondly, the basic concepts behind the design of an adaptation management system and
the potential challenges are discussed. Finally, a detailed description of the adaptation
management system that addresses these challenges is given.
6.1 Overview of Process Management and Adaptation
In order to understand the adaptation management in the context of service orchestration, it is
necessary to understand the lifecycle of BPM and its phases. Section 6.1.1 provides some
previously presented views of the BPM lifecycle and its phases. Section 6.1.2 further refines these
views to provide a more detailed and expanded understanding of process management and
adaptation.
6.1.1 Process Modelling Lifecycles
Weber et al. [286] presented a traditional business process lifecycle of a PAIS, consisting of
four phases, as shown in Figure 6-1(a). During the design phase the high-level business
requirements are designed as a process or a workflow. During the modelling phase the design is
translated into an executable model. Then the model is executed during the execute phase.
Finally, the executed processes are monitored at the monitor phase. The feedback may be used
to redesign the processes and the cycle continues.
A similar view was taken by van der Aalst et al. [7]. As shown in Figure 6-1 (b), the BPM
(Business Process Management) Lifecycle has four stages, process design, system
configuration, process enactment and diagnosis. Initially the processes are designed in the
process design phase. During the system configuration phase these processes are implemented.
Then the implemented processes are enacted using the configured system at the process
enactment phase. These enacted processes are analysed to identify possible improvements in the
diagnosis phase. This view is an extension of the workflow management stages, which did not
contain a phase for diagnosis.
Although the above two views of the process lifecycle differ slightly in the stages or phases
they define, both highlight two important aspects of IT process support. First, IT process
support is always subject to iterative refinement: existing process executions or
enactments provide feedback that shapes future designs of the process. Second, both
views assume that process adaptation (refinement/redesign) is a management activity or
concern. Nonetheless, the above views have the following limitations.
1. There is no clear distinction between process definition adaptation and
process instance adaptation in the above views [55, 110]. The views focus
mainly on process definition adaptation. In an operational environment, the
same process definition can be instantiated as different process instances that may
require individual and different adaptations. Such adaptations are not a complete
redesign of the process concerned and are carried out within the scope of a process
instance.
2. The above views assume that adaptation at the process definition level is a design-time
activity. There is no representation for runtime adaptations at the process definition
level. This is understandable, as traditional BPM systems require an offline
(re)design of the definitions/schemas before they are (re)deployed in an enactment
engine. In contrast, contemporary systems require adapting the process definitions
or schemas while the system is up and running, without complete re-deployment.
Such a capability is essential for modern applications functioning in a competitive
business environment, where a temporary shutdown of an orchestration engine can be
costly to the business, if not catastrophic.
Figure 6-1. (a) Process Lifecycle, Cf. [286]. (b) BPM Lifecycle, Cf. [7]
6.1.2 Adaptation Phases
This thesis presents a more detailed view of process management and adaptation support than
the views presented above. As illustrated in Figure 6-2, this view also assumes that process
refinement is an iterative activity, and that process design/redesign is a management activity
based on the feedback provided by monitoring the process execution. As such,
this view does not contradict the previously presented views in Figure 6-1. However, it
clearly distinguishes process definition adaptation from process instance
adaptation, addressing the first limitation. It also shows that adaptations on process
definitions are no longer a design-time-only activity but can also be a runtime activity,
addressing the second limitation. These two explicit representations are vital to understanding the
phases of adaptation management in an adaptive service orchestration.
Figure 6-2. Process Adaptation Phases
As shown in Figure 6-2, process adaptation has three phases between four milestones. The
four milestones are marked by the activities of model, deploy, enact and terminate
(shown by vertical bars in the figure). Between these four milestones there are three phases, known as
designed (#1), deployed (#2) and enacted (#3).
[Figure 6-2 content: a timeline with the milestones model, deploy, enact and terminate separating the designed, deployed and enacted phases; re-design occurs at the process definition level at design time, evolutionary change at the process definition level at runtime, and ad-hoc change at the process instance level at runtime; monitoring feeds back into all three; models and definitions may be discarded and instances terminated.]
An understanding of these three phases and
milestones is necessary to determine the type of adaptation that should be carried out in a
service orchestration. Let us clarify these phases and milestones in detail.
At the beginning, an initial design is drafted to reflect the business requirements. This
involves business analysts and software engineers. It should be noted that an IT process design
is only a close approximation of the real-world business requirements. Therefore, the
design is subject to numerous revisions until it is accepted as "close enough". This
forms phase 1, the designed phase. Once a process definition is designed, it is
deployed on a suitable platform, e.g., an orchestration engine. Upon deployment, many process
instances can be enacted from the same definition. This is phase 2, the deployed phase.
The enacted business process instances are then executed during phase 3, the enacted
phase. These instances are individually monitored and adapted to instance-specific business
requirements and unexpected exceptions until they terminate.
Monitoring is a continuous activity carried out during phase 3. It
provides the feedback to carry out the three different types of adaptations across all three phases.
Therefore, unlike the views presented in Figure 6-1, this view (Figure 6-2) does not consider
monitoring a separate phase. Instead, monitoring is an activity
carried out during the enacted phase by observing how running processes behave. Business
process monitoring and change mining/analysis are important aspects of process adaptation,
and much work has been done in this area [284, 287]. As shown, the output from monitoring
closes the feedback loop in three different locations.
During phase 3, process instances are subject to ad-hoc changes [55, 84, 141] that
deviate from the original definition. These changes are not permanent and affect only a
particular process instance. On the other hand, if the outcome demands a common and system-wide
adaptation, then evolutionary changes are carried out on the loaded definitions or their
internal representation (models maintained at runtime). Such changes affect future process
instances but do not require a re-deployment of the service orchestration descriptions. The loaded
runtime process models that represent the process definitions can be dynamically modified to
reflect the changes in phase 2. However, in the worst case, a complete
system/process redesign is required, and the system's ability to handle such changes may be
beyond the change capabilities offered at phase 2. Such drastic changes need to be
carried out during phase 1.
This view shows that process definition level change is no longer exclusively a design-time
activity. Runtime process changes are often equated with process instance level
changes; in modern PAIS, the two are not identical. During runtime, both
process definitions and instances can be changed. The system design should support performing
such changes on-the-fly with minimal or no interruption to the system's ongoing operation.
Therefore, of the adaptations in the three phases, this thesis focuses on the adaptations in
phase 2 and phase 3, i.e., evolutionary and ad-hoc adaptations at runtime respectively. Design-time
modifications at phase 1 are beyond the scope of this thesis.
6.2 Adaptation Management
The adaptations of a service orchestration need to be managed in order to ensure that the
objectives of the service providers, the aggregator and the consumers are achieved. When there is a change
in the way the services collaborate, the service orchestration needs to be adapted.
Nonetheless, adaptations cannot be carried out at will; they should be carefully
managed. Therefore, the middleware for service orchestration should provide mechanisms to
manage the adaptations. Based on the overview provided in the previous section, this section
describes how adaptations are managed in the Serendip Framework.
6.2.1 Functional and Management Systems
Human organisations such as hospitals or universities consist of both functional and
management systems. The functional system continuously performs business functions, while
the management system manages the behaviour of the functional system. To elaborate, for
example, treating patients is defined in the functional system of a hospital, which includes
functionalities such as diagnosis, treatment and keeping medical records. The functional system
also defines a number of roles such as doctors, nurses and clerks to perform these
functions/duties. On the other hand, the management system (the authority) of a hospital
continuously manages the functional system. Concerns such as what roles should exist, who
should play these roles and how roles should interact with each other are determined by the
management system.
Adopting such a managed system’s viewpoint, a Serendip service orchestration also includes
both functional and management systems. A service orchestration deployed in an orchestration
engine serves the incoming requests. In order to serve the requests, a number of roles are
defined. The service orchestration uses external partner services to play the functional roles. The
service orchestration defines when and how to invoke these partner services. On the other hand,
the management system decides when and how to reconfigure and regulate the service
orchestration to optimise its operations. Concerns such as what roles should exist, how the
service relationships are changed, what process definitions are modified and how a particular
process instance should deviate are decisions that the management system takes. Both
functional and management systems run in parallel.
It is important that the functional and management concerns are separated [48]. Serendip
follows this separation of functional and management systems by adopting the concept of the
Organiser.
6.2.2 The Organiser
The Role Oriented Adaptive Design (ROAD) approach separates the management concerns from the
functional concerns in a service composite via a special role called the organiser. The
organiser reconfigures and regulates its composite [83]. An organiser player manages the
composite via the organiser role and can be either a human or an intelligent software system,
which is automated or semi-automated. The organiser player (the management system) should
be separated from the service orchestration (the functional system). For a simple service
orchestration, the organiser player can be a human monitoring the processes and performing the
adaptations. For a more sophisticated service orchestration that requires machine intelligence,
the organiser player can be a complex software system with or without human
involvement.
The work presented in this thesis uses the organiser concept [82, 83, 288] and further
extends it to support process monitoring and adaptation. In this work, the organiser not only
reconfigures and regulates the structure but also changes the behaviour, or processes, of the
organisation. Behaviour units can be dynamically added, removed or changed to reflect
changing business requirements. Such modifications do not require any pre-configured tunable
points, as is the case with many existing alternatives [67, 68, 149]. We believe that such
pre-configured tunable points are not flexible enough in dynamic operating environments, as they
require foreseeing the changing business requirements. The ability to change without pre-configured
tunable points helps the decision-making entity (a software system or a human) to
better reflect changes in business requirements in the IT process support.
In the existing ROAD framework, a set of operations is provided via the organiser role and
exposed to the organiser player to reconfigure and regulate the service composite
structure. The organiser player can add/remove/modify the contracts/roles/player-bindings of the
composite via the organiser role interface. This thesis extends the operations of the
organiser role to support process-level modifications. That is, the organiser can
change the process definitions as well as the process instances of the organisation via this
extended set of operations. For example, updatePropertyOfTask(String pid, String taskId,
String property, String value) is an operation that performs an atomic adaptation to change a
property, e.g., the pre-conditions, of a task. The complete list of operations is given in Appendix C.
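The shape of such an operation can be sketched as follows. This Python sketch is purely illustrative: the class, the model layout and the error handling are assumptions; only updatePropertyOfTask is named in the text, and the actual operations are listed in Appendix C.

```python
# Illustrative sketch of operation-based adaptation via the organiser interface.

class Organiser:
    def __init__(self, process_instances):
        # Assumed model layout: pid -> {taskId -> {property -> value}}
        self.process_instances = process_instances

    def update_property_of_task(self, pid, task_id, prop, value):
        """Atomically change one property (e.g. a pre-event pattern) of a task."""
        tasks = self.process_instances[pid]
        if task_id not in tasks:
            raise KeyError("no task %s in process %s" % (task_id, pid))
        tasks[task_id][prop] = value
        return True
```

For example, an organiser player could change the pre-event pattern of the tTow task of one process instance without touching any other instance, which is exactly the granularity an atomic operation provides.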
Based on our experience, process-level adaptations can be more challenging than
structural adaptations. Running process instances have relatively short lifetimes compared to
the composite structure and can change their states quickly. The adaptations need to be carried
out systematically to ensure properties such as atomicity, consistency, isolation and durability
(the ACID properties) [289]. Achieving such sound adaptations requires an Adaptation Engine in
the runtime framework.
6.2.3 The Adaptation Engine
In Section 5.1.2 we briefly introduced the Adaptation Engine, which is a core component of the
Serendip Framework that provides adaptation support systematically and reliably. Each
deployed and instantiated composite will have its own instance of an adaptation engine, which
is running in parallel to the enactment engine.
As shown in Figure 6-3, the adaptation engine belongs to the management system while the
enactment engine and the model provider factory belong to the functional system. The
management decisions taken by the organiser player are realised in the functional system via the
adaptation engine. If the decision is not safe to be carried out, the adaptation engine will reject
the adaptations and provide possible reasons. For example, if the organiser player decides to
make a process instance deviate from the original process definition, that decision must get the
consent of the adaptation engine. If the deviation is not safe, e.g., due to a violation of a
behaviour constraint, then the deviation is rejected. In addition, the adaptation engine provides
operations for monitoring the functional system.
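The accept/reject behaviour described above can be sketched as a simple gate. This Python sketch is our illustration, not the Serendip implementation: the validation predicates, the trial-on-a-copy strategy and the result shape are assumptions.

```python
import copy

def apply_adaptation(model, adaptation, constraints):
    """Trial an adaptation on a copy of the model; reject it with a reason
    if any constraint is violated, otherwise accept the changed model."""
    candidate = copy.deepcopy(model)
    adaptation(candidate)                 # realise the management decision on the copy
    for name, check in constraints.items():
        if not check(candidate):
            return model, ("rejected", name)   # keep the original model untouched
    return candidate, ("accepted", None)
```

The key point mirrored from the text is that a rejected decision leaves the running functional system unchanged, and the rejection carries a reason back to the organiser player.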
Figure 6-3. Separation of Management and Functional Systems
[Figure 6-3 content: within the composite, the organiser player issues adaptations through the organiser role to the adaptation engine (management system); the adaptation engine accepts or rejects them and provides monitoring, while the enactment engine and the models at the MPF form the functional system.]
One of the main reasons to place an adaptation engine between the organiser player and the
functional system is to address a number of process adaptation management challenges that can
occur during runtime (see below). Just by providing a set of operations via the organiser
interface does not address these challenges. In practice, when a human is the organiser player,
he/she may not be able to determine the safe states (Section 6.5) of running process instances in
which to carry out the adaptations. Furthermore, if the organiser player is remotely located over a
network, network delays can render an adaptation decision invalid due to process state
changes during the delay.
6.2.4 Challenges
There are a number of adaptation challenges that should be addressed by the design of the
Adaptation Engine. They are:
1. The organiser and the service orchestration can be decentralised. Therefore, there
should be a mechanism to adapt the service orchestration in a decentralised manner.
2. A single logical adaptation may involve multiple adaptation steps. Therefore, there
should be a mechanism to carry out multiple steps of an adaptation in a single
transaction. This is important for both efficiency and for avoiding false alarms due to
process validation carried out in inconsistent states.
3. Manually performing adaptations on running process instances that execute
quickly can be difficult. A mechanism is required in the adaptation engine to
schedule and automate future adaptations.
4. Adaptations can be frequent, and manual validation can be inefficient. Therefore, it is
necessary to automate the validation of adaptations. In addition, the adaptations need
to be carried out in a way that ensures the integrity of process definitions and the
underlying organisational behaviours. The interlinked nature of process definitions
and organisational behaviours needs to be taken into consideration while performing
the adaptations.
5. For phase 3 adaptations, i.e., the runtime adaptations on a process instance, the state
of the running process instance needs to be considered. The validity of an adaptation
of a process instance not only depends on the defined constraints but also on the state
in which the process instance is currently in.
In order to address the first challenge, the work of this thesis extends the operation-based
adaptation mechanism of the existing ROAD framework to achieve the adaptation support for
processes. The details are discussed in Section 6.3.1.
In order to address the second challenge, this thesis proposes a batch mode adaptation
mechanism on top of the existing operation-based adaptation. In batch mode adaptation, a
number of atomic operations are carried out in a single logical transaction prior to process
validation. The batch mode adaptation is described in Section 6.3.2. The adaptation scripts that
support batch-mode adaptation are described in Section 6.3.3.
In order to address the third challenge, a script scheduling mechanism that improves the
scheduling capabilities of the adaptation mechanism is introduced. The script scheduling is
described in Section 6.3.4 and the complete adaptation mechanism is described in Section 6.3.5.
In order to address the fourth challenge, an automated process validation mechanism is
introduced in Section 6.4. This mechanism uses the two-tier constraints described in Section
4.4. Such validation is applicable for both process instances and process definitions.
Finally, the fifth challenge is addressed by performing state checks before adaptations, as
described in Section 6.5. A state check ensures that a process instance is in a stable state in
which the adaptations can be carried out.
6.3 Adaptations
Adaptations on a service orchestration are carried out at runtime, on process definitions
and on process instances, i.e., at phase 2 and phase 3 as introduced in Section 6.1.2. Serendip
supports two main modes of adaptation, operation-based adaptations and batch mode
adaptations, which are discussed in detail below.
6.3.1 Operation-based Adaptations
Adaptations of a ROAD composite are carried out via the exposed operations of the organiser.
For example, if a role needs to be added, the addRole() operation of the organiser role is
invoked by the organiser player.
This thesis extends the available set of operations of the organiser role in order to perform the
adaptations on the newly introduced process layers. An overview of the types of operations
available in the organiser interface is given in Figure 6-4. Firstly, all the operations can be divided
into monitoring operations and adaptation operations. The monitoring operations allow
monitoring the organisation whilst the adaptation operations allow adapting it.
The adaptation operations can be further divided into two sets, i.e., operations that adapt the
structure and operations that adapt the processes of the organisation. While we use the structure
adaptation operations available in the ROAD framework, we introduce new operations to
perform process adaptations. Depending on whether the adaptations are carried out on process
definitions (phase 2) or process instances (phase 3), the process adaptation operations can be
further divided into two sets, i.e., instance-level adaptation operations and definition-level
adaptation operations. For example, addTaskToProcessInst() is an adaptation operation that performs an
instance-level adaptation, whilst addBehaviorRefToProcessDef() is an adaptation operation that
performs a definition-level adaptation. The full list of operations of the organiser role is available in
Appendix C.
Figure 6-4. Types of Operations in Organiser Interface
These operations allow a decentralised organiser player to manipulate the service
orchestration. A suitable deployment technology such as Web services can be used to expose
these operations of the organiser role as Web service operations, which are further discussed in
Section 7.2. Each adaptation leads to an integrity check. Depending on the outcome of the
integrity check, feedback is sent to the organiser player, e.g., as a Web service message.
In some cases, multiple operations are required to complete a process adaptation and to bring
the service orchestration from one valid state to another. The operation-based adaptations alone
cannot fulfil this requirement. Consequently, a batch mode adaptation is introduced.
6.3.2 Batch Mode Adaptations
Figure 6-5 shows the fundamental difference between operation-based adaptation and batch
mode adaptation. After a single transaction (invocation of adaptation operation(s)), the
adaptation engine performs an automated validation to ensure the integrity of the orchestration
(Section 6.4). Depending on the validation result, the change is either accepted or rejected. As shown in
Figure 6-5 (a) the operation-based adaptation performs validations after each adaptation
operation (i.e., each operation is treated as a transaction). In contrast, a batch mode adaptation as
shown in Figure 6-5 (b) carries out a number of adaptation operations before an integrity check.
Furthermore, the adaptation is carried out in a single transaction, which is efficient when the
organiser player and the organisation are decentralised in a networked environment.
Figure 6-5. (a) Operations-based Adaptation (b) Batch Mode Adaptation
The primary use of batch mode adaptation is to avoid false alarms, which are a practical
problem with operation-based adaptations. Some change requirements need a number of
adaptations to bring the orchestration from one valid state to another. For example, consider an
adaptation that imposes a new event dependency between the two tasks tTowPay and
tRecordCreditPay. When task tRecordCreditPay subscribes to event eTTPaidByCredit, task
tTowPay must also be modified to trigger this event. This requires a number of atomic
operations. If the validation were carried out after a single atomic operation, e.g., modifying the
pre-event pattern of tRecordCreditPay, it would trigger a false alarm, because eTTPaidByCredit
would not yet be triggered by any task.
The batch mode adaptation ensures the ACID properties in adaptation transactions, similar to
transactions in database systems [289]. The ACID properties (atomicity, consistency, isolation,
durability) are well-known for their ability to guarantee a safe transaction which is a single
logical unit of operations on a database. Similarly, a batch adaptation is a single logical unit of
adaptations in a Serendip composite, and it may consist of a number of atomic adaptation
actions (Section 6.3.5). A failure in a single command of a batch adaptation results in a
failure of the complete transaction (atomicity). A batch adaptation should bring the service
orchestration from one valid state to another valid state (consistency). No two batch adaptations
should interfere with each other during the execution (isolation). The adaptation engine ensures
that these adaptations are carried out in sequence to support the isolation property. Once a batch
adaptation is carried out, it will remain committed in the service orchestration either at the
process definition level or instance level (durability).
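As a sketch, the transactional behaviour described above can be captured as follows. The names (AtomicAction, run_batch) and the dictionary-based process model are illustrative, not the framework's actual API:

```python
import copy

class AtomicAction:
    """One adaptation command, e.g. updating a task property."""
    def __init__(self, target, prop, value):
        self.target, self.prop, self.value = target, prop, value

    def apply(self, model):
        model.setdefault(self.target, {})[self.prop] = self.value

def run_batch(model, actions, validate):
    """Apply all actions as one logical transaction: validate once, after
    every action has been applied; restore the backup on any failure."""
    backup = copy.deepcopy(model)           # enables rollback (atomicity)
    try:
        for action in actions:
            action.apply(model)
        if not validate(model):             # single integrity check (consistency)
            raise ValueError("validation failed")
        return True                         # change committed (durability)
    except Exception:
        model.clear()
        model.update(backup)                # reject: restore last valid state
        return False
```

With a validation rule requiring that a subscribed event is also triggered, validating after only the first of the two commands would raise a false alarm, whereas validating after the whole batch succeeds.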
In order to support batch mode adaptations, the work of this thesis uses adaptation scripts.
The adaptation scripts can be executed via the organiser operation executeScript(). A script can
be used to order a number of atomic adaptation operations in a single transaction. This allows a
number of adaptation operations to be performed prior to an integrity check. The next section
introduces adaptation scripts.
6.3.3 Adaptation Scripts
In order to support the batch mode adaptation, the Serendip Framework provides a scripting
language (Serendip Adaptation Scripts) to issue a number of adaptation commands in a single
transaction. Appendix D presents the syntax and details of the language.
An adaptation script specifies a number of commands scoped into possibly multiple blocks. A
script corresponds to a single transaction, i.e., all the commands specified in the script (enclosed
by blocks) need to be committed in a single transaction. A failure of a single command is a
failure of the complete script. A block is used to define a scope for a set of commands. A typical
use of a block is to define the adaptation commands within a scope of a single process instance.
An example adaptation script is given in Listing 6-1.
Listing 6-1. Sample Adaptation Script
INST:pdGold033 {
  updateTaskOfProcessInst tid="tRecordCreditPay" property="preEP" value="eTTPaidByCredit";
  updateTaskOfProcessInst tid="tTowPay" property="postEP" value="eTTPaid * (eTTPaidByCredit ^ eTTPaidByCash)";
}
The above script specifies modifications to the process instance (INST) pdGold033 to make it
deviate from the original definition, i.e. pdGold. The objective of this is to add a new
dependency between two tasks via an event, which was not available earlier. The first script
command changes the pre-event pattern (preEP) of task tRecordCreditPay to insert the event
eTTPaidByCredit. Then another script command changes the post-event pattern (postEP) of task
tTowPay to add the dependency on eTTPaidByCredit. Although executed via two
commands, such a deviation is a single logical transaction. An integrity check after the first
command would have triggered a failure. Therefore, such changes need to be carried out in a
single transaction in order to bring the service orchestration into a state where a validation can
be carried out without false alarms. Section 8.4.2 contains further examples from the RoSAS
case study to demonstrate the capability of batch mode adaptations.
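A minimal parser for this block-scoped command syntax might look as follows; the grammar is inferred from Listing 6-1 alone, not from the full syntax in Appendix D:

```python
import re

def parse_script(text):
    """Parse 'SCOPE:id { command key="value" ...; ... }' blocks into a
    flat list of atomic adaptation actions."""
    actions = []
    for scope, target, body in re.findall(r'(\w+):(\w+)\s*\{(.*?)\}', text, re.S):
        for cmd in (c.strip() for c in body.split(';')):
            if not cmd:
                continue
            parts = cmd.split(None, 1)   # command name, then key="value" pairs
            args = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"',
                                   parts[1] if len(parts) > 1 else ''))
            actions.append({'scope': scope, 'target': target,
                            'command': parts[0], 'args': args})
    return actions

script = '''INST:pdGold033 {
  updateTaskOfProcessInst tid="tRecordCreditPay" property="preEP" value="eTTPaidByCredit";
  updateTaskOfProcessInst tid="tTowPay" property="postEP" value="eTTPaid * (eTTPaidByCredit ^ eTTPaidByCash)";
}'''
actions = parse_script(script)
```

The two commands of Listing 6-1 yield two action dictionaries scoped to the same process instance, which can then be handed to the adaptation engine as one transaction.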
6.3.4 Scheduling Adaptations
The correctness of an adaptation depends not only on how the adaptation is carried out but also
when the adaptation is carried out. Manually detecting and applying adaptations may not be
practical for process instances that have a short lifespan or fast execution. Even for a long
running or slow process instance, identifying and waiting for the exact conditions to perform
adaptations can be inefficient. When the decision making entity (the organiser) and the system
(the service orchestration) are decentralised, network delays may delay the execution of
adaptations, making them invalid. The Adaptation Scripts only specify how the adaptation is
carried out. Therefore, apart from the capability of performing adaptations on-the-fly, there
should be a way to schedule the adaptations in advance. This section clarifies the adaptation
scheduling mechanism.
Scheduling adaptations is possible when the adaptation procedure, as well as the condition
under which the adaptation should be carried out, is known in advance. Much work has been done in the
past for scheduling adaptations or allowing pre-programmed adaptations [84, 290, 291]. Similar
to these approaches, Serendip allows scheduling adaptations in its service orchestration. The
scheduling mechanism in the Serendip Framework uses adaptation scripts to schedule the
adaptations. The operation scheduleScript() of the organiser interface allows scheduling an
adaptation based on a particular condition.
scheduleScript(<script_file>,<condition_type=[EventPattern|Clock]>, <condition_val>);
Here, script_file is the name of the file that contains the script. The condition_type is the type
of the condition, i.e., an event pattern or a clock event. The condition_val is the value of the
condition, i.e., an event pattern or a specific time of the clock.
For example, consider the scenario where a new task tNotifyGarageOfTowFailure needs to be
dynamically introduced for a running process instance pdGold034 if the towing fails. This
is not captured in the original process definition pdGold. The organiser can schedule an
adaptation script (script1.sas), which will be activated when the eTowFailed event has been
triggered for pdGold034, instead of continuously monitoring the system.
scheduleScript (“script1.sas”, “EventPattern”, “pdGold034: eTowFailed”);
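The condition-matching behaviour of the scheduler can be sketched as follows, assuming triggered events are reported to it as (instance, event) pairs; the class and callback names are illustrative:

```python
class AdaptationScriptScheduler:
    """Holds scheduled scripts and activates each one once its condition is
    met, e.g. when a matching event is triggered in the runtime."""
    def __init__(self, activate):
        self.pending = []         # (script_file, condition_type, condition_val)
        self.activate = activate  # callback into the scripting engine

    def schedule_script(self, script_file, condition_type, condition_val):
        self.pending.append((script_file, condition_type, condition_val))

    def on_event(self, instance_id, event_id):
        """Notified by the event cloud whenever an event is triggered."""
        key = f"{instance_id}: {event_id}"
        still_pending = []
        for script, ctype, cval in self.pending:
            if ctype == "EventPattern" and cval == key:
                self.activate(script)   # condition met: hand over the script
            else:
                still_pending.append((script, ctype, cval))
        self.pending = still_pending

activated = []
scheduler = AdaptationScriptScheduler(activated.append)
scheduler.schedule_script("script1.sas", "EventPattern", "pdGold034: eTowFailed")
scheduler.on_event("pdGold034", "eTowSuccess")   # no match: stays scheduled
scheduler.on_event("pdGold034", "eTowFailed")    # match: script is activated
```

Clock conditions would be handled analogously, with a timer rather than an event listener firing the activation.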
The scheduling mechanism is not intended to provide self-adaptiveness for the framework. A
self-adaptive system is a closed-loop system that adjusts itself to changes during operation [260].
The loop involves a decision making step. In scheduling, the decision still needs to be taken by
someone other than the system itself, e.g., a human who is playing the organiser role. The
intention of the scheduling mechanism is only to facilitate the realisation of decisions that have
already been taken.
For this purpose, the Adaptation Engine has been extended with an Adaptation Script
Scheduler. The Adaptation Script Scheduler accepts the scheduled adaptation scripts and
continuously monitors the Serendip runtime to detect the suitable moment to perform the
adaptation. Once detected, the scheduled script is activated. The complete adaptation
mechanism, which includes both operation-based and batch-mode (scheduled or not)
adaptations, is described in detail in the next section.
6.3.5 Adaptation Mechanism
The objective of the adaptation engine is to assist the organiser player in carrying out
adaptations on the service orchestration in a systematic manner. The design needs to support
both operation-based and batch mode adaptations (scheduled or un-scheduled), ensuring
consistency between the two modes. In addition, the adaptation engine needs to operate as a
separate sub-system that addresses the management concerns, apart from the other sub-systems
(i.e., the Enactment Engine and MPF) that address the functional concerns.
Figure 6-6 shows the adaptation mechanism, where all the steps are numbered and explained
below. The main components of the architecture are the Organiser Role, the Adaptation Engine,
the Adaptation Scripting Engine, the Adaptation Script Scheduler, and the Validation Module.
Figure 6-6. The Runtime Architecture of Adaptation Mechanism
Organiser Role: Acts as a proxy for an organiser player to reconfigure and regulate
the service orchestration as discussed in Section 6.2.2.
Adaptation Engine: Provides process adaptation support in a timely, consistent and
accurate manner as discussed in Section 6.2.3. It accepts atomic adaptation actions, which are
single-step adaptations in the functional system corresponding to an adaptation
command, e.g., updateTaskOfProcessInst.
Adaptation Scripting Engine: Interprets and schedules adaptation scripts, supports
batch mode adaptations, and uses a script parser to parse the scripts.
Adaptation Script Scheduler: Schedules adaptation scripts, and listens to the Serendip
runtime to identify when to issue the scheduled script.
Validation Module: Performs validations on adaptations, as discussed in Section
6.4.
The organiser player can use the organiser role interface to perform either operation-based or
batch mode adaptations (#1). The operation-based adaptations are directly issued to the
adaptation engine (#2). In contrast, the batch mode adaptation has to go through the Adaptation
Scripting Engine (#3).
The Adaptation Scripting Engine can either schedule the script (#4) or interpret the script
immediately (#8) depending on when the script should be executed. Scheduling scripts is done
via the Adaptation Script Scheduler. If there is at least one scheduled script, the scheduler
continuously listens (#5) to the triggered events in the enactment environment in order to
identify when to activate the scripts (#6). The Adaptation Script Scheduler subscribes to the
Event Cloud which maintains the triggered events. The Adaptation Scripting Engine parses (#7)
and interprets the adaptation scripts to create an array of atomic adaptation actions (#8). These
atomic adaptation actions are understood by the Adaptation Engine.
Irrespective of whether the adaptation actions are from operations or from batch mode, the
adaptation engine understands and executes atomic adaptation actions. This arrangement
provides a unified and consistent interface to carry out adaptations. The adaptation engine
carries out adaptation requests sequentially in a single thread, in order to ensure that no two
adaptations would interfere with each other (isolation).
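The single-threaded execution that guarantees isolation can be sketched as a worker draining a FIFO queue of adaptation requests; class and attribute names are illustrative:

```python
import queue
import threading

class SequentialAdaptationEngine:
    """Executes adaptation requests strictly one at a time, so no two
    batches can interleave (isolation)."""
    def __init__(self):
        self._queue = queue.Queue()
        self.log = []   # records (batch name, action) in execution order
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, name, actions):
        """Enqueue a batch; returns an event that is set on completion."""
        done = threading.Event()
        self._queue.put((name, actions, done))
        return done

    def _run(self):
        while True:
            name, actions, done = self._queue.get()
            for action in actions:       # whole batch runs before the next request
                self.log.append((name, action))
            done.set()

engine = SequentialAdaptationEngine()
d1 = engine.submit("batch1", ["a1", "a2"])
d2 = engine.submit("batch2", ["a3"])
d1.wait(5)
d2.wait(5)
```

Because the queue is FIFO and there is only one worker, the entries of batch1 always precede those of batch2 in the log.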
Prior to the adaptation, suitable backups are made to ensure that rollbacks are possible upon
failure (atomicity). For example, if a process instance needs to be deviated, the models and the
instance status are backed up. If the adaptation on the instance fails, the backed up process
instance is restored. For phase 3 adaptations, the engine also needs to perform a state check
(Section 6.5). Upon these pre-checks, the adaptations are carried out (#9). If the adaptation is a
phase-2 adaptation, the models corresponding to the definitions maintained in the Model
Provider Factory (MPF) are changed. If the adaptation is a phase-3 adaptation, the process
instance is first paused (to be resumed after the adaptations) and then the models corresponding
to the process instances in the MPF are adapted.
The adaptation engine performs a validation (#10) to identify the impact of change (Section
6.4). This ensures that adaptation only takes the service orchestration from one valid state to
another (consistency). If the adaptation is successful, the change is committed (durability).
Otherwise, the change is rejected and the backup is restored.
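The pause/backup/validate/restore flow for a phase-3 adaptation can be sketched as follows, with a flat dictionary standing in for the instance models; the function names are illustrative:

```python
def adapt_instance(instance, actions, state_check, validate):
    """Phase-3 adaptation flow: pause, back up, state-check, apply the
    actions, validate, then commit or restore, and finally resume."""
    was_active = instance.get('state') == 'Active'
    instance['state'] = 'Paused'     # outgoing invocations are delayed meanwhile
    backup = dict(instance)          # shallow copy suffices for this flat model
    ok = all(state_check(instance, a) for a in actions)
    if ok:
        for apply_action in actions:
            apply_action(instance)
        ok = validate(instance)
    if not ok:
        instance.clear()
        instance.update(backup)      # rejected: restore the backup
    if was_active:
        instance['state'] = 'Active' # resume after the transaction
    return ok
```

A real implementation would also buffer incoming messages and deep-copy the instance models; the sketch only shows the ordering of the steps.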
Apart from the adaptation commands, monitoring commands are accepted by the
adaptation engine and executed by reading the Serendip runtime (#11, #12 and #13). These
monitoring commands, e.g., getCurrentStatusOfProcessInst, let the organiser monitor the
Serendip runtime. Table 6-1 below summarises the different adaptation modes and
corresponding paths of execution as shown in Figure 6-6.
Table 6-1. Adaptation Modes and Paths of Execution
Adaptation Mode              Path of Execution
Operations-based             1-2-9-10-11-12-13
Batch Mode (Non-Scheduled)   1-3-7-8-9-10-11-12-13
Batch Mode (Scheduled)       1-3-4-5-6-7-8-9-10-11-12-13
6.4 Automated Process Validation
McKinley et al. [292] have identified two major types of approaches to dynamic software
composition: tunable software and mutable software. Tunable software allows fine-tuning of
identified crosscutting concerns to suit changing environmental conditions. In contrast,
mutable software allows even the imperative function of an application to be changed
dynamically. However, they also state that, “While very powerful (mutable software), in
most cases the developer must constrain this flexibility to ensure the system’s integrity across
adaptations”. This is because tunable software naturally ensures integrity by explicitly
specifying and allowing what can be tuned. In contrast, mutable software needs an explicit
mechanism to ensure the system integrity.
Although the above categorisation has been made with respect to software systems in general,
it is also applicable to process adaptations. Approaches such as AO4BPEL [68], MoDAR
[149] and VxBPEL [67] allow the change of crosscutting concerns of a process in a tunable
manner. These approaches explicitly specify what can be tuned. In contrast, Serendip allows
changes to the service orchestration without any requirement for pre-identified tunable points.
While this provides a greater degree of flexibility, it comes with the risk of possible violations
to process integrity upon adaptations. Therefore, Serendip needs to explicitly provide a
mechanism to ensure the integrity of the service orchestration.
Serendip provides a two-tier constraint validation mechanism at the behaviour unit level and
the process level, as discussed in Section 4.4. The linkage between process definitions and
behaviour units is used to determine the minimal set of constraints to perform the validation.
In addition, the impacts of the changes can also be identified. This section further extends the
discussion in Section 4.4 to clarify how the Serendip runtime supports this concept.
6.4.1 Validation Mechanism
The adaptations upon a service orchestration can be frequent and the complexity of the
adaptations can be high. A manual validation can be time-consuming and error-prone.
Therefore, it is important that the continuous adaptations are supported by an automated and
formal change validation mechanism to protect the integrity of the service orchestration. To
formalise and automate the change validation, it is necessary that task dependencies and
constraints are mapped into formal specifications. The work of this thesis uses some of the work
done by van der Aalst et al. [246] and Gardey et al. [293] to conduct the formal mapping.
Figure 6-7 presents an overview of the change validation mechanism. Firstly, all the task
dependencies are mapped (Td) to a dependency specification. Secondly, the minimal set of
constraints (as identified in Section 4.4.2) are mapped (Tc) to a constraint specification. The
dependency specification is used by the constraint mapping process to ensure the
consistency between the two specifications. Details about the preparation of the dependency
specification and constraint specification are given in Section 6.4.2 and in Section 6.4.3
respectively. Finally, a model checker validates the dependency specification against the
constraint specification. The result indicates whether the validation is successful or not. If not
successful, the result identifies the violations that caused the failure.
Figure 6-7. Change Validation Mechanism
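The pipeline can be sketched as follows. The toy mappings below stand in for the real Td, Tc and TPN model checker: a "dependency specification" is reduced to the set of producible events and a "constraint" to an event that must be producible, whereas Serendip uses a Time-Petri-Net and TCTL:

```python
def validate_change(task_dependencies, constraints, map_td, map_tc, model_check):
    """Change validation pipeline: map task dependencies (Td) and the minimal
    constraint set (Tc) to formal specifications, then model-check them."""
    dep_spec = map_td(task_dependencies)
    con_spec = map_tc(constraints, dep_spec)   # Tc consults the dependency spec
    violations = model_check(dep_spec, con_spec)
    return len(violations) == 0, violations

# Toy stand-ins for the formal mappings and the model checker.
map_td = lambda tasks: {e for t in tasks for e in t['post_events']}
map_tc = lambda cs, dep_spec: list(cs)
model_check = lambda dep_spec, con_spec: [c for c in con_spec if c not in dep_spec]

ok, errs = validate_change([{'post_events': ['eTowSuccess', 'eTTPaid']}],
                           ['eTTPaid'], map_td, map_tc, model_check)
ok2, errs2 = validate_change([{'post_events': ['eTowSuccess']}],
                             ['eTTPaid'], map_td, map_tc, model_check)
```

The second call fails and reports the violated constraint, mirroring how the real model checker identifies the violations that caused a failed validation.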
6.4.2 Dependency Specification
The dependency specification is prepared as a Time-Petri-Net (TPN) [293, 294]. A Serendip
process definition possibly refers to a number of behaviour units. The task dependencies of all
the behaviour units of a process definition in its EPC representation are mapped to a Time-Petri-
Net. A time Petri-Net is a Petri-Net [80, 99] that associates a time/clock with each transition.
The mapping mechanism reuses the rules introduced in [246, 295]. An overview of the
mapping rules is given in Figure 6-8, originally published in [295].
Figure 6-8. Rules of EPC to Petri-Net Mapping, Cf. [295]
A sample mapping is shown in Figure 6-9. As shown, tasks are mapped to Petri-Net
transitions and events are mapped to Petri-Net places. Moreover, if a task is associated with a
performance property (e.g., time to execute), the corresponding transition is annotated
with the cost value. In addition, the connectors (XOR, AND, OR) can produce many
intermediary transitions and places, according to the transformation rules given above.
Figure 6-9. Preparing the Dependency Specification
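A simplified version of the Td mapping, ignoring the intermediary transitions and places generated for connectors, might look like this; the names and the dictionary shape are illustrative:

```python
def to_time_petri_net(tasks):
    """Map each task to a transition (annotated with its performance
    property as a time value) and each event to a place; a task's
    pre-events become input places and its post-events output places."""
    places, transitions = set(), []
    for t in tasks:
        places |= set(t['pre_events']) | set(t['post_events'])
        transitions.append({'name': t['name'],
                            'inputs': list(t['pre_events']),
                            'outputs': list(t['post_events']),
                            'time': t.get('pp', 0)})
    return {'places': sorted(places), 'transitions': transitions}

net = to_time_petri_net([
    {'name': 'tTow', 'pre_events': ['eTowReqd'],
     'post_events': ['eTowSuccess'], 'pp': 120},
    {'name': 'tPayTT', 'pre_events': ['eTowSuccess'],
     'post_events': ['eTTPaid']},
])
```

The shared place eTowSuccess is what encodes the dependency between the two tasks in the resulting net.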
6.4.3 Constraint Specification
The constraints can be specified at both the behaviour unit and process definition levels (Section
4.4). These constraints are specified in the CTL/TCTL [80] language, making the Tc mapping
straightforward compared to Td (see Figure 6-7). However, the mapping Tc requires observing
the generated dependency specification to ensure the consistency between the two
specifications. The event identifiers used in the constraint expressions are mapped to the
equivalent places of the generated Petri-Net.
Figure 6-10. Preparing the Corresponding Constraint Specification
To elaborate, suppose the constraint expression “(eTowSuccess>0)->(eTTPaid>0)” needs to
be mapped into the constraint specification in CTL/TCTL. The expression refers to two events
eTowSuccess and eTTPaid. Suppose the places of the generated Petri-Net corresponding to
these two events eTowSuccess and eTTPaid are P17 and P19, respectively. Then the constraint
specification will include the consistent TCTL expression as shown in Figure 6-10. Similarly,
all the relevant constraints (for the modifications) are mapped to prepare the constraint
specification.
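The identifier substitution behind Tc can be sketched with a simple rewrite; the function name is illustrative:

```python
import re

def map_constraint_to_tctl(expr, event_to_place):
    """Rewrite each event identifier in a constraint expression with the
    identifier of the corresponding place in the generated Petri-Net."""
    return re.sub(r'\be\w+\b',
                  lambda m: event_to_place.get(m.group(0), m.group(0)),
                  expr)

mapped = map_constraint_to_tctl('(eTowSuccess>0)->(eTTPaid>0)',
                                {'eTowSuccess': 'P17', 'eTTPaid': 'P19'})
# mapped == '(P17>0)->(P19>0)'
```

Reading the event-to-place mapping off the generated dependency specification is what keeps the two formal specifications consistent.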
Note that the validation scheme presented above is the default Petri-Net-based validation
available in the Serendip runtime. However, the validation module of the Serendip runtime is
extensible. When the business requires more domain-specific validation, custom validation
plugins can be added to the runtime. For example, the validation mechanism can be extended by
a business rules [176] plugin, which allows the specification and enforcement checking of
domain-specific business rules beyond the capabilities offered by the default Petri-Net-based
validation module. Furthermore, there are other validation techniques and tools available [212,
296] that can be easily plugged into the Serendip runtime to extend its domain specific
validation capabilities.
6.5 State Checks
The validity of phase 3 adaptations depends not only on the violation of business constraints
(Section 6.4), but also on the state of the process instance. A state check is carried out prior to
every atomic adaptation action to ensure that the adaptation is carried out in a state-safe
manner.
At a given time, a process instance can be in any of the states shown in Figure 6-11 (a).
When a process is enacted, the process instance is in the active state. A process instance in the
active state can be either paused or terminated. A paused process instance can become active,
should the operation need to be resumed. A process is terminated when it either completes
naturally or is aborted, i.e., forced completion.
A process instance consists of many tasks that are executed during phase 3. Figure
6-11(b) shows the states of a task in a process instance. All the tasks of a process instance are in
the init state until their pre-conditions are met. Once the pre-conditions are met, the task is in the
active state, which indicates that the obligated role-player is performing the task. Once the
post-conditions are met, the task is considered completed.
Figure 6-11. States of (a) A Process Instance (b) A Task of a Process Instance
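The two state machines of Figure 6-11 can be encoded as transition tables; the event names here are descriptive labels, not framework identifiers:

```python
# Allowed transitions for a process instance (Figure 6-11a) and for a task
# of a process instance (Figure 6-11b).
PROCESS_TRANSITIONS = {
    ('Active', 'pause'): 'Paused',
    ('Paused', 'resume'): 'Active',
    ('Active', 'complete'): 'Terminated',   # natural completion
    ('Active', 'abort'): 'Terminated',      # forced completion
}
TASK_TRANSITIONS = {
    ('Init', 'pre_conditions_met'): 'Active',
    ('Active', 'post_conditions_met'): 'Completed',
}

def fire(transitions, state, event):
    """Return the next state, or raise if the transition is not permitted."""
    if (state, event) not in transitions:
        raise ValueError(f"illegal transition {event!r} from state {state!r}")
    return transitions[(state, event)]
```

Representing the state machines as explicit tables makes the state checks of the next paragraphs a simple membership test.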
Prior to any process-instance specific adaptation the process instance must be in the paused
state (except for operation updateStateOfProcessInst which is used to change process instance
state, e.g., to pause a process instance). If the process instance is in the active state, the engine
temporarily pauses the process instance for the duration of the complete adaptation transaction.
While the process instance is paused, the service orchestration can still receive messages related
to the process instance from the players. However, the sending of messages (service
invocations) is delayed until the process instance resumes its operation.
Table 6-2 summarises the adaptation actions and their relationships with the states of the
process instance and its tasks. For example, the addTaskToProcessInst or
removeTaskFromProcessInst operations are carried out only when the process state is paused
and the task state is init. The same applies to the updateTaskOfProcessInst operation, which
updates a property of a task, except when the property is postEP. The post-event pattern of a
task of a process instance can be modified even if the task is currently in the Active state.
The operation updateStateOfProcessInst is allowed when the process instance is either active
or paused. The other operations, i.e., addConstraintToProcessInst,
removeContraintFromProcessInst, updateContraintOfProcessInst, updateProcessInst,
addNewEventToProcessInst and removeEventFromProcessInst, are allowed when the process
instance is paused. The task status is not applicable (N/A) for these operations as they do not
modify a specific task.
Table 6-2. State Checks for Different Adaptation Actions
Operation                                     Allowed Process Instance Status   Allowed Task Status
addTaskToProcessInst                          Paused                            Init
removeTaskFromProcessInst                     Paused                            Init
updateTaskOfProcessInst (@prop = preEP)       Paused                            Init
updateTaskOfProcessInst (@prop = postEP)      Paused                            Init/Active
updateTaskOfProcessInst (@prop = pp)          Paused                            Init
updateTaskOfProcessInst (@prop = obligRole)   Paused                            Init
updateStateOfProcessInst                      Paused/Active                     N/A
addConstraintToProcessInst                    Paused                            N/A
removeContraintFromProcessInst                Paused                            N/A
updateContraintOfProcessInst                  Paused                            N/A
updateProcessInst (@prop = CoT)11             Paused                            N/A
addNewEventToProcessInst                      Paused                            N/A
removeEventFromProcessInst                    Paused                            N/A
While the process instances are adapted, the structure is stable. The adaptation engine does
not perform parallel adaptations (e.g., two adaptation scripts), to ensure the isolation property
[289] as described in Section 6.3.5. Therefore, it is not possible for the organisational structure
to be changed while a process instance is adapted, unless such changes are specifically included
in the same script.
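The state checks of Table 6-2 amount to a lookup keyed by the operation and, where relevant, the property being changed. The sketch below encodes an excerpt of the table; the names are illustrative:

```python
# Excerpt of Table 6-2. Keys are (operation, property); values are the
# allowed process instance states and allowed task states. None means the
# task state is not applicable to that operation.
ALLOWED = {
    ('addTaskToProcessInst', None):           ({'Paused'}, {'Init'}),
    ('removeTaskFromProcessInst', None):      ({'Paused'}, {'Init'}),
    ('updateTaskOfProcessInst', 'preEP'):     ({'Paused'}, {'Init'}),
    ('updateTaskOfProcessInst', 'postEP'):    ({'Paused'}, {'Init', 'Active'}),
    ('updateTaskOfProcessInst', 'pp'):        ({'Paused'}, {'Init'}),
    ('updateTaskOfProcessInst', 'obligRole'): ({'Paused'}, {'Init'}),
    ('updateStateOfProcessInst', None):       ({'Paused', 'Active'}, None),
    ('addConstraintToProcessInst', None):     ({'Paused'}, None),
}

def state_check(operation, prop, inst_state, task_state=None):
    """True iff the adaptation action may be applied in the current states."""
    inst_states, task_states = ALLOWED[(operation, prop)]
    if inst_state not in inst_states:
        return False
    return task_states is None or task_state in task_states
```

The postEP row is the one exception that allows adapting a task while it is Active, matching the discussion above.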
6.6 Summary
In this chapter we have presented how the adaptation management system of the Serendip
framework is designed. The objective of the adaptation management system is to systematically
carry out the phase 2 and phase 3 adaptations to a service orchestration.
11 Updating CoS is not applicable for a process instance.
An overview was initially presented explaining the different adaptation phases of a service
orchestration. We further extended the previous models of process lifecycle management. Based
on this extended model, we presented the adaptation management system of the Serendip
framework.
The adaptation management system clearly separates the functional and management systems
of a service composite. We further extended the organiser role of the ROAD framework and
provided the runtime support for adaptations via the Adaptation Engine. In designing the
Adaptation Engine, several challenges have been addressed. Various design decisions have
been taken to address these challenges, and they are reflected in the design of the Serendip
adaptation management system.
To support process modifications that may involve multiple adaptation operations, the batch
mode adaptation mechanism is introduced. The batch mode adaptations adhere to the ACID
properties (atomicity, consistency, isolation, durability). A scripting language has been
designed to support the batch mode adaptations. A scheduling mechanism has also been
introduced to schedule and automate the adaptation scripts. The complete adaptation mechanism
supports both operation-based and batch-mode adaptations (scheduled and otherwise).
Manual verification of adaptations is inefficient and impractical. To ensure the business
constraints are not violated during adaptation, an automated validation mechanism has been
introduced. This mechanism supports the two-tier constraint specification for processes (as
described in Chapter 4). The validation mechanism uses a Petri-Net based formal validation tool
to automate the integrity checking upon process adaptations, where a dynamically generated
Petri-Net for a process is validated against the TCTL constraints on-the-fly.
Phase 3 adaptations require additional state checks to ensure that the adaptations are carried
out only when the process instance is in a safe state. Adaptations may be allowed/disallowed
depending on the state a process instance is in. The state transition models for process and task
states were presented. The relationships between different process/task states and the
plausibility of Phase 3 adaptations were discussed.
Part III
Chapter 7
The Serendip Orchestration Framework
Chapter 5 has presented the design of the Serendip runtime to support adaptive service
orchestration, whilst Chapter 6 has shown how the adaptations are managed. These two
chapters have collectively presented the architectural underpinnings to improve the runtime
adaptability of service orchestration. In this chapter, we present the implementation details to
show how those design concepts are implemented in the Serendip Orchestration Framework.
The current implementation of the Serendip Orchestration Framework primarily supports and
uses the Web services technology. It employs Apache Axis2 [297, 298] as the Web service
engine. The issues of transport layer protocols, message processing and security are handled by
the Axis2 engine. This ensures that the implementation conforms to the existing Web services
standards such as SOAP [262] and WSDL [38]. In addition, this work uses the existing ROAD
framework [81] and further improves its capabilities with added process support, as discussed in
Chapter 5. These two major decisions, to reuse the ROAD framework and the Apache Axis2
implementation, have significantly reduced the effort of implementing the framework.
While reusing the existing ROAD framework and Apache Axis2, a range of new components
had to be developed for the overall Serendip framework. The message routing, message
transformation and deployment mechanisms are developed by extending the ROAD framework
to provide better adaptability as required at the orchestration layer. The Apache Axis2 runtime
is extended to support the service composite deployment [261], in addition to Axis2's existing
service deployment. Importantly, the changes are introduced as extensions; no changes are
made to the Axis2 code base, which ensures compatibility with it. The extension, called
ROAD4WS [261], is implemented as another layer on top of Axis2 with additional capabilities
beyond the behaviour of Axis2-core [298, 299]. Such an extension avoids the need to maintain
a dedicated version of Axis2 to support this work.
The Serendip Orchestration Framework can be divided into three main parts, i.e., the
Deployment Layer, the Orchestration Layer and the Tool Support, as shown in Figure 7-1.
Collectively these three parts provide the service orchestration support. The Orchestration
Layer, called the Serendip-Core, is implemented to be independent of the deployment
environment (currently, only Web services deployment is supported). The motivation for
making the core independent is to
allow other future deployment environments to reuse the Serendip-Core. The ROADfactory,
which is the core implementation of the ROAD framework, has been improved and extended
with the process support to implement the Serendip-Core. The Web services deployment layer is
the ROAD4WS extension to Apache Axis2. The Tool Support layer includes various tools
implemented to support software engineers in modelling, monitoring and adapting Serendip
orchestrations. An overview of all the technologies used and the different modules is given in
Figure 7-2. The external software and tools used are shaded in the figure. We will introduce
these existing technologies in upcoming sections as we discuss the relevant components of the
Serendip framework.
Figure 7-1. Layers of Framework Implementation
Figure 7-2. Serendip Framework Implementation and Related Technologies
The rest of this chapter is organised as follows. Section 7.1 provides details on how the
Serendip-Core is implemented, highlighting important aspects. Section 7.2 describes the
deployment environment, i.e., ROAD4WS. Finally, Section 7.3 provides details on the tool
support provided for modelling, monitoring and adapting service orchestrations.
7.1 Serendip-Core
The Serendip-Core is the central implementation of the Serendip Orchestration Framework. It
consists of a number of interrelated components that support the modelling and enactment of
adaptive service orchestrations. It is important that the Serendip-Core implementation delivers
sufficient support for the underlying service orchestration philosophy described in the previous
chapters of this thesis. Consequently, certain principles were followed at the implementation
level. This section discusses those implementation principles.
One of the key considerations behind the Serendip-Core implementation is to build an adaptive
service orchestration engine that is independent of the underlying deployment environment.
The main reason for this design principle is to make the core framework reusable across an
array of different deployment technologies. Moreover, deployment technologies and standards
may evolve over time. For example, apart from SOAP/Web services, which is the currently
supported technology (Section 7.2), alternative technologies such as RESTful services
[263, 300, 301], XML/RPC [264] and mobile Web services [302] exist. Therefore, it is
important to implement the Serendip-Core in a deployment-technology-neutral manner. To
meet this requirement, two implementation decisions were made.
1. The event-based process progression in the engine does not depend on specific
messages but only on events. This decouples the engine operations from the
underlying message mediation mechanisms and the employed technologies.
2. The adaptation, validation, message transformation and message interpretation
mechanisms are implemented in an extensible manner. The validation mechanism
can be extended to support domain-specific constraint languages apart from the
default TCTL expressions [80, 258]. Likewise, future alternatives can replace
XSLT [182], which is used to realise message transformations, and the Drools
engine [303] can be replaced by alternative message interpretation mechanisms.
The core framework is implemented by extending the ROADfactory 1.0, which is the
implementation of the Role Oriented Adaptive Design approach to software systems [81]. The
ROADfactory 1.0 implementation supports the instantiation and management of ROAD
composites, involving roles, contracts and message routing among different roles based on
declarative interaction terms and contractual rules. However, it did not have business process
support. For example, the ROADfactory 1.0 implementation did not have the concept of a
process in its original message routing and synchronisation mechanism. Moreover, it did not
have the concept of a task defined in roles. Therefore, it was not possible to use ROADfactory 1.0
in its original form for this implementation. Instead, the ROADfactory 1.0 implementation is
modified to support the newly introduced process concepts. For example, the message routing
and message synchronisation need to be carried out within the context of a process
instance. Furthermore, in order to support the event-driven nature of the coordination layer, the
contractual message evaluation and routing mechanism had to be modified. In addition, the
concept of task and how it is used to synthesise and synchronise the internal and external
interactions has been introduced. The complete implementation (including support for Serendip,
i.e., the work of this thesis) is now known as the ROADfactory 2.0.
It should be noted that the framework implementation is large and a detailed explanation is
beyond the space allowed in this thesis. Therefore, the next few sections present only the
important implementation details of the framework. For further details, please refer to the
ROAD 2.0 documentation12, which is an integration of both Serendip and ROAD 1.0.
7.1.1 The Event Cloud
The Serendip enactment engine is event-driven. A task becomes doable when its pre-conditions,
specified as patterns of events, are met. As tasks are completed, more events are triggered,
which in turn make further tasks doable. To record and manage these events, the Event Cloud
component is used in the enactment engine implementation.
The implementation of the Event Cloud is carried out according to the publish/subscribe
architecture [304, 305] as discussed in Section 5.3. The publish/subscribe architecture is used to
decouple how information is published from how it is consumed. The data published by a
publisher are not specific to a particular subscriber. Such decoupling provides flexibility:
publishers and subscribers can be added or removed with minimal effect on each other.
The implementation of the Event Cloud and its associated classes is shown in Figure 7-3. The
EventCloud is an object maintained in the Enactment Engine to keep EventRecords. The
Organiser instance (only one for the composite) and the Contract instances (many) can publish
events to the Event Cloud as discussed in Section 5.3.1.
Apart from EventRecords, the EventCloud also maintains a list of subscribers, which can be
dynamically added to or removed from the list. All subscribers must implement the
SerendipEventListener interface. The interface specifies two methods, getEventPatterns()
and eventPatternMatched(). The EventCloud uses the former
12 http://www.swinburne.edu.au/ict/research/cs3/road/implementations.htm
method to retrieve the event patterns that the subscriber is interested in and the latter method to
call the action as implemented locally in the subscriber. Table 7-1 presents what these patterns
are and the defined actions when the event patterns are matched.
Figure 7-3. Class Diagram: Event Cloud Implementation
Table 7-1. Subscribed Event Patterns and Subsequent Actions
Subscriber Type | Subscribed Event Pattern | Action when Event Pattern Matches
Task | Pre-event pattern | Notifies the obligated role to perform the task (e.g., invoke a player service)
Enactment Engine | Condition of Start (CoS) of all the process definitions | Enacts a new process instance of the relevant definition according to the CoS
Process Instance | Condition of Termination (CoT) | Self-terminates upon the CoT
Adaptation Script Scheduler | Event pattern that should trigger a script | Executes the script
Organiser | Event patterns that the organiser player is interested in | Unknown
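To make the publish/subscribe contract above concrete, the following sketch shows the SerendipEventListener interface with the two methods named in the text, plus a hypothetical subscriber. The parameter and return types, and the subscriber class, are assumptions for illustration; the actual framework classes differ.

```java
import java.util.List;

// The subscriber contract described above (method names from the text;
// signatures are assumptions made for this sketch).
interface SerendipEventListener {
    List<String> getEventPatterns();           // patterns this subscriber is interested in
    void eventPatternMatched(String pattern);  // action to run when a pattern is matched
}

// A hypothetical subscriber: runs an adaptation script when tow-truck ordering fails.
class ScriptSchedulerSubscriber implements SerendipEventListener {
    public List<String> getEventPatterns() {
        return List.of("eTowFailed");
    }
    public void eventPatternMatched(String pattern) {
        System.out.println("Executing scheduled adaptation script for: " + pattern);
    }
}
```

The Event Cloud would call getEventPatterns() once at subscription time and eventPatternMatched() whenever a matching set of events is recorded.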
In order to evaluate the event patterns, a library called the Event Pattern Recogniser has been
implemented using the ANTLR parser generator [306]. The Event Cloud uses the Event Pattern
Recogniser to evaluate the patterns based on recorded events. The parser allows the creation of a
Binary Tree [307] from the event pattern expressions. The tree is constructed using the same
grammar used to create the pre-tree of an atomic graph, as described in Section 5.5.1. The
interior nodes of the tree are operators (AND = *, OR = |, XOR = ^) and the leaf nodes are
events, as shown in Figure 7-4. During pattern evaluation, the events are replaced
with true/false values depending on whether the event has been triggered or not. Finally, the
tree is traversed (depth-first), evaluating each node until the top-most node is evaluated, which
produces an outcome of either true or false.
Figure 7-4. A Binary Tree to Evaluate Event Patterns
Figure 7-5 shows an example, where the event pattern “(e1 * e2) | e3” is converted to a
Binary Tree. Suppose events e1 and e2 have been triggered but e3 has not. As the next step, the
tree is annotated with true/false values. The tree traversal produces the outcome, true, which
indicates that the event pattern has been matched.
Figure 7-5. Example Event-Pattern Evaluation
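The evaluation just described can be sketched as a small recursive tree walk. The class names below are hypothetical simplifications (the actual Event Pattern Recogniser is ANTLR-generated); only the operator semantics follow the text.

```java
import java.util.Set;

// Sketch of an event-pattern binary tree: leaves are events, interior
// nodes are operators; eval() asks whether the pattern is matched by
// the set of triggered events.
abstract class Node {
    abstract boolean eval(Set<String> triggered);
}

// Leaf node: true iff the event has been triggered.
class EventLeaf extends Node {
    final String eventId;
    EventLeaf(String eventId) { this.eventId = eventId; }
    boolean eval(Set<String> triggered) { return triggered.contains(eventId); }
}

// Interior node: applies AND (*), OR (|) or XOR (^) to its children.
class OpNode extends Node {
    final char op;
    final Node left, right;
    OpNode(char op, Node left, Node right) { this.op = op; this.left = left; this.right = right; }
    boolean eval(Set<String> triggered) {
        boolean l = left.eval(triggered), r = right.eval(triggered);
        switch (op) {
            case '*': return l && r;
            case '|': return l || r;
            case '^': return l ^ r;
            default: throw new IllegalArgumentException("unknown operator: " + op);
        }
    }
}
```

For the pattern (e1 * e2) | e3 with e1 and e2 triggered, `new OpNode('|', new OpNode('*', new EventLeaf("e1"), new EventLeaf("e2")), new EventLeaf("e3")).eval(Set.of("e1", "e2"))` evaluates to true, mirroring the example in Figure 7-5.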
7.1.2 Interpreting Interactions
Serendip processes progress based on triggered events. These events are triggered by
interpreting the interactions between roles according to the defined service relationships
(Section 5.3.2). In order to interpret such interactions and trigger events, this work uses
declarative rules specified in the Drools rule language [176].
From the implementation point of view, there are many advantages of integrating Drools
[176, 308] as the business rules engine. Firstly, Drools supports the Java Rule Engine API
(JSR 94) [309], the standard Java runtime API for accessing a rules engine from the Java
platform. Secondly, the Drools engine is implemented based on the RETE algorithm [310], a
pattern-matching algorithm whose cost is largely independent of the number of rules being
executed [311]. This improves the efficiency of decision making at the Serendip runtime and
thereby the overall process execution. Thirdly, the Drools framework has rich tool support for
writing and executing business rules. Fourthly, the language is expressive and easy to follow.
Fifthly, Drools also supports an XML-based rule syntax, an advantage when interchangeability
of rule files is a requirement. Finally, Drools is free and has a strong user and developer
community, which helps a Serendip user quickly learn and use the language.
Apart from the above advantages, Drools is capable of maintaining a long-lived and
iteratively modifiable state, which is important for representing the state of a contract. The
contract state keeps (a) facts that capture the current context/state of the service relationship
and (b) rules that specify how to interpret the interactions according to that context. Drools
allows these facts (Java objects) and rules (Drools rules) to be dynamically injected into a state
called a Stateful Knowledge Session [312]. A Stateful Knowledge Session is long-lived and
can be iteratively modified at runtime (in contrast to its counterpart, the Stateless Knowledge
Session). A contract state is thus represented as a Stateful Knowledge Session in Drools. When
a contract is instantiated, a Stateful Knowledge Session is instantiated. As the roles bound by
the contract interact, this session is updated. When the contract needs to be removed, the
session is 'disposed' [312].
Figure 7-6 shows the classes associated with the Drools implementation. As shown, any rule
implementation to be integrated with the core framework has to implement the ContractRule
interface. Accordingly, DroolsContractRule implements this interface as the Drools plugin
that interprets messages using Drools rules. A contract maintains its contract rules (a rule
base) at runtime. New rules can be added to and existing rules removed from the rule base.
The rule base is eventually deleted when no longer required, i.e., when the contract is removed.
When a message needs to be interpreted, the specified rules are fired via the fireRule() method.
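The plugin pattern described here can be sketched as follows. The fireRule() signature and the recording implementation are assumptions for illustration; the real DroolsContractRule delegates to a Drools Stateful Knowledge Session rather than a list.

```java
import java.util.ArrayList;
import java.util.List;

// The rule-plugin contract: any rule engine integration implements
// ContractRule (the signature of fireRule() is an assumption here).
interface ContractRule {
    void fireRule(Object message);
}

// A hypothetical stand-in for DroolsContractRule that simply records
// which messages it has interpreted.
class RecordingContractRule implements ContractRule {
    private final List<Object> interpreted = new ArrayList<>();
    public void fireRule(Object message) {
        interpreted.add(message);   // a real plugin would fire Drools rules here
    }
    public List<Object> getInterpreted() { return interpreted; }
}
```

Because the contract only depends on the ContractRule interface, the Drools plugin can be swapped for another rule engine without touching the contract implementation, as the extensibility decision in Section 7.1 requires.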
The events triggered during the evaluation of the result messages of a task should adhere to the
post-event-pattern of the task. A runtime entity called an EventObserver, associated with the
task, determines whether the fired events adhere to the post-event-pattern of a given task of a
given process instance by analysing the collective outcome of all the triggered events. For
example, exactly one of the events eTowSuccess and eTowFailed should be triggered as part of
a single rule evaluation upon completion of task tTow, because its post-event-pattern is
"eTowSuccess XOR eTowFailed". When a task is completed, an instance of EventObserver is
created and shared by reference with all the result messages. When a result message is
interpreted, the EventObserver collects the triggered events and marks that result message as
interpreted. Once all the result messages have been interpreted, the EventObserver determines
whether the triggered set of events adheres to the post-event-pattern of the task. Finally, the
resulting event records are published in the Event Cloud.
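The XOR check at the heart of this mechanism can be sketched as below. This is a hypothetical, much-simplified observer (the real EventObserver evaluates arbitrary post-event patterns, not just a two-way XOR).

```java
import java.util.HashSet;
import java.util.Set;

// A simplified sketch of the EventObserver check for an XOR
// post-event-pattern such as "eTowSuccess XOR eTowFailed".
class XorPatternObserver {
    private final Set<String> triggered = new HashSet<>();

    // Called once per event collected from an interpreted result message.
    void collect(String event) { triggered.add(event); }

    // True iff exactly one of the two alternative events was triggered.
    boolean satisfiesXor(String a, String b) {
        return triggered.contains(a) ^ triggered.contains(b);
    }
}
```

If both eTowSuccess and eTowFailed were collected (or neither), satisfiesXor() returns false and the post-event-pattern of tTow is violated.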
Figure 7-6. Class Diagram: Contract Implementation
Drools rules are condition-action rules [176]. Therefore, a Drools rule contains two main
parts, i.e., the when and then parts, which are also referred to as the Left Hand Side (LHS) and
the Right Hand Side (RHS).
The when part specifies a certain condition to trigger the rule. This condition may be
evaluated to true or false based on the properties of facts in the Stateful Knowledge
Session [308]. Here a fact is a Java object, such as a message inserted into the Stateful
Knowledge Session, or a record of the number of requests made over the contract.
The then part specifies what needs to be done if the condition is evaluated to true.
The business logic of the contract to inspect and interpret messages is written here.
Listing 7-1 shows a sample rule file which interprets the interaction orderTow of contract
CO_TT. The first rule interprets the “request” message from Case-Officer to Tow-Truck for the
operation “orderTow”. This condition is fulfilled by evaluating the two properties of the
message operationName and isResponse in the when part of the rule. In this case
operationName is “orderTow” and isResponse is false, stating that the request of orderTow
interaction will trigger this rule. The then part of this rule contains the business logic, i.e., the
event, eTowReqd, that should be triggered.
The second rule is to evaluate the message in the response path. In this rule, either of the
eTowSuccess or eTowFailed events is triggered depending on the response.
Listing 7-1. Two Rules to Interpret Interaction orderTow Specified in CO_TT.drl
/** 1. The rule to evaluate the interaction orderTow on the request path **/
rule "orderTowRequestRule"
when
    $msg : MessageRecievedEvent(operationName == "orderTow", response == false)
then
    $msg.triggerEvent("eTowReqd");
end

/** 2. The rule to evaluate the interaction orderTow on the response path **/
rule "orderTowResponseRule"
when
    $msg : MessageRecievedEvent(operationName == "orderTow", response == true)
then
    //If everything is OK trigger success event
    $msg.triggerEvent("eTowSuccess");
    //If something is wrong with the interaction trigger the failure event
    //$msg.triggerEvent("eTowFailed");
end
7.1.3 Message Synchronisation and Synthesis
Section 5.4 presented how the internal interactions between roles are used to compose external
interactions and vice versa. This section describes how the current implementation supports
these transformations using the Default XSLTMessageAnalyser. The XSLTMessageAnalyser is
an XSLT-based [182, 313] implementation to transform XML-based messages such as SOAP
messages exchanged among Web services. XSLT transformations are dynamically loaded and
executed. This allows the message synthesis logic to independently evolve from the rest of
service orchestration logic.
XSLT has been chosen due to the many advantages it offers. XSLT is a well-established and
popular standard for transforming XML documents [314]. Different parts of a SOAP message
can be referenced using XPath inside an XSLT descriptor [183]. Complex transformations can
be written using high-level constructs, which are more resilient than low-level custom
transformations using, e.g., DOM/SAX [315]. XSLT is platform independent. It also has
good tool support and is well-understood within the community.
Let us elaborate how the message synchronisation and synthesis mechanism has been
implemented using XSLT. As shown in Figure 7-7, the XSLTMessageAnalyser implements the
generic MessageAnalyser interface which consists of two methods, inTransform() and
outTransform(). A role maintains a MessageAnalyser13 for this purpose. When a task of the role
becomes doable, the outTransform() method will be called; when a message is received from
the player, the inTransform() method will be called. Any MessageAnalyser can use the buffers
13 The current implementation only supports an XSLT Message Analyser.
and queues of the role to select and place messages (Section 5.4). The XSLTMessageAnalyser
uses javax.xml.transform.TransformerFactory to perform the transformations.
Figure 7-7. Class Diagram: Message Synchronisation Implementation
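The core of such a transformation step, using the JDK's javax.xml.transform API as the XSLTMessageAnalyser does, can be sketched as follows. The class name and error handling here are illustrative, not the framework's actual code.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// A minimal sketch of applying an XSLT stylesheet to an XML message
// (e.g., a SOAP envelope) with the standard JDK transformation API.
class XsltTransformSketch {
    static String transform(String xslt, String inputXml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(inputXml)), new StreamResult(out));
            return out.toString();
        } catch (TransformerException e) {
            throw new RuntimeException("transformation failed", e);
        }
    }
}
```

In the framework, the stylesheet would be one of the dynamically loaded files such as tTow.xsl, with the source messages passed in as XSLT parameters rather than as the single input document used in this sketch.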
Listing 7-2 shows the task description of tTow that specifies what internal messages are used
to extract the information (i.e., clause UsingMsgs) for constructing the request to the player. The
given example uses two messages, CO_TT.mOrderTow.Req and
GR_TT.mSendGarageLocaiton.Req. The task description also specifies what internal messages
are generated (i.e. clause ResultingMsgs) from the player response, i.e.
CO_TT.mOrderTow.Res, GR_TT.mSendGarageLocaiton.Res.
Depending on the task description, a number of template XSLT transformation files are
generated by the Serendip Modelling Tool, i.e., one template for the out-transformation and
one for each result message. The transformation file for the out-transformation of tTow is
given in Listing 7-3. As shown, the two messages CO_TT.orderTow.Req and
GR_TT.sendGRLocation.Req can be accessed via the specified XSLT parameters. Within an
XSLT, a developer can use XPath expressions [316] to refer to the specific parts of these
messages, i.e., the information to be extracted. Likewise, Listing 7-4 shows the XSLT
transformation to create the internal message CO_TT.orderTow.Res from the player response
to the task tTow.
Listing 7-2. A Sample Task Description
Task tTow {
    UsingMsgs CO_TT.mOrderTow.Req, GR_TT.mSendGarageLocaiton.Req;
    ResultingMsgs CO_TT.mOrderTow.Res, GR_TT.mSendGarageLocaiton.Res;
}
Listing 7-3. XSLT File for Out–Transformation, tTow.xsl
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:q0="http://ws.apache.org/axis2">
  <xsl:output method="xml" indent="yes" />
  <xsl:param name="CO_TT.orderTow.Req" />
  <xsl:param name="GR_TT.sendGRLocation.Req" />
  <xsl:template match="/">
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <q0:tow xmlns:q0="http://ws.apache.org/axis2">
          <pickupLocation>
            <xsl:value-of select="$CO_TT.orderTow.Req/soapenv:Envelope/soapenv:Body/q0:orderTow/pickupInfo" />
          </pickupLocation>
          <garageLocation>
            <xsl:value-of select="$GR_TT.sendGRLocation.Req/soapenv:Envelope/soapenv:Body/q0:sendGRLocation/content" />
          </garageLocation>
        </q0:tow>
      </soapenv:Body>
    </soapenv:Envelope>
  </xsl:template>
</xsl:stylesheet>
Listing 7-4. XSLT File for In-Transformation, CO_TT_orderTow_Res.xsl
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:q0="http://ws.apache.org/axis2">
  <xsl:output method="xml" indent="yes" />
  <xsl:param name="tTow.doneMsg" />
  <xsl:template match="/">
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <q0:orderTowResponse xmlns:q0="http://ws.apache.org/axis2">
          <return>
            <xsl:value-of select="$tTow.doneMsg/soapenv:Envelope/soapenv:Body/q0:towResponse/return/orderTowResponse" />
          </return>
        </q0:orderTowResponse>
      </soapenv:Body>
    </soapenv:Envelope>
  </xsl:template>
</xsl:stylesheet>
7.1.4 The Validation Module
The Serendip framework provides an in-built process validation mechanism based on Petri-Nets
[99] and TCTL [254, 255] as discussed in Section 6.4. This section provides more specific
implementation details on how the validation module works.
The Petri-Net-based validator is implemented using the Romeo Model Checker tool14. Romeo
supports the analysis of Time Petri-Nets and is developed by the Real-Time Systems team at
IRCCyN15. It also provides support for validating TCTL constraints. The Romeo Model
Checker is free and widely used.
As part of the integration effort, a Java plugin has been developed to integrate Romeo with
the Serendip-Core. The classes associated with the plugin are shown in Figure 7-8.
RomeoPNValidation implements the SerendipValidation interface. The Adaptation Engine keeps a
reference to the validation module, and calls the plugin when validation is required. The plugin
transforms the task dependency specification into a Petri-Net model using the PetriNetBuilder.
The PetriNetBuilder uses the ProM framework [279] as a library for this purpose. Then the
generated Petri-Net model is translated into a Romeo-compatible representation (see Listing 7-5
for an example fragment from the process instance pdGold001).
Figure 7-8. Class Diagram: Validation Module Implementation
Likewise, the CTLWriter class has been implemented to translate the TCTL constraint
expressions into a Romeo-compatible representation (see Listing 7-6 for an example TCTL-based
specification for constraints of process instance pdGold001). The Romeo tool does not provide
a Java interface. Therefore, when the validation is required a native call to the Romeo tool is
made via the RomeoPNValidation class to validate the generated Petri-Net against the TCTL-
based constraint specification. Upon completion of the validation, the Adaptation Engine
14 http://romeo.rts-software.org/
15 http://www.irccyn.ec-nantes.fr/?lang=en
receives the result of success or failure. If the validation fails, error messages concerning the
constraint violation are generated.
Listing 7-5. Generated Romeo Compatible Petri-Net File
<?xml version="1.0" encoding="UTF-8"?>
<TPN name="pdGold001">
  <!-- places (events or generated intermediary places) -->
  <place id="18" label="ComplainRcvd" initialMarking="0">... </place>
  <place id="19" label="p1" initialMarking="0">... </place>
  ...
  <!-- transitions (tasks or generated intermediary transitions) -->
  <transition id="11" label="CO.Analyse" eft="0" lft="33"> ... </transition>
  <transition id="12" label="TT.Tow" eft="0" lft="33"> ... </transition>
  ...
  <!-- arcs (generated connections between places and transitions) -->
  <arc place="18" transition="11" type="PlaceTransition" weight="1"> ... </arc>
  <arc place="19" transition="11" type="TransitionPlace" weight="1"> ... </arc>
  ...
</TPN>
Listing 7-6. Generated Romeo Compatible TCTL Expressions
(M(32)>0)->(M(34)>0)
(M(29)>0)->(M(33)>0)
(M(37)>0)->(M(39)>0)
7.1.5 The Model Provider Factory
The Model Provider Factory (MPF) maintains the definition of an enacted composite. The main
classes associated with the MPF are shown in Figure 7-9. The types of roles, behaviour units,
process definitions, process instances, player bindings and contracts are maintained by the
MPF. These are called runtime (definition) models. At runtime, these models can be modified
to support type-level changes: new types can be added, and existing types can be modified or
removed. The runtime models are used by the enactment engine to enact new process
instances. For example, if a process instance for a gold member needs to be enacted, the engine
checks out a new process instance model based on the runtime (definition) models belonging to
the process definition pdGold. This instance model is also maintained in the MPF during its
execution.
The current implementation uses JAXB 2.0 [317] and XML Schema [318]. The runtime
models are loaded into the MPF as JAXB 2.0 bindings. JAXB supports the automatic
generation of the classes and interfaces of the runtime models from an XML Schema. The
XML Schema-based implementation also facilitates future improvements to the framework.
For example, modelling tools can be developed independently from the core framework as
long as the output
is compatible with the ROAD 2.0 schema (.xsd). One such tool that already exists is the
Serendip Modelling Tool (Section 7.3.1), which generates ROAD descriptors compatible with
the ROAD 2.0 schema. Another example is the ROADdesigner [319], an Eclipse GMF-based
[320] graphical designer16.
Figure 7-9. Class Diagram: MPF Implementation
JAXB 2.0 supports the translation of an XML representation to Java classes and vice versa.
These processes are commonly known as unmarshalling and marshalling [321, 322].
Correspondingly, two classes for marshalling and unmarshalling have been implemented. In
unmarshalling, an XML document is converted into a tree of Java content objects. Marshalling
is the reverse process, converting a tree of Java content objects back into an XML document.
Using unmarshalling, the framework creates the runtime models from a ROAD 2.0 descriptor
file, as shown in Figure 7-10. Marshalling is used to take snapshots of the current description
of the runtime models to be saved as XML files.
The JAXB 2.0 binding compiler generates the ROAD 2.0 parser and binder based on the
ROAD 2.0 schema (.xsd). The ROAD 2.0 Parser and Binder are used to generate the runtime
models based on a descriptor such as RoSAS.xml. The descriptor file must conform to the
ROAD 2.0 schema; therefore, any tool that generates a descriptor file should ensure that its
output conforms to the schema.
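Such a conformance check can be sketched with the JDK's javax.xml.validation API. The inline schema and element names below are illustrative only, not the actual ROAD 2.0 schema.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import java.io.StringReader;

// A minimal sketch of checking that a descriptor conforms to an XML
// Schema, as a descriptor-generating tool might do before deployment.
class SchemaConformance {
    static boolean conforms(String xsd, String xml) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            schema.newValidator().validate(new StreamSource(new StringReader(xml)));
            return true;   // no exception: the document conforms
        } catch (Exception e) {
            return false;  // parse or validation error: it does not
        }
    }
}
```

In practice the schema would be loaded from the ROAD 2.0 .xsd file and the document would be a descriptor such as RoSAS.xml.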
16 ROADdesigner was still under development at the time of writing this thesis, hence the lack of detail.
Figure 7-10. Marshalling and Unmarshalling via JAXB 2.0 Bindings
7.1.6 The Adaptation Engine
The main purpose of the Adaptation Engine is to provide the process adaptation support in a
timely, consistent and accurate manner. The design of the Adaptation Engine and its mechanism
is discussed in Chapter 6 in detail. In this section we present the details on how the Adaptation
Engine is implemented and what the related technologies are.
The implementation addresses the following requirements:
1. The implementation should provide a unified and consistent method to perform both
operation-based and batch mode adaptations.
2. The implementation should be extensible. That is, future additions to the framework
should be able to support additional types of adaptations, should the Serendip meta-model
be extended, without requiring a change to the core adaptation mechanism.
3. The implementation should provide an error recovery mechanism.
The related classes of the Adaptation Engine implementation are shown in Figure 7-11. The
organiser refers to the Adaptation Engine to perform the adaptations. The Adaptation Engine
can be used for both operation-based and batch mode adaptations (Section 6.3). In both cases
the Adaptation Engine accepts atomic adaptation actions. These atomic adaptation actions
come either from the AdaptationScriptingEngine or from the organiser role itself. Any atomic
adaptation action must implement the AdaptationAction interface.
Figure 7-11. Class Diagram: Adaptation Engine Implementation
The AdaptationAction interface has two specialisations, i.e., DefAdaptAction and
InstanceAdaptAction, for definition-level adaptations and process-instance-level adaptations
respectively. The atomic adaptation actions17, e.g., DefPDUpdateAction, implement either of
these interfaces. Each atomic adaptation action specifies (a) the pre-checks that are carried out
prior to the adaptation and (b) the actual adaptation steps, by implementing the adapt()
method. For clarity, Listing 7-7 shows a sample adaptation action. The Adaptation Engine
performs these atomic adaptation actions sequentially by executing the adapt() method, either
as a batch or individually. The framework can be extended to support adaptations of future
additions to the meta-model by adding more such adaptation actions, without changing the
core adaptation mechanism.
If an error occurs during the adaptation, an AdaptationException is thrown and caught by
the Adaptation Engine. In such a case, the adaptation is aborted and the backup is restored to
discard all the changes. If all the adaptation actions are successful, then the validation module
(Section 7.1.4) is used by the Adaptation Engine to check the integrity of the adapted model(s)
(Section 6.4). If the integrity check results in a constraint violation, again the changes are
discarded and the backup is restored. Otherwise the changes are accepted as valid and the
backup is discarded.
17 Only four atomic adaptation actions are shown here for clarity.
Listing 7-7. A Sample Adaptation Action
public class InstanceTaskPropertyAdaptationAction implements InstanceAdaptAction {

    //Properties of the Adaptation Action
    private String taskId;
    private String propertyId;
    private String newVal;

    //The constructor of the Adaptation Action
    public InstanceTaskPropertyAdaptationAction(String taskId, String propertyId, String val) {
        this.taskId = taskId;
        this.propertyId = propertyId;
        this.newVal = val;
    }

    //The body of the Adaptation Action
    @Override
    public boolean adapt(ProcessInstance pi) throws AdaptationException {
        //Perform state checks (optional). Throws an exception upon an error.
        //Perform adaptations. Throws an exception upon an error.
        return true;
    }
}
The Adaptation Engine uses a parser to parse an adaptation script and create adaptation
actions. The Lexer and Parser for the scripting language are created using ANTLR [306]. The
grammar for the scripting language is available in Appendix D. Once an adaptation script is
received at the Adaptation Scripting Engine, the engine uses the ScriptParser to create an
abstract syntax tree (AST). The AST is an abstract representation of the parsed data in a tree
structure. The root node is a type of Script and the leaf nodes are Commands. The Blocks that
define a scope for commands (Section 6.3.3) are the intermediary nodes of the tree. The AST is
then used to create an adaptation action instance for each command of the tree.
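The final step of that walk, instantiating an adaptation action per leaf Command, can be sketched as below. The command name "updateInstanceTaskProperty" and the factory class are assumptions made for illustration; the action class is a reduced form of the one in Listing 7-7.

```java
// Hypothetical mapping from a parsed Command node of the AST to an
// adaptation action instance. The command name and factory class are
// illustrative, not the actual Serendip script grammar.
import java.util.List;

class InstanceTaskPropertyAdaptationAction {
    final String taskId;
    final String propertyId;
    final String newVal;

    InstanceTaskPropertyAdaptationAction(String taskId, String propertyId, String newVal) {
        this.taskId = taskId;
        this.propertyId = propertyId;
        this.newVal = newVal;
    }
}

class CommandToActionFactory {

    // Creates the adaptation action corresponding to one leaf Command.
    static Object fromCommand(String commandName, List<String> args) {
        if ("updateInstanceTaskProperty".equals(commandName)) {
            return new InstanceTaskPropertyAdaptationAction(args.get(0), args.get(1), args.get(2));
        }
        throw new IllegalArgumentException("Unsupported command: " + commandName);
    }
}
```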
7.2 Deployment Environment
In order to deploy Serendip service orchestrations, we have extended the Apache Axis2 Web
services engine. The extension is called ROAD4WS. In this section we introduce Axis2 and
explain how it is extended to provide support for adaptive service orchestration.
7.2.1 Apache Axis2
Apache Axis2 [298] is a free and open source Web services engine which provides
functionalities to deploy and consume Web services. The modular design in the Axis2
architecture has made it possible to extend the capabilities of the engine with minimal or no
changes to its core. For example the engine allows addition of custom Deployers, modules that
can further extend the service deployments and message processing capabilities of Axis2. Such
modules as Rampart18 and Sandesha19 have already been developed to extend Axis2’s
capabilities. The Axis2 engine can be deployed in a servlet container such as Apache Tomcat20.
Both SOAP- and REST-styled Web services are supported. With the built-in XML info-set
model (i.e., AXIOM21) and pull-based parser, Axis2 provides speedy and efficient parsing of
SOAP messages. In addition, Axis2 supports different transport protocols such as HTTP(/S),
SMTP and FTP. All these factors make Axis2 suitable to serve as the basis in implementing the
ROAD4WS extension.
7.2.2 ROAD4WS
ROAD4WS is a Web services deployment container, which enables the orchestration of Web
services using Serendip processes. ROAD4WS allows the embedding of the Serendip-Core
within the Apache Axis2 runtime. The implementation ensures forward compatibility by
not demanding any modifications to the Axis2 code base, which allows easy integration of
ROAD4WS into already installed Axis2 environments. Rather than modifying the Axis2
runtime, ROAD4WS extends it in three areas, i.e., the ROAD-specific Modules, Message
Processing and Deployment, as shown in Figure 7-12. The rest of the Messaging and Transport
layers of Axis2 can be used without any extensions.
Figure 7-12. ROAD4WS - Extending Apache Axis2
Modules: Axis2 allows custom modules to be incorporated into its core functionality [299].
A custom module called ROAD Module is implemented. ROAD Module integrates the
Serendip process enactment runtime with Axis2. The required libraries to realise the
Serendip orchestration are loaded via the ROAD Module.
18 http://axis.apache.org/axis2/java/rampart/ 19 http://axis.apache.org/axis2/java/sandesha/ 20 http://tomcat.apache.org/ 21 http://ws.apache.org/axiom/
Message Processing: The Message Processing component has been extended with the
ROAD Message Receivers and ROAD Message Senders. The ROAD Message Receivers
identify and intercept messages targeting ROAD composites, subsequently route them to the
appropriate ROAD composites and then to the contracts. There are two types of Message
Receivers, processing RequestResponse (Synchronous/InOut22) and RequestOnly
(Asynchronous/InOnly) messages from the external players. These message receivers are
associated with the exposed operations according to their type (i.e., request-response or
request-only). In contrast, the ROAD Message Senders allow messages to be dispatched to the
external players. Mirroring the ROAD Message Receivers, the ROAD Message Senders support
the Out-In and Out-Only patterns of Axis2, and are implemented using the Axis2 Client API.
Deployment: The ROAD Deployer registers itself with the Axis2 kernel to listen to
deployment changes in the Axis2 repository in order to deploy the orchestration descriptors.
This allows Serendip orchestration descriptors to be deployed dynamically without
requiring a system restart: the software engineer simply drops an orchestration descriptor
into a pre-configured location. The next section (Section 7.2.3) provides more details about
Serendip descriptors and their deployment.
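The hot-deployment behaviour can be illustrated with a simple repository scan: descriptor files newly found in the road_composites directory are deployed without a restart. This is a minimal sketch under stated assumptions; the actual ROAD Deployer plugs into the Axis2 deployment framework rather than polling, and the class name here is hypothetical.

```java
// Minimal sketch of hot deployment: scan the road_composites directory
// and report descriptors not seen before. Illustrative only; the real
// ROAD Deployer registers with the Axis2 kernel instead of polling.
import java.io.File;
import java.util.HashSet;
import java.util.Set;

class CompositeHotDeployer {

    private final Set<String> deployed = new HashSet<>();

    // Returns descriptor file names newly found since the last scan;
    // each new descriptor would trigger composite instantiation.
    Set<String> scan(File repositoryDir) {
        Set<String> newlyDeployed = new HashSet<>();
        File[] files = repositoryDir.listFiles((dir, name) -> name.endsWith(".xml"));
        if (files == null) {
            return newlyDeployed;              // directory missing or unreadable
        }
        for (File f : files) {
            if (deployed.add(f.getName())) {   // true only for unseen names
                newlyDeployed.add(f.getName());
            }
        }
        return newlyDeployed;
    }
}
```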
7.2.3 Dynamic Deployments
In order to initiate a Serendip composite, a descriptor (.xml) needs to be deployed via
ROAD4WS. Figure 7-13 shows the directory structure inside an Axis2 repository deployed in
an Apache Tomcat web server. Every directory except road_composites exists by default. Once
ROAD4WS is installed, a new directory called road_composites is created under
AXIS2_HOME. Service orchestration descriptors such as RoSAS.xml (Appendix B) are placed
inside this road_composites directory. This can be done while the server (Tomcat) is running.
Once deployed, the descriptor is automatically picked up by ROAD4WS via
the ROAD Deployer and the structure of the composition is instantiated. This includes the
creation of service interfaces (provided interfaces) for each role and the formation of
the contracts among roles. The service interfaces are exposed according to the WSDL 2.0
standard [38]. Each operation of a role is exposed as a WSDL operation [261]. These
interfaces allow any WSDL 2.0 compatible client API to be used to invoke the role operations.
Then the orchestration engine, the adaptation engine, the model provider factory and the
validation modules are instantiated for a given composite. In addition, the organiser interface or
22 InOut and OutIn terms are used to comply with the Axis2 terminology.
the organiser service proxy is generated for the organiser player to bind to. This interface is the
gateway to the organiser role for the player to manipulate the service orchestration at runtime.
Figure 7-13. ROAD4WS Directory Structure
The signature of the address of a deployed proxy service is as follows:
http://<HOST>:<PORT>/axis2/services/<ORGANISATION-ID>_<ROLE-ID>.
Figure 7-14 shows that the role MM (member) of the RoSAS organisation (rosassmc) is
deployed in the server under the endpoint reference (EPR),
http://127.0.0.1:8080/axis2/services/rosassmc_mm. The service has only one operation, i.e.,
complain. The WSDL description and the message types can be viewed via the URL
http://127.0.0.1:8080/axis2/services/rosassmc_mm?wsdl.
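Given the signature above, constructing a proxy service's endpoint reference is a simple string composition. The helper below is hypothetical (for illustration only) and reproduces the EPR of the rosassmc_mm example:

```java
// Hypothetical helper reproducing the EPR signature given above:
// http://<HOST>:<PORT>/axis2/services/<ORGANISATION-ID>_<ROLE-ID>
class EprBuilder {

    static String epr(String host, int port, String organisationId, String roleId) {
        return "http://" + host + ":" + port + "/axis2/services/" + organisationId + "_" + roleId;
    }
}
```

For example, epr("127.0.0.1", 8080, "rosassmc", "mm") yields the endpoint shown in Figure 7-14.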
Figure 7-14. Role Operations Exposed as Web service Operations
ROAD4WS dynamically adjusts the deployed interfaces depending on the runtime changes to
roles and contracts. For example, new roles and contracts can be introduced, and existing roles
and contracts can be modified or removed. ROAD4WS automatically detects these changes in
the deployed composites and adjust the proxy services and their WSDL operations accordingly.
ROAD4WS supports the deployment of multiple composites. This allows the decomposition
of a large service composite into smaller sub-composites in a hierarchical structure to separate
the management concerns [81, 261]. For example, the CaseOfficer service can be played by
another ROAD composite which defines its own processes and structures to manage the
customer complaints or requests.
7.2.4 Orchestrating Web Services
Once deployed and initiated, the composite is ready to accept service requests via the exposed
service interfaces. For example, a client application installed on a motorist's mobile device may
send a service request to the composite via the rosassmc_member interface (Figure 7-14). Then
the composite enacts the corresponding business process as described in Section 5.2.
Subsequently, the bound partner services can exchange messages via the composite according to
the process. The role-player’s implementation can be either Web services, Web-service-clients
or a combination of both as shown in Figure 7-15. For example, a mobile app installed in a
client’s mobile phone is a client-only player. A tow-truck scheduling service can be a service-
only player, which does not actively communicate with the composite but expects the composite
to invoke the service upon a towing requirement. On the other hand, a job scheduling
application at a Garage is required to actively communicate with the composite (using a Web
service client API such as Axis2 client API), and hence needs to act as both a client and a
service.
The type of player required depends on the type of interface that a role exposes.
According to ROAD [81], there are two types of interfaces that a role may have, i.e., the
Required and Provided interfaces.
1. Required Interface: The interface that should be implemented by the player. This
interface is generated if a role needs to invoke the player. The role (as a client)
invokes the player (service) via this interface.
2. Provided Interface: The interface provided by the role to the player. This interface is
generated if the player needs to invoke the role. The player (as a client) invokes the
role operation (service) via this interface.
In ROAD4WS these are represented as WSDL interfaces as mentioned in the previous section.
At runtime, the role and its bound player exchange messages via these required and
provided interfaces.
Figure 7-15. Orchestrating Partner Web services
Section 5.4 has explained how the service invocation requests are initiated based on task
descriptions. In addition, Section 7.1.3 has explained how the external SOAP messages are
composed via XSLT transformations. To complete tasks, the messages need to be exchanged
with bound players or services, which usually reside in a distributed environment. This
requirement has been addressed by extending the Axis2 SOAP Processing Model [298]. Figure
7-16 shows how the implementation has extended the Axis2 Message Processing Model to
intercept messages. The left-side of the figure shows how an incoming message (SOAP) is
handled, whilst the right-side shows how the outgoing messages are handled.
Figure 7-16. Message Routing towards/from Service Composites
The message is initially processed by a Transport Listener such as HTTP or TCP Transport
Listener [298] in Axis2. Then the message has to pass through a pipe of handlers, e.g., WS-
Addressing handlers and WS-Security handlers, depending on the Axis2 configuration [299].
For example if WS-Addressing is enabled the corresponding handler will interpret the WS-
Addressing headers [22]. If WS-Security features are enabled, the messages will be processed
for security requirements [323]. The ROAD Message Receiver that is placed after these handlers
dispatches the message to the appropriate service composite, e.g., RoSAS.
The above design came with two challenges that needed to be addressed.
1. The descriptors are dynamically loaded. It is not possible to configure a static message
receiver.
2. The messages that target default services should not be intercepted. That is, only the
messages targeting ROAD composites should be intercepted.
In order to address both of these challenges, each operation of a dynamically created role
interface (proxy service) is allocated an instance of a ROAD Message Receiver at the time
of instantiation. With such a design, dynamic deployment is supported while the unnecessary
interception of non-ROAD messages is avoided. Axis2 can still host standard Web services
and continue to dispatch messages to them.
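This per-operation allocation can be sketched as a simple registry: operations of dynamically created role interfaces map to dedicated receiver instances, while all other operations fall back to the default Axis2 receiver. All names below are illustrative simplifications, not the ROAD4WS classes.

```java
// Sketch of per-operation receiver allocation. Operations of ROAD
// proxy services get their own receiver instance; everything else
// keeps the default receiver, so non-ROAD messages are untouched.
import java.util.HashMap;
import java.util.Map;

class ReceiverRegistry {

    private final Object defaultReceiver = new Object();
    private final Map<String, Object> roadReceivers = new HashMap<>();

    // Called when a role operation (proxy service) is instantiated.
    void registerRoadOperation(String operationKey) {
        roadReceivers.put(operationKey, new Object());
    }

    // ROAD operations resolve to their dedicated receiver; non-ROAD
    // operations fall through to the default one.
    Object receiverFor(String operationKey) {
        return roadReceivers.getOrDefault(operationKey, defaultReceiver);
    }
}
```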
When the message is received by the composite, it is directed to the appropriate role and
contract. Within a contract, the message is interpreted, events are triggered and a service
invocation request may be generated as described in Section 7.1.2 and Section 7.1.3.
When a message is ready to be sent to an external player or service from a role, the ROAD
Message Sender picks up the message placed in the OutQueue (Section 5.4) of the role and
places it on the outgoing path of the Axis2 Message Processing Model by specifying the
endpoint address of the player. Consequently the message will go through another set of
handlers in the outgoing path to the appropriate Transport Sender. Then the Transport Sender
makes sure that the message is sent to the specified external service (endpoint).
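The outgoing path can be sketched as a loop that drains a role's OutQueue and hands each message to a transport with its endpoint address. The Transport interface and class names are stand-ins for the Axis2 machinery, assumed for illustration only.

```java
// Simplified sketch of the outgoing path: the sender drains the role's
// OutQueue and dispatches each message to its player endpoint.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface Transport {
    void send(String endpoint, String payload);
}

class OutgoingMessage {
    final String endpoint;
    final String payload;

    OutgoingMessage(String endpoint, String payload) {
        this.endpoint = endpoint;
        this.payload = payload;
    }
}

class MessageSenderLoop {

    private final BlockingQueue<OutgoingMessage> outQueue = new LinkedBlockingQueue<>();
    private final Transport transport;

    MessageSenderLoop(Transport transport) {
        this.transport = transport;
    }

    void enqueue(OutgoingMessage m) {
        outQueue.add(m);
    }

    // Dispatches every currently queued message to its endpoint.
    void drainOnce() {
        OutgoingMessage m;
        while ((m = outQueue.poll()) != null) {
            transport.send(m.endpoint, m.payload);
        }
    }
}
```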
As illustrated above, the complete Serendip/ROAD implementation provides service
orchestration capabilities to Axis2 users by hooking into the Axis2 runtime without any
modifications to the Axis2 code base.
7.2.5 Message Delivery Patterns
The previous ROAD framework provides two types of message delivery mechanisms, i.e., Push
and Pull [81]. The Push mechanism is used to push the messages to the player from the role,
whereas the Pull mechanism is for a player to pull the messages from the role. The pull
mechanism is useful when the player cannot provide a static endpoint but is willing to pull the
messages when ready and required. A special synchronous operation called getNextMessage() is
exposed in each of the provided role interfaces for a player to retrieve messages.
The work of this thesis further extends these message delivery mechanisms as shown in
Figure 7-17. A set of message delivery patterns is formed based on,
What the message delivery mechanism is (Push/Pull),
What the communication synchronisation mode is (Asynchronous/Synchronous), and
Who initiates the communication (Role/Player).
The patterns A-D are for push message delivery, whereas pattern E is designed to support
the pull mechanism of the previous ROAD framework. For all these patterns, the delivery
mechanism in ROAD4WS works only with the queues outside the membrane of the composite
(Section 5.4). The buffers inside the membrane are used for the internal communication of the
composite only. This clearly separates the internal communication of the composite or
organisation from the external communication. Figure 7-18 illustrates how the message
delivery mechanism supports these patterns.
Figure 7-17. Message Delivery Patterns
Figure 7-18. Message Delivery Mechanism
Pattern A: Role sends a message to the player and does not wait for the response (the
HTTP connection is not alive). Firstly, the message is taken (#1) from the OutQueue
if available and then delivered (#2) to the player (service) via the ROAD Message
Sender.
Pattern B: Player sends a message to the role and does not wait for the response (the
HTTP connection is not alive). The message is received (#1) by the ROAD Message
Receiver and then placed (#2) in the PendingInQueue.
Pattern C: Role sends a message to the player and waits (busy wait) for a response
(HTTP connection is kept alive). Firstly, the message is taken (#1) from the
OutQueue and then delivered (#2) via the ROAD Message Sender, which waits for the
response. When the response is received (#3), it is placed (#4) in the
PendingInQueue.
Pattern D: Player sends a message to the role and waits (busy wait) for a response.
The player invokes an operation of the role (#1). Then the ROAD Message Receiver
places (#2) the message in the PendingInQueue and busily waits for the response (HTTP
connection is kept alive). When the response is available at the OutQueue, the
message is collected (#3) and sent as a response (#4) to the player's still-alive
request. The busy wait uses the properties of the message (i.e., messageId and
isResponse) to determine whether an arrived message is a response to the request from the
player or not.
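The correlation check at the heart of that busy wait can be sketched as follows, using a hypothetical, radically simplified Message class (the actual implementation handles full SOAP messages; only the two properties named above matter here):

```java
// Correlation check behind the Pattern D busy wait: an arrived message
// completes the wait only if it is flagged as a response and carries
// the awaited messageId. Message is an illustrative simplification.
class Message {
    final String messageId;
    final boolean isResponse;

    Message(String messageId, boolean isResponse) {
        this.messageId = messageId;
        this.isResponse = isResponse;
    }
}

class ResponseMatcher {

    static boolean isResponseTo(Message arrived, String requestMessageId) {
        return arrived.isResponse && arrived.messageId.equals(requestMessageId);
    }
}
```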
Pattern E: This pattern is implemented to support the pull-based message delivery
mechanism in ROAD. In order to support this pattern a special operation called
getNextMessage() is exposed in the provided interface of the role. The ROAD
Message Receiver behaves differently when a request from the player for the above
operation is received (#1). Instead of placing the message in the PendingInQueue as
in the case for Pattern D, the ROAD Message Receiver checks the OutQueue of the
role for any available push messages (#2). If available, the message is collected (#3)
and attached to the response path to be sent back to the player (#4). If not, a fault
message is sent back to the player as a response (#4). This pattern is useful when the
player cannot provide a static endpoint to the composite, yet still wants to process the
messages arriving at the role it plays.
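The server-side handling of getNextMessage() in Pattern E can be sketched as below. The class is an illustrative simplification (assumed names), and a null return stands in for the fault message sent when the OutQueue holds nothing for the player.

```java
// Sketch of Pattern E's getNextMessage() handling: check the role's
// OutQueue and hand over the next waiting message, or signal (null)
// that a fault response should be sent instead.
import java.util.ArrayDeque;
import java.util.Deque;

class PullEndpoint {

    private final Deque<String> outQueue = new ArrayDeque<>();

    // A message destined for the endpoint-less player is queued here.
    void queueForPlayer(String message) {
        outQueue.add(message);
    }

    // Steps #2-#4: collect the next message if available; null maps to
    // the fault response in the real implementation.
    String getNextMessage() {
        return outQueue.poll();
    }
}
```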
7.3 Tool Support
The Serendip framework has a set of tools for software engineers to model, monitor and
adapt service orchestrations in an effective manner. The tool support can be
divided into three categories depending on its use, i.e., Modelling, Monitoring and
Adaptation. Sections 7.3.1, 7.3.2 and 7.3.3 describe the tool support for
these aspects respectively.
7.3.1 Modelling Tools
The motivation of the Serendip modelling tool is to allow a software engineer to model a
Serendip orchestration based on the SerendipLang as presented in Chapter 4. The language
conforms to the Serendip orchestration meta-model and concepts. However, the deployment
environment (Section 7.2) accepts the ROAD descriptions in a unified format which is XML. In
addition, there are a number of artifacts that need to be associated with a ROAD descriptor file,
including the rules and transformation templates. To support orchestration modelling in
Serendip and to automatically generate the artifacts required for enactment, a modelling tool
has been developed as an Eclipse plug-in.
This plug-in modelling tool, called the SerendipLang Editor, integrates with the Eclipse
development environment. It supports a software engineer in developing Serendip
orchestrations in a comprehensive and effective manner. As shown in Figure
7-19, the tool consists of an editor with autocomplete support, context menu support, and
syntax and error highlighting capabilities.
When a Serendip descriptor (source file), e.g., RoSAS.sdp (Figure 7-19), is saved in the
editor, a deployable XML descriptor is automatically generated. In addition, the required
artifacts (i.e., the contractual rule (*.drl) and XSLT (*.xsl) templates) are also automatically
generated. Later, the rule templates (*.drl) can be edited using the Drools IDE23 to include the
business policies (that evaluate and interpret the contractual interactions). The transformation
templates (*.xsl) can be edited using the EclipseXSLT24 editor to include the message
transformation logic. Both the Drools IDE and the EclipseXSLT editor are freely available
Eclipse-based plug-ins. Finally, the complete set of artifacts can be deployed in
ROAD4WS as described in Section 7.2.
23 http://docs.jboss.org/drools/release/5.2.0.Final/drools-expert-docs/html/ch08.html 24 http://eclipsexslt.sourceforge.net/
Figure 7-19. SerendipLang Editor – An Eclipse Plugin
Figure 7-20. Generating ROAD Artifacts
The tool is implemented using the Xtext 2.025 and Xtend 2.026 frameworks. Xtext supports the
implementation of parsers and editors for a given grammar of a Domain Specific Language
(DSL). The grammar of the SerendipLang is available in Appendix A. Xtend is used to
generate the XML descriptor, the rule templates and transformation templates. Xtend is a
statically typed template language, which allows dynamic code generation from Xtext-based
models. When a software engineer writes the Serendip orchestration description, the AST
(Abstract Syntax Tree) is automatically created as an EMF model [324] as shown in Figure
7-20. This EMF model is used by Xtend to automatically generate the deployable XML
descriptor, the Drools templates and the XSLT templates. Both Xtext and Xtend are Eclipse-based
frameworks that integrate well with Eclipse-based technologies such as EMF27.
25 http://www.eclipse.org/Xtext/ 26 http://www.eclipse.org/xtend/ 27 www.eclipse.org/emf/
7.3.2 Monitoring Tools
When a Serendip orchestration is modelled and deployed, the running processes need to be
monitored. To do so, the Serendip Monitoring Tool has been implemented. A screenshot of the
tool is given in Figure 7-22. The Serendip Monitoring Tool is started when a composite is
instantiated. The tool can be used to provide an overview of the available process definitions
and behaviours of the service composite and to view the log messages that are generated during
the runtime. However, the most important feature of the monitoring tool is its capability to
visualise the running process instances.
Figure 7-21. Monitoring the Serendip Runtime
Figure 7-22. Serendip Monitoring Tool
The tool reads the current Serendip runtime models and provides the visual representation of
the running processes as shown in Figure 7-21. Individual process instances can be selected and
viewed. The visualisation of the process instances is achieved using the dynamically constructed
EPC graphs (Section 5.5). This feature is useful because individual process instances may
have deviated from the original process definition due to ad-hoc changes (Section 6.3). A
process instance-specific view provides details such as completed tasks and triggered events
(Figure 7-22). In particular, this feature is useful for long running processes. This work reuses
the libraries of the ProM framework [279] with modifications to integrate with Serendip.
7.3.3 Adaptation Tools
During the runtime, adaptations on a Serendip orchestration are carried out via the organiser
role as discussed in Chapter 6. The organiser player can either operate remotely in a
networked environment or be located locally on the same server where the service orchestration is
deployed. As shown in Figure 7-23, irrespective of where the organiser player is located, all the
adaptation decisions need to go through the organiser role.
In the case of a remotely located organiser player, the adaptations are carried out via the
exposed operations of the organiser role’s WSDL interface or the Organiser service proxy
(Section 7.2.3). The individual operations can be used to manipulate the service orchestration.
Alternatively, the more powerful script-based batch mode adaptation mechanism (Section 6.3.3) can
be used through the executeScript() operation. Any Web service invocation tool compatible with
the WSDL standard, e.g., Web service Explorer28 (available as an eclipse tool), can be used to
invoke the operations of the organiser interface.
In the case of a locally located organiser player, the player can use the Serendip Adaptation
Tool as a desktop application. Such a desktop application shares the Serendip runtime and
allows speedy and efficient adaptation without any network delays or the
overhead of Web service calls. The Serendip Adaptation Tool can be initiated via the Serendip
Monitoring Tool (Section 7.3.2). For example, if a software engineer is monitoring the progress
of a process instance and needs to make the selected instance deviate from the original
description, he/she can select the process and initiate the Adaptation Tool.
A screenshot of the Serendip Adaptation Tool is given in Figure 7-24. Once initiated, the
adaptation script is written in the scripting window or loaded from a pre-written script file. The
script shown is the sample adaptation script given in Section 6.3.3. Once the script is
executed, the console area shows whether the execution is successful or not. If unsuccessful, the
possible reasons for failure due to syntax errors or constraint violations are shown in the
Console area. A preview of the new dependencies of the adapted process instance can also be
seen as an EPC diagram in the Preview area to the left of the screen.
In order to assist with the writing of adaptation scripts, an eclipse-based script editor called
the Serendip Adaptation Script Editor has been developed. A screenshot of the editor is given in
Figure 7-25. This editor can be used to write adaptation scripts with error and syntax
highlighting support. Once the script is written it can be saved in the file system as a *.sas file.
This file can be loaded by the Adaptation Tool (Figure 7-24) or the contents can be copied to
the Scripting Window of the Adaptation Tool as described earlier. Alternatively, if the
28 http://www.eclipse.org/webtools/jst/components/ws/M4/tutorials/WebServiceExplorer.html
Organiser is remotely located, the script content can be sent (e.g., via an FTP client) to the
server and executed via the executeScript() or scheduleScript() operations exposed in the
organiser interface.
Figure 7-23. Remote and Local Organiser Players
Figure 7-24. The Adaptation Tool
Figure 7-25. Serendip Adaptation Script Editor– An Eclipse Plugin
7.4 Summary
This chapter has presented the implementation details of the Serendip framework. The complete
framework is separated into three main parts, i.e., the Serendip-Core, the Deployment
environment and the Tool Support. The Serendip-core was implemented in a deployment-
independent manner to ensure that the core could be used in multiple deployment environments
apart from the currently supported SOAP/Web services deployment environment. The Serendip-
core consists of several interrelated components that provide support for process enactment and
adaptation management.
The deployment environment, known as ROAD4WS, was developed by extending the Apache
Axis2 Web services engine to provide adaptive service orchestration capabilities. One of the
important characteristics of ROAD4WS is its capability to support various message exchange
patterns. Another important characteristic from the implementation point of view is that the
adaptive service orchestration capabilities are introduced without requiring modifications to the
Axis2 code base. The challenges faced and the solutions applied have been discussed.
Finally, the available tool support is presented. The tools have been developed to model,
monitor and adapt Serendip processes. The tools have been implemented using a range of
available technologies, while ensuring compatibility with existing standards.
Chapter 8
Case Study
We presented an example business organisation in Chapter 2, i.e., RoSAS, which provides roadside
assistance to motorists by integrating and repurposing a number of services. RoSAS requires an
adaptable service orchestration approach to provide the IT support for its business processes,
which may change in an unpredictable manner due to the nature of its operating environment. In this
chapter we present RoSAS' roadside assistance business as a case study to demonstrate how
Serendip can be used to model, enact and manage service orchestrations that have an inherent need
for unpredictable changes.
The objectives of this case study are as follows.
To provide a thorough understanding of the Serendip Language and its supporting
Serendip Orchestration Framework.
To show how the Serendip concepts can be applied to improve the adaptability of a
service orchestration.
To provide a set of guidelines to model and design a Serendip service orchestration
and generate the deployment artifacts using the available tool support.
A Serendip orchestration separates its organisational structure from its processes. Firstly,
Section 8.1 presents how the organisational structure is modelled to implement the RoSAS
business scenario. Secondly, Section 8.2 presents how the processes are defined on top of the
organisational structure. How the deployment artifacts (such as contractual rules and
transformations templates) are specified is explained in Section 8.3. Overall, the first three
sections provide a detailed analysis on how to design and implement a service orchestration
using Serendip. Specific guidelines on how to use the Serendip concepts to model the service
orchestration are presented. Then these guidelines are clarified using the examples from the case
study. The complete service orchestration descriptor for the case study is available in Appendix
B.
Finally, in Section 8.4, the adaptation capabilities of the framework are introduced, and are
demonstrated through several adaptation scenarios from the case study to address specific
adaptation requirements in a service orchestration.
A reader may find some of the discussions in this chapter familiar, as various fragments of the
case study system have been used throughout this thesis for clarification via examples. This
chapter, however, aims to present the case study in a comprehensive manner to demonstrate how
the Serendip approach can be systematically applied to a business scenario.
8.1 Defining the Organisational Structure
We follow a top-down approach to design the organisational structure. First, we design the
high-level composite structure in terms of its roles and contracts in Section 8.1.1.
Then the required interactions of the composite are specified in the defined contracts as
described in Section 8.1.2.
8.1.1 Identifying Roles, Contracts and Players
The first step of realising the RoSAS business scenario in Serendip is to draft a high level
design of the composite structure. This design provides an abstract representation of the roles
and the relationships (i.e., the contracts) between the roles.
The following guidelines need to be followed to draft the high-level design.
G_811.1. For each representation requirement of a participating entity, allocate a new
(functional) role Ri in the organisation29.
G_811.2. If a role R1 needs to interact with or has an obligation to another role R2, define
a contract R1_R2 between them.
Following these guidelines, in the RoSAS scenario there are motorists (MM), Case-Officers
(CO), Tow-Trucks (TT), Taxis (TX) and Garages (GR) as participants. A role should be
allocated to each and every participant (G_811.1). For the RoSAS scenario, we define roles in
the RoSAS composite as shown in Figure 8-1. Then the contracts of the organisation need to be
defined according to the requirements of interactions and obligations (G_811.2). For example,
the Case-Officer (CO) has to send a towing request to the Tow-Truck (TT), who in response
acknowledges the towing. Therefore, a contract CO_TT needs to be established. On the other
hand, there are no interactions or obligations between the Taxis (TX) and Tow-Trucks (TT) for
the given scenario. Consequently there is no requirement to establish a contract between them.
(However, if TX and TT need to coordinate towing of the car and picking up the motorist then
there could be a contract between TX and TT). The other relationships or contracts for the
RoSAS business scenario are similarly defined. The high level design for the RoSAS
29 Definition of the organiser role is implicit.
organisation is shown in Figure 8-1. The corresponding Serendip description is given in Listing
8-1.
Figure 8-1. Design of the RoSAS Composite Structure
A role is played by a player (see the optional30 clause playedBy in Listing 8-1). If the player
needs to receive messages from the role, the player should implement the required interface
of the role (Section 7.2.4). For example, the task tTow of role TT needs to be accomplished by
invoking the currently bound player's service endpoint. Therefore, there should be an endpoint
specified in the player binding. On the other hand, the role MM does not need to maintain an
active endpoint, as its player is a client (Section 7.2.4) that is not actively contacted by the
composite. In that case it is not necessary to specify a player binding for role MM (Listing 8-1).
The initial player bindings with their endpoints can be specified as shown in Listing 8-2. These
player bindings can be changed dynamically at runtime.
In this design we do not allocate a role for each and every motorist (concrete player).
Therefore, any number of motorists can communicate through the role MM. We assume that the
identification of the exact player is carried out based on message contents (e.g., using a
registration number). However, for a Garage chain (concrete player), this design assigns a
concrete endpoint (e.g., a Web service that accepts repair requests). At any given time, only a
single player is bound to the Garage. It should be noted that all these are design decisions,
based on the level of abstraction that should be captured and the communication requirements,
i.e., whether RoSAS should initiate the communication (e.g., with the Garage) or respond to
players' requests (e.g., from motorists).
The high-level design of RoSAS defines the roles, or organisational positions. Nevertheless,
what actually defines a role in terms of its capabilities in the composite are its relationships
with other roles. A role by itself cannot explain the purpose of its existence; the relationships
a role maintains with other roles collectively define that purpose. For example, the purpose of
the role Case-Officer is defined by its interactions with and obligations to Members, Tow-
Trucks, Garages and Taxis. These relationships are captured by the relevant obligations of
30 The grammar of the Serendip Language is included in Appendix A.
interactions (Section 8.1.2). The progression of processes in RoSAS (Section 8.2) is defined by
how these interactions are interpreted by contracts to trigger events (Section 8.3.1), in light
of the current state of the service relationships. These events then act as pre-conditions to
initiate the tasks defined in each role.
Listing 8-1. Serendip Description of the RoSAS Composite Structure
Organization RoSAS;
Role CO is a 'CaseOfficer' playedBy copb {.../*Role description*/...}
Role GR is a 'Garage' playedBy grpb {.../*Role description*/...}
Role TT is a 'TowTruck' playedBy ttpb {.../*Role description*/...}
Role TX is a 'Taxi' playedBy txpb {.../*Role description*/...}
Role HT is a 'Hotel' playedBy htpb {.../*Role description*/...}
Role MM is a 'Member' {.../*Role description*/...}
Contract CO_MM {...}
Contract CO_TT {...}
Contract CO_GR {...}
Contract CO_TX {...}
Contract GR_TT {...}
Contract CO_HT {...}
Listing 8-2. Player Bindings
PlayerBinding copb "http://127.0.0.1:8080/axis2/services/S6COService" is a CO;
PlayerBinding ttpb "http://136.186.7.228:8080/axis2/services/S6TTService" is a TT;
PlayerBinding grpb "http://136.186.7.228:8080/axis2/services/S6GRService" is a GR;
PlayerBinding txpb "http://136.186.7.228:8080/axis2/services/S6TXService" is a TX;
PlayerBinding htpb "http://136.186.7.228:8080/axis2/services/S6HTService" is a HT;
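As noted above, player bindings can be changed dynamically at runtime. The following is a minimal Java sketch of the idea behind a rebindable player registry; the class and method names (PlayerRegistry, rebind, endpointOf) are hypothetical illustrations and are not part of the Serendip framework.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a registry mapping roles to player endpoints,
// allowing a binding to be swapped at runtime, as Serendip permits.
public class PlayerRegistry {
    private final Map<String, String> endpoints = new ConcurrentHashMap<>();

    // Bind (or re-bind) a role to a player's service endpoint.
    public void rebind(String role, String endpoint) {
        endpoints.put(role, endpoint);
    }

    // Resolve the endpoint currently bound to a role; null if the role
    // has no active endpoint (e.g., a pure client role such as MM).
    public String endpointOf(String role) {
        return endpoints.get(role);
    }
}
```

Under this sketch, rebinding the Garage role to a different chain would be a single `rebind("GR", ...)` call, with no change to process definitions.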
8.1.2 Defining Interactions of Contracts
A role interacts with adjoining roles to discharge its obligations. These interactions need to be
captured in the contracts identified above31. This provides an explicit representation of service
relationships among the participants of the composite. The following guidelines need to be
followed to define interactions.
G_812.1. For a given contract, define two roles (Role A and Role B) and identify the
possible interactions between the two roles. For each and every identified
interaction, define an interaction term (ITerm).
G_812.2. For each interaction term (ITerm), define
31 Contracts also capture the Rules (to evaluate the interactions) and Facts (to represent the contract
state). These will be discussed in Section 8.3.1 in detail.
a. The direction of the interaction (i.e., who initiates the interaction, either AtoB or
BtoA)
b. The data that should be part of the interaction (i.e., message parameters)
c. Whether the interaction is one way or two way (i.e., with or without a response)
The definition of the CO_TT contract is shown in Listing 8-3. It has two interaction terms
(G_812.1), one to order a tow-truck and another to send a payment for towing. The first term
orderTow specifies the interaction between the CO and TT to order the towing as follows:
- The direction of the interaction is AtoB, i.e., from CO to TT (G_812.2.a).
- The signatures of the request and the response (G_812.2.b).
- There is a response (withResponse) for the interaction (G_812.2.c).
The second term payTow specifies the interaction between the CO and TT to pay for the
towing as follows:
- The direction of the interaction is again AtoB, i.e., from CO to TT (G_812.2.a).
- The signature of the request (G_812.2.b).
- There is no response for the interaction (G_812.2.c).
Listing 8-3. The Contract CO_TT
Contract CO_TT {
  A is CO, B is TT; //The two roles bound by the contract, referred to as A and B.
  ITerm orderTow (String:pickupInfo) withResponse (String:ackTowing) from AtoB;
  ITerm payTow (String:paymentInfo) from AtoB;
  ...
}
The other contracts of the RoSAS composite can be similarly defined, and their details are
given in Appendix B.
8.2 Defining the Processes
Having defined the organisational structure, the next step is to define processes. In Serendip,
more than one business process can be defined in a service composition to achieve multiple
goals. These goals can arise due to the requirement of leveraging multiple service offerings as
well as to support the requirements of multiple tenants. RoSAS has identified three types of
business offerings (packages) to its tenants depending on different roadside assistance
requirements.
- Silver package: provides roadside assistance by offering towing and repair of the car.
- Gold package: provides roadside assistance by offering not only towing and repair of
the car but also a complementary taxi for the motorist to reach a destination in the
event of a breakdown.
- Platinum package: provides roadside assistance by offering towing, repair,
complementary taxi services and the option of complementary accommodation.
For each of these packages, a process needs to be defined. However, prior to writing down the
process definitions, the required organisational behaviours and the underlying tasks need to be
defined. Section 8.2.1 describes the guidelines to identify and define organisational behaviours.
Then Section 8.2.2 describes how to define the tasks in (the roles of) the organisation. Based on
these discussions, Section 8.2.3 describes how to define the business processes. In addition,
Section 8.2.4 shows how to capture the commonalities and variations among the processes
defined in an organisation.
8.2.1 Defining Organisational Behaviour
The organisational behaviour is represented by an array of behaviour units. When defining
behaviour units it is important to follow the guidelines below.
G_821.1. A behaviour unit needs to group related tasks so that a process definition can
refer to such a relatively independent unit of functions in the organisation.
G_821.2. A behaviour unit needs to be defined by considering the possibility of reuse.
This also helps in capturing the commonalities among multiple process definitions.
G_821.3. The task dependencies should be captured as event patterns via pre- and post-
event-patterns.
G_821.4. Any constraints to protect the integrity of organisational behaviour need to be
captured in behaviour units (Section 4.2.3).
The RoSAS business model requires that towing, repairing, taxi and accommodation
providing services are carried out as part of a complete roadside assistance process. In addition,
there should be a way to complain about a car breakdown. These are the behaviour units of the
organisation.
Each behaviour unit, as part of the captured organisation behaviour, defines the dependencies
among tasks that should be performed by role players. For example, the role Tow-Truck (TT) is
obliged to perform the task tTow whilst the role Case-Officer (CO) is obliged to perform the
payment by executing task tPayTT. The towing behaviour unit groups such tasks and provides
the required abstraction and encapsulation to be used in business processes (G_821.1, G_821.2).
In general, we have the behaviour units of bComplaining, bTowing, bRepairing, bTaxiProviding
and bAccommodationProviding as defined in Listing 8-4. Please refer to Appendix B to view all
the behaviour units of the RoSAS organisation.
Task dependencies are captured by the event-patterns (G_821.3). For example, behaviour unit
bTowing specifies that tPayTT (of the role CO) will be initiated upon the event eTowSuccess,
which is triggered by tTow. When tPayTT is completed, the event eTTPaid is triggered.
Similarly, all the other task dependencies (via events) are captured in the relevant behaviour
units.
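To make the event-pattern semantics concrete, the sketch below evaluates a flat, single-operator pattern against the set of events triggered so far, reading `*` as conjunction and `^` as exclusive choice, consistent with the patterns in Listing 8-4. This is a hypothetical illustration, not the framework's actual evaluator; the class EventPattern and its method are invented names, and nested patterns are not handled.

```java
import java.util.Set;

// Hypothetical evaluator for simple, single-operator event patterns such as
// "eTowReqd * eDestinationKnown" (all events required) or
// "eTowSuccess ^ eTowFailed" (exactly one of the events required).
public class EventPattern {
    public static boolean isSatisfied(String pattern, Set<String> triggered) {
        if (pattern.contains("*")) {               // conjunction: every event must have fired
            for (String e : pattern.split("\\*")) {
                if (!triggered.contains(e.trim())) return false;
            }
            return true;
        }
        if (pattern.contains("^")) {               // exclusive choice: exactly one fired
            int count = 0;
            for (String e : pattern.split("\\^")) {
                if (triggered.contains(e.trim())) count++;
            }
            return count == 1;
        }
        return triggered.contains(pattern.trim()); // atomic event
    }
}
```

Under this reading, tTow becomes initiable only once both eTowReqd and eDestinationKnown have fired.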
Apart from the tasks, a behaviour unit also captures the behaviour constraints (G_821.4).
These constraints ensure that the integrity of the behaviour is protected when there are
runtime modifications (Section 4.4). For example, the repairing behaviour (bRepairing) needs to
ensure that a payment is made when a repair is completed. This is a constraint that should not be
violated by any modification. Accordingly, this constraint is captured in the behaviour unit
bRepairing as shown in Listing 8-4. The constraint (in TCTL [254]) specifies that the
eRepairSuccess event must be followed by an eGRPaid event. Similarly, other constraints of
organisational behaviour are captured in the respective behaviour units.
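To illustrate what a constraint of the form (eA>0)->(eB>0) demands, the hypothetical check below evaluates the implication over event counts recorded for a process. It is a simplified sketch over final counts, with invented names (ConstraintCheck, holds), not the framework's TCTL-based verification.

```java
import java.util.Map;

// Hypothetical checker for constraints of the shape "(eA>0)->(eB>0)":
// if event eA was triggered at least once, event eB must also have been.
public class ConstraintCheck {
    public static boolean holds(String antecedent, String consequent,
                                Map<String, Integer> counts) {
        int a = counts.getOrDefault(antecedent, 0);
        int b = counts.getOrDefault(consequent, 0);
        return a == 0 || b > 0;   // material implication: a > 0 implies b > 0
    }
}
```

For bRepairing_c1, a run in which eRepairSuccess fired but eGRPaid never did would violate the constraint.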
Listing 8-4. Behaviour Units of RoSAS Organisation
Behaviour bComplaining {
  TaskRef CO.tAnalyse { InitOn "eComplainRcvd"; Triggers "eTowReqd * eRepairReqd"; }
  TaskRef CO.tNotify { InitOn "eMMNotif"; Triggers "eMMNotifDone"; }
}
Behaviour bRepairing {
  TaskRef GR.tAcceptRepairOrder { InitOn "eRepairReqd"; Triggers "eRepairAccept * eDestinationKnown"; }
  TaskRef GR.tDoRepair { InitOn "eTowSuccess"; Triggers "eRepairSuccess ^ eRepairFailed"; }
  TaskRef CO.tPayGR { InitOn "eRepairSuccess"; Triggers "eGRPaid * eMMNotif"; }
  Constraint bRepairing_c1: "(eRepairSuccess>0)->(eGRPaid>0)";
}
Behaviour bTowing {
  TaskRef TT.tTow { InitOn "eTowReqd * eDestinationKnown"; Triggers "eTowSuccess ^ eTowFailed"; }
  TaskRef CO.tPayTT { InitOn "eTowSuccess"; Triggers "eTTPaid"; }
  Constraint bTowing_c1: "(eTowSuccess>0)->(eTTPaid>0)";
}
Behaviour bTaxiProviding {
  TaskRef CO.tPlaceTaxiOrder { InitOn "eTaxiReqd"; Triggers "eTaxiOrderPlaced"; }
  TaskRef TX.tProvideTaxi { InitOn "eTaxiOrderPlaced"; Triggers "eTaxiProvided"; }
  TaskRef CO.tPayTaxi { InitOn "eTaxiProvided"; Triggers "eTaxiPaid"; }
}
Behaviour bAccommodationProviding {
  TaskRef CO.tHotelBooking { InitOn "eAccommoReqd"; Triggers "eAccommoReqested"; }
  TaskRef HT.confirmBooking { InitOn "eAccommoReqested"; Triggers "eAccommoBookingConfirmed"; }
  TaskRef CO.tPayHotel { InitOn "eAccommoBookingConfirmed"; Triggers "eHotelPaid"; }
}
8.2.2 Defining Tasks
The behaviour units defined above refer to and orchestrate tasks that are performed by the
roles and their corresponding players. These tasks need to be defined in the obligated roles. The
guidelines below need to be followed in defining tasks.
G_822.1. For each task referred in a behaviour unit, define a task in the obligated role.
G_822.2. For a defined task, if required, specify what messages need to be used to
perform the task. List these source messages under clause UsingMsgs.
G_822.3. For a defined task, if required, specify what messages result when the task is
performed. List these resulting messages under the clause ResultingMsgs.
For example, the tTow task of the towing behaviour is to be carried out by the role TT
(Listing 8-4). So the task needs to be defined in role TT (G_822.1), as shown in Listing 8-5.
In order to perform tTow, the interactions CO_TT.orderTow.Req and
GR_TT.sendGRLocation.Req are used as source messages (G_822.2). TT's carrying out tTow
will result in the messages CO_TT.orderTow.Res and GR_TT.sendGRLocation.Res (G_822.3).
Similarly, all the tasks of all the roles can be defined.
Listing 8-5. Task Description
Role TT is a 'TowTruck' playedBy ttpb {
  Task tTow {
    UsingMsgs CO_TT.orderTow.Req, GR_TT.sendGRLocation.Req;
    ResultingMsgs CO_TT.orderTow.Res, GR_TT.sendGRLocation.Res;
  }
}
8.2.3 Defining Processes
A Serendip process is defined to achieve a business objective of a consumer (or a consumer
group). An organisation can have more than one process definition to serve multiple objectives
of multiple consumers. At runtime, processes are enacted based on these process
definitions. An important characteristic of Serendip is that a single, common composite
instance is reused to define multiple processes. The commonalities are captured in terms of
reusable behaviour units within a single organisation instance. This is important to support the
single-instance multi-tenancy required to develop SaaS applications [47, 89].
The guidelines below need to be followed in designing process definitions.
G_823.1. Define a new process definition to achieve the business objectives of each
consumer (consumer group).
G_823.2. For a given process definition,
a. Identify the behaviour units that need to be referenced.
b. Identify the conditions of start (CoS) to enact process instances. No two
process definitions should share the same CoS.
c. Identify the conditions of termination (CoT) to terminate process instances.
d. Identify the constraints that need to be imposed to protect the business goals.
G_823.3. Ensure the well-formedness of the defined process by constructing the process
graph via the Serendip runtime as described in Section 5.5.
In the RoSAS scenario, there are three types of consumer groups, silver, gold and platinum.
We will design three process definitions to serve the requirements of these three consumer
groups (G_823.1). Let us call these three definitions pdGold, pdSilv and pdPlat as shown in
Listing 8-6.
The next step is to identify the behaviours that need to be referenced by each process
definition (G_823.2.a). The Silver members require only towing and repairing functionalities,
whilst the Gold members require towing, repairing and taxi functionalities as part of roadside
assistance. In addition, the Platinum members require a complementary accommodation
service in a suitable hotel. These differences are captured in the process definitions pdSilv,
pdGold and pdPlat respectively, in terms of their references to behaviour units. Consequently,
pdSilv refers only to bComplaining, bTowing and bRepairing; pdGold refers to bComplaining,
bTowing, bRepairing and bTaxiProviding; and pdPlat refers to bComplaining, bTowing,
bRepairing, bTaxiProviding and bAccommodationProviding.
The intention of modelling reusable behaviour units is, in fact, to avoid the unnecessary
redundancy that would otherwise be introduced. As an alternative, separate process
definitions, without a behaviour layer, could be defined for each customer group by
directly referring to tasks. A service composition modelled in this way would suffer
unnecessary redundancy due to overlapping task references and dependencies. Defining process
definitions based on reusable behaviour units avoids such redundancy.
Once the references to behaviour units are determined, the conditions of start (CoS) and the
conditions of termination (CoT) for each process definition need to be defined (G_823.2.b and
G_823.2.c). The CoS is used by the orchestration engine to enact a new process instance of the
appropriate type. For example, if eComplainRcvdSilv is triggered, the engine knows that a new
process instance of pdSilv needs to be enacted. This event is triggered by the contractual rules in
the CO_MM contract by interpreting the request from the motorist to the Case-Officer, which
contains the member id or any other form of identification. This means that the complexity of
identifying the event to trigger (and thereby the type of definition) is handled by the rules
(Section 8.3.1). In the process layer, processes are enacted based on events such as
eComplainRcvdSilv. No two process definitions are allowed to have the same event as the CoS
to avoid enactment conflicts. For example, if both pdSilv and pdGold had the same CoS, the
engine could not determine which definition to use to enact a process instance. The CoT is
specified using a pattern of events capturing the safe condition under which a process can be
terminated. For example, (eMMNotifDone * eTTPaid * eGRPaid) ^ eExceptionHandled is the
CoT for process instances of pdSilv, which specifies that either the combination of events
eMMNotifDone, eTTPaid and eGRPaid, or the event eExceptionHandled, should be triggered.
Notably, the CoS is an (atomic) event, whereas the CoT may be an event pattern. The reason for
this difference is that the CoS is used by the enactment engine to instantiate a process instance.
Usually these initial events are triggered by the rules without a process instance id (pid); later
these events are assigned the pid of the instantiated process instance by the engine.
Therefore, it is not suitable to correlate two such CoS (initial) events, as doing so can lead to
misinterpretations by the engine. On the other hand, the CoT is used by a process instance
within its scope to determine when it should terminate (and release resources and self-destroy).
The events used in the CoT are triggered with a specific process instance id and therefore can
be correlated to form an event pattern.
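The routing role of the CoS described above can be sketched as a lookup from an initial event to the process definition it enacts. The sketch below is hypothetical (EnactmentRouter is an invented name, not a framework class); it also enforces the rule that no two definitions may share a CoS.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of CoS-based enactment routing: each process
// definition registers its (unique) condition-of-start event, and the
// engine uses the event name to decide which definition to enact.
public class EnactmentRouter {
    private final Map<String, String> cosToDefinition = new HashMap<>();

    public void register(String cosEvent, String processDefinition) {
        // No two process definitions may share the same CoS.
        if (cosToDefinition.containsKey(cosEvent)) {
            throw new IllegalArgumentException("Duplicate CoS: " + cosEvent);
        }
        cosToDefinition.put(cosEvent, processDefinition);
    }

    // Returns the definition to enact for an initial event, or null if none.
    public String definitionFor(String cosEvent) {
        return cosToDefinition.get(cosEvent);
    }
}
```

In this sketch, an eComplainRcvdSilv event resolves to pdSilv, and attempting to register a second definition under the same CoS is rejected outright.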
Then, the process-level constraints that should be preserved need to be defined in the relevant
process definitions. Like the behaviour constraints, the process constraints are specified in
the TCTL language. For example, the constraint specified in the pdGold process states that the
event eComplainRcvdGold should eventually be followed by the eMMNotif event.
Finally, the well-formedness of the defined process needs to be ensured (G_823.3), as there
can be problematic dependencies among the tasks of the referenced behaviour units. These
problems can be detected via the EPC-based well-formedness check (Section 5.5). If a
problematic dependency is found, it needs to be resolved first. For example, the task tTow
specifies its pre-event-pattern as eTowReqd * eDestinationKnown. Unless both of these events
are triggered by one of the referenced tasks, tTow will not initiate. Yet the event
eDestinationKnown is NOT triggered within the behaviour unit bTowing. This is a problematic
dependency. As shown in Listing 8-6, all the process definitions use the behaviour unit
bRepairing along with bTowing in combination, so that the triggering of eDestinationKnown is
always possible. If a new process definition is defined using bTowing, then at least one of the
other behaviour units should contain a task that triggers the event eDestinationKnown (e.g., a
new behaviour that allows a motorist to report the destination address). It is advisable to limit
the number of dependencies on other behaviour units. However, to provide better flexibility,
the dependencies are specified declaratively and are not confined within a behaviour unit by
strict entry and exit conditions, somewhat compromising the modularity of behaviour units. If
required, events triggered in other behaviour units can be used. In fact, there should be at least
one such dependency (otherwise the tasks of a behaviour unit are not connected to the rest of
the process). However, it is not recommended to overly connect the tasks of two behaviour
units. Such over-connection may indicate that the two behaviour units should be combined into
one.
Listing 8-6. Process Definitions
ProcessDefinition pdSilv {
  CoS "eComplainRcvdSilv";
  CoT "(eMMNotifDone * eTTPaid * eGRPaid) ^ eExceptionHandled";
  BehaviourRef bComplaining;
  BehaviourRef bTowing;
  BehaviourRef bRepairing;
  Constraint pdSilv_c1: "(eComplainRcvdSilv>0)->(eMMNotif>0)";
}
ProcessDefinition pdGold {
  CoS "eComplainRcvdGold";
  CoT "(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid) ^ eExceptionHandled";
  BehaviourRef bComplaining;
  BehaviourRef bTowing;
  BehaviourRef bRepairing;
  BehaviourRef bTaxiProviding;
  Constraint pdGold_c1: "(eComplainRcvdGold>0)->(eMMNotif>0)";
}
ProcessDefinition pdPlat {
  CoS "eComplainRcvdPlat";
  CoT "(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid * eHotelPaid) ^ eExceptionHandled";
  BehaviourRef bComplaining;
  BehaviourRef bTowing;
  BehaviourRef bRepairing;
  BehaviourRef bTaxiProviding;
  BehaviourRef bAccommodationProviding;
  Constraint pdPlat_c1: "(eComplainRcvdPlat>0)->(eMMNotif>0)";
}
8.2.4 Specialising Behaviour Units
Major commonalities and variations among the process definitions can be captured by referring
to common behaviour units or by referring to different behaviour units as discussed in Section
8.2.1 and Section 8.2.3. However, minor variations of the same behaviour can be captured using
the behaviour specialisation feature (Section 4.2.4) without leading to unnecessary redundancy.
In order to use the behaviour specialisation feature, the guidelines below need to be followed.
G_824.1. Common tasks need to be specified in the parent behaviour unit.
G_824.2. Tasks to be specialised need to be defined in the child behaviour units to allow
variations of the same behaviour.
G_824.3. The process that requires slightly deviated behaviours needs to refer to the
specialised child behaviour unit.
For instance, suppose the platinum members require that when towing is completed (i.e., when
task tTow is completed), an update is sent to the motorist. In order to support this requirement,
an alternative towing behaviour needs to be defined. The platinum process can refer to the
alternative behaviour whilst the other processes refer to the default towing behaviour.
As shown in Listing 8-7, the behaviour bTowingAlt extends the behaviour bTowing presented
in Listing 8-4. The bTowing behaviour captures the common tasks for all three processes
(G_824.1). The child behaviour (i.e., bTowingAlt) defines the specialised tasks (G_824.2). Then
the pdPlat process can replace the reference to bTowing with the bTowingAlt as shown in
Listing 8-8 (G_824.3). The other processes pdGold and pdSilv can still keep their references to
bTowing. Such replacement can be done via adaptation operations similar to Scenario 842.1
explained in Section 8.4.2.
Due to this specialisation, the pdPlat process contains two tasks inherited from the parent and
one additionally specified task (tAlertTowDone) from the child, i.e., the variation in
bTowingAlt. The behaviour variation and specialisation are illustrated in Figure 8-2.
Listing 8-7. Behaviour Specialisation
Behaviour bTowingAlt extends bTowing {
  TaskRef CO.tAlertTowDone { InitOn "eTowSuccess"; Triggers "eMemberTowAlerted"; }
}
Listing 8-8. pdPlat Uses The Specialised Behaviour
ProcessDefinition pdPlat {
  ...
  BehaviourRef bTowingAlt; //Replaces the earlier reference to bTowing
  ...
}
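The effect of the extends relation can be sketched as a union of task references: a child behaviour unit inherits the parent's task refs and adds its own. The helper below is a hypothetical illustration of this inheritance semantics only (the class and method names are invented).

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of behaviour specialisation: the effective task set
// of a child behaviour unit is the parent's tasks plus the child's own.
public class BehaviourSpecialisation {
    public static Set<String> effectiveTasks(Set<String> parentTasks,
                                             Set<String> childTasks) {
        Set<String> merged = new LinkedHashSet<>(parentTasks);
        merged.addAll(childTasks);
        return merged;
    }
}
```

Applied to the example, bTowingAlt's effective task set would be {TT.tTow, CO.tPayTT} inherited from bTowing plus the added CO.tAlertTowDone.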
Figure 8-2. Capturing Commonalities and Variations via Behaviour Specialisation
8.3 Message Interpretations and Transformations
At runtime, events need to be generated from contractual rules (Section 5.3.2) and
messages need to be transformed across the membrane (Section 5.4.2). The Serendip engine
uses these rules and transformations to progress the service orchestration at runtime. The next
two subsections discuss how to write the rules and transformations to support the runtime
operation of RoSAS.
8.3.1 Specifying Contractual Rules
The contractual rules are specified in Drools language [176] as part of contracts. The Serendip
framework has integrated the Drools Expert engine [308] to interpret the messages that pass
across the contracts (Section 5.3.2). The guidelines for defining contractual rules are as follows.
The generation of the rule templates is automated via the Serendip Modelling Tool (Section
7.3.1), which follows these guidelines to generate the rule templates for each contract in the
composite.
G_831.1. For each and every contract in the composite, allocate a single Drools rule file.
A Drools rule file may contain multiple rules and multiple facts.
G_831.2. For each and every interaction in a contract, allocate rules to evaluate the
corresponding messages. If the interaction is one-way, allocate at least one rule
to evaluate the corresponding message. If the interaction is two-way, allocate at
least two rules to evaluate the corresponding message in each direction, i.e., the
request path and the response path.
The Serendip Modelling Tool uses the naming convention <contract_id>.drl for rule files.
Accordingly the rule files, CO_MM.drl, CO_TT.drl, CO_GR.drl, GR_TT.drl, CO_TX.drl and
CO_HT.drl corresponding to each and every contract in the descriptor will be generated
(G_831.1). Once the rule templates are generated by the Serendip modelling tool, a software
engineer has to write the business logic to interpret these interactions. This includes adding the
additional conditions and actions to existing rules or introducing new rules.
Listing 8-9 shows the rule file written for the contract CO_TT. The contract CO_TT contains
a single two-way operation and a single one-way operation. Therefore, the Serendip modelling
tool generates three rules (the minimum number of rules required for the contract as per
G_831.2) to evaluate,
- The request path of orderTow: operationName == "orderTow", response == false
- The response path of orderTow: operationName == "orderTow", response == true
- The request path of payTow: operationName == "payTow", response == false
If it is required to evaluate the messages in terms of message content or the state of the
contract, additional rules are usually written. In this thesis, we primarily use contractual rules to
trigger events. For example, the first rule in Listing 8-9 triggers the eTowReqd event by
evaluating the request path (response==false) of the interaction orderTow. The second and third
rules both32 evaluate the response path (response==true) of the interaction orderTow. In
addition, these rules evaluate the message content (using a domain-specific Java library) to
check whether the content (e.g., of a SOAP [22] element) indicates that the towing was
successful. Depending on the outcome of the evaluation (see the eval clause in Listing 8-9),
either the eTowSuccess
32 Drools rules are condition-action rules. In order to specify if-then-else type rules, it is
required to capture both the if and the else conditions in two different rules with opposing
condition(s).
or eTowFailed events are triggered. The fourth rule evaluates the one-way payTow interaction
and triggers the event eTTPaid.
Listing 8-9. Drools Rule File (CO_TT.drl) of CO_TT Contract
/** 1. Rule to evaluate the interaction orderTow on the request path **/
rule "orderTowRequestRule"
when
    $msg : MessageRecieved(operationName == "orderTow", response == false)
then
    $msg.triggerEvent("eTowReqd");
end

/** 2. Rule to evaluate the interaction orderTow on the response path, when towing is successful **/
rule "orderTowResponseRule-Valid"
when
    $msg : MessageRecieved(operationName == "orderTow", response == true)
    eval(true == DroolsUtil.evaluate($msg))
then
    //Everything is fine, trigger the success event
    $msg.triggerEvent("eTowSuccess");
end

/** 3. Rule to evaluate the interaction orderTow on the response path, when towing has failed **/
rule "orderTowResponseRule-Invalid"
when
    $msg : MessageRecieved(operationName == "orderTow", response == true)
    eval(false == DroolsUtil.evaluate($msg))
then
    //Something is wrong, trigger the failure event
    $msg.triggerEvent("eTowFailed");
end

/** 4. Rule to evaluate the interaction payTow on the request path **/
rule "payTowRequestRule"
when
    $msg : MessageRecieved(operationName == "payTow", response == false)
then
    $msg.triggerEvent("eTTPaid");
end
In addition to the above rules, as discussed in Section 5.3.2, the current context of the service
relationship also influences the final outcome of the evaluation. In Drools, Facts (Java Objects)
[312] can be used to capture the current context of the service relationship. These facts can be
updated and used in the message interpretation process. Let us take the example described in
Section 5.3.2, where bonus payments need to be made if the number of towing requests has
exceeded a certain threshold value, e.g., 20. This is implemented via the rules shown in Listing
8-10. A Fact called TowCounter has been declared to keep track of the number of towing requests made.
The rule PaySomeBonus uses not only the attributes of an interaction message but also the
count attribute of the TowCounter fact as a condition. If the conditions of the rule evaluate to
true, the event eTowBonusAllowed is triggered and the counter is reset. The second rule,
IncrementTowCount, increments the counter for every other tow request. As mentioned in
Section 7.1.2, the runtime (via the Event Observer) makes sure that the triggered events conform
to the respective post-event patterns of tasks.
The rules can also be used to evaluate process instance-specific conditions. This is vital for
supporting runtime deviations of process instances to satisfy ad-hoc business requirements.
Such deviations may add additional events that should be triggered as part of task executions.
For example, suppose that an additional bonus payment needs to be made to a specific process
instance pdGold034 due to a prior negotiation. Suppose that a new task has been added to the
process instance with a pre-condition of event eTTAdditionalPaymentAllowed. However, unless
this event is triggered, the task will not be initiated. Furthermore, the event needs to be triggered
only for the process instance pdGold034.
In order to support this requirement, a new rule can be added to the CO_TT contract at
runtime, which will be triggered only when the process instance identifier (pId) of the
message is pdGold034. This new rule can be injected into the contract rule base (maintained in a
Drools Stateful Knowledge Session [312]) via the adaptation operation addContractRule() (see
Appendix C and Section 6.3). Once invoked, this operation merges the new rule with the
rule base maintained in the contract. Note that the rule contains an additional condition, as
shown in Listing 8-11. While the other rules are common to all process instances, this rule
fires only when the process identifier (pId) attribute of the message equals the string
"pdGold034". The rule can later be removed when it is no longer required or when the process instance terminates.
Contractual rules for other contracts of RoSAS can be similarly defined in respective rule
files. Overall, the integration of business rules allows the specification of complex business
logic to interpret the messages routed across the organisation over its contracts. Business rules
separate the message interpretation concerns from the process or control-flow concerns. The
control-flow concerns can be captured in the processes, while the more complex message
interpretation and business decision making concerns can be captured in the contractual rules.
Such benefits are common to many rule integration approaches proposed in the past [149,
179, 181, 184, 190]. While we learn from these approaches, we take a further step forward by
capturing the business rules in terms of service relationships. In business environments the
relationships among collaborating services change over time and need to be represented in the
design of the composition. The relationships maintain their own state and a knowledge base for
decision making purposes. In this approach, not only is the control-flow separated from the
decision making process, but the decision making concerns are also separated across contracts.
Separate contracts, e.g., CO_TT, CO_GR maintain their own states and knowledge bases
appropriately representing the relationships among services/roles.
Listing 8-10. Contextual Decision Making via Rules
declare TowCounter
    count : int
end

rule "PaySomeBonus"
when
    $msg : MessageRecieved(operationName == "orderTow", response == false)
    $tc : TowCounter(count >= 20)
then
    $msg.triggerEvent("eTowBonusAllowed");
    $tc.setCount(0); // Reset
end

rule "IncrementTowCount"
when
    $msg : MessageRecieved(operationName == "orderTow", response == false)
    $tc : TowCounter(count < 20)
then
    $tc.setCount($tc.getCount() + 1); // Increment
end
Listing 8-11. A Process Instance Specific Rule
rule "orderTowResponseAdditionalPayRule"
when
    $msg : MessageRecieved(operationName == "orderTow", response == true, pId == "pdGold034")
then
    $msg.triggerEvent("eTTAdditionalPaymentAllowed");
end
The facts can be dynamically added, removed or modified to update the contract state so that it
represents an up-to-date relationship. In addition, the rules can be dynamically added, removed
or modified at runtime to further extend these knowledge bases and improve decision
making capabilities. The interpreted events act as the link between the control-flow and the
service relationships as represented by contracts. This design makes the well-captured and
well-maintained service relationships an influential factor in determining the next task(s) of a specified
control-flow of a service composition.
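The mechanics described above — a per-contract knowledge base of facts and condition-action rules, extensible at runtime — can be sketched in plain Python. This is an illustrative simplification, not Drools itself; all class and method names here (Message, ContractRuleBase, add_rule) are hypothetical.

```python
# Sketch of a per-contract rule base: facts capture the relationship state,
# condition-action rules interpret messages into events, and both rules and
# facts can be added or changed at runtime (cf. addContractRule()).

class Message:
    def __init__(self, operation, response, pid=None):
        self.operation, self.response, self.pid = operation, response, pid
        self.events = []                 # events triggered by rule evaluation

    def trigger_event(self, event):
        self.events.append(event)

class ContractRuleBase:
    """Per-contract knowledge base with mutable facts and rules."""
    def __init__(self):
        self.rules = []                  # (condition, action) pairs
        self.facts = {"towCount": 0}     # contextual state of the relationship

    def add_rule(self, condition, action):   # runtime rule injection
        self.rules.append((condition, action))

    def evaluate(self, msg):
        for condition, action in self.rules:
            if condition(msg, self.facts):
                action(msg, self.facts)
        return msg.events

rb = ContractRuleBase()
# Counterparts of the PaySomeBonus / IncrementTowCount rules in Listing 8-10.
rb.add_rule(lambda m, f: m.operation == "orderTow" and not m.response and f["towCount"] >= 20,
            lambda m, f: (m.trigger_event("eTowBonusAllowed"), f.update(towCount=0)))
rb.add_rule(lambda m, f: m.operation == "orderTow" and not m.response and f["towCount"] < 20,
            lambda m, f: f.update(towCount=f["towCount"] + 1))
# Instance-specific rule injected at runtime (cf. Listing 8-11).
rb.add_rule(lambda m, f: m.operation == "orderTow" and m.response and m.pid == "pdGold034",
            lambda m, f: m.trigger_event("eTTAdditionalPaymentAllowed"))

rb.facts["towCount"] = 20
print(rb.evaluate(Message("orderTow", response=False)))                  # ['eTowBonusAllowed']
print(rb.evaluate(Message("orderTow", response=True, pid="pdGold034")))  # ['eTTAdditionalPaymentAllowed']
```

The interpreted events returned by evaluate() are what the Event Observer would then check against the post-event patterns of tasks.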
8.3.2 Specifying Message Transformations
The internal messages need to be translated to external messages and vice versa (Section 5.4) to
enrich the message contents with additional information and ensure the compatibility of
message formats. In the current implementation, these transformations are specified using
XSLT [182] (Section 7.1.3). The guidelines below need to be followed when specifying such
transformations.
G_832.1. For each Task T defined in Role R, specify a transformation R.T. This
transformation creates the Out-Message from a set of Source-Messages.
G_832.2. For each Result-Message rm, defined in task T of R, specify a transformation
R.T.rm. Depending on the number of Result-Messages of task T, one or more such
transformations are required, and they create these Result-Messages from the single
In-Message.
The Serendip Modelling Tool helps to generate these transformation templates. The tool uses
the following naming conventions for XSLT files for easy identification.
For Out-Transformations (G_832.1): <role_id>_<task_id>.xsl
For In-Transformations (G_832.2): <role_id>_<task_id>_<resultMsg_id>.xsl
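For illustration, the two naming conventions can be expressed as trivial helper functions. The function names and the example result-message identifier are hypothetical; the modelling tool applies these conventions internally.

```python
# Sketch of the XSLT file-naming conventions (G_832.1 and G_832.2).

def out_transformation_file(role_id: str, task_id: str) -> str:
    # G_832.1: Out-Transformation for task T of role R
    return f"{role_id}_{task_id}.xsl"

def in_transformation_file(role_id: str, task_id: str, result_msg_id: str) -> str:
    # G_832.2: In-Transformation deriving Result-Message rm from the In-Message
    return f"{role_id}_{task_id}_{result_msg_id}.xsl"

print(out_transformation_file("CO", "tAnalyze"))                 # CO_tAnalyze.xsl
print(in_transformation_file("CO", "tAnalyze", "mOrderTowReq"))  # CO_tAnalyze_mOrderTowReq.xsl
```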
The software engineer then has to complete the transformation logic. For example, let us take
the task tAnalyze, which requires a Case-Officer to analyse an incoming roadside assistance
request (complain). Listing 8-12 shows the out-transformation (G_832.1) of the task tAnalyze
specified in the file CO.Analyze.xsl. The source message indicated by the variable
$CO_MM.complain.Req is used to extract the data to generate the service request. An
XPath expression is used to copy the content from the source message to the SOAP
request generated for the CO Service. More details about XPath expressions and XSLT can be
found in [313].
Similarly, the transformations to create Result-Messages from an In-Message are also
generated (G_832.2). For example, Listing 8-13 shows the transformation to create
CO_TT.mOrderTow.Req (the order-tow request to the Tow-Truck), which is a Result-Message
created out of the information available in the response from the CO Service after performing task tAnalyze.
The transformations required by other tasks of RoSAS can be similarly defined.
Overall, the membranous design separates the external interactions from the internal
interactions. The transformations defined in tasks collectively define a membrane that data
flowing into and out of the composition needs to cross. The complexity associated with the external
interactions (e.g., message syntax, transformation protocol) is separately addressed. This keeps
the core orchestration simple and easier to manage.
Listing 8-12. XSLT File to Generate an Invocation Request
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:q0="http://ws.apache.org/axis2">
  <xsl:output method="xml" indent="yes" />
  <xsl:param name="CO_MM.complain.Req" />
  <xsl:template match="/">
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <q0:analyze xmlns:q0="http://ws.apache.org/axis2">
          <memId>
            <xsl:value-of select="$CO_MM.complain.Req/soapenv:Envelope/soapenv:Body/q0:complain/args0" />
          </memId>
          <complainDetails>
            <xsl:value-of select="$CO_MM.complain.Req/soapenv:Envelope/soapenv:Body/q0:complain/args1" />
          </complainDetails>
        </q0:analyze>
      </soapenv:Body>
    </soapenv:Envelope>
  </xsl:template>
</xsl:stylesheet>
Listing 8-13. XSLT File to Derive a Result Message from a Service Response
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:q0="http://ws.apache.org/axis2">
  <xsl:output method="xml" indent="yes" />
  <xsl:param name="Analyze.InMsg" />
  <xsl:template match="/">
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <q0:orderTow xmlns:q0="http://ws.apache.org/axis2">
          <content>
            <xsl:value-of select="$Analyze.InMsg/soapenv:Envelope/soapenv:Body/q0:analyzeResponse/return/orderTow" />
          </content>
        </q0:orderTow>
      </soapenv:Body>
    </soapenv:Envelope>
  </xsl:template>
</xsl:stylesheet>
8.4 Adaptations
RoSAS requires adapting its processes at runtime. Chapter 6 has presented a few
examples from the case study to demonstrate the adaptation capabilities of the framework. In
general, there are numerous possible adaptations to the RoSAS processes at runtime. Instead of
presenting a large number of possible adaptations, this section provides a systematic analysis of
the possible types of adaptations using representative examples from the case study. In addition,
the adaptations required for the structural aspects of the composites are assumed to be given, in order
to limit the discussion to the process adaptations, which are the focus of this thesis.
Firstly, the operation-based adaptations that need to be carried out from the case study are
presented in Section 8.4.1. Secondly, to support the more complex adaptations required by the
case study, the batch mode adaptations are discussed in Section 8.4.2. Scenarios for both
evolutionary and ad-hoc adaptations are considered in both sections. Section 8.4.3 analyses
and demonstrates the importance of controlling the changes by considering example scenarios
from the case study.
8.4.1 Operation-based Adaptations
The individual operations of the organiser interface are used to perform operation-based
adaptations (Section 6.3.1). In the following discussion we present two adaptation scenarios
that use operation-based adaptation. Scenario 841.1 is about an evolutionary adaptation whilst
Scenario 841.2 is about an ad-hoc adaptation.
Scenario 841.1 (evolutionary, operation-based adaptation): RoSAS requires the recording of
the payments made to taxis, and consequently requires a change to the current organisational
behaviour coordinating the taxi service. This recording should be performed by the Case-Officer
after a payment is made.
Analysis: In order to perform this adaptation, a new task tRecordPayTaxi needs to be inserted
into the behaviour unit bTaxiProviding. The task needs to have a dependency on the task
tPayTaxi, i.e., the task tRecordPayTaxi should become doable after tPayTaxi task is completed.
Therefore, we will use the event eTaxiPaid as the pre-condition of tRecordPayTaxi to capture
this dependency. Moreover, we will trigger an eTaxiPayRecorded event in order to mark that
the task is completed. As mentioned above, the Case-Officer is obliged to perform the task. The
addTaskToBehaviour() method of the organiser interface can be used to perform the adaptation
as shown below (see also Appendix C for operation definition). Figure 8-3 shows the change as
visualised in EPC graphs.
addTaskToBehaviour("bTaxiProviding", "tRecordPayTaxi", "eTaxiPaid",
"eTaxiPayRecorded", "CO", "2h");
This adaptation could be carried out using a single operation because the event
eTaxiPaid had already been defined as a post-condition of task tPayTaxi. Operation-based
adaptations are easy to execute in such scenarios. Had that not been the case, however,
another adaptation action would have been needed to add the event prior to adding this task. The batch mode
adaptation is useful in such scenarios, where the two adaptation actions can be carried out in a single
transaction (Section 6.3.2).
Figure 8-3. An Evolutionary Adaptation via Operations
Scenario 841.2 (ad-hoc, operation-based adaptation): A platinum member, soon after
sending the initial request for assistance, requests that the towing of the car be delayed
until the taxi picks her up, due to bad weather. The already enacted process does not have such a
dependency built in between the two tasks. If possible, this dependency needs to be introduced.
Analysis: The current pre-condition to start the towing task tTow is (eTowReqd *
eDestinationKnown). For this particular process instance (e.g., pdPlat034), a new dependency
needs to be introduced, making the instance pdPlat034 deviate from the original definition. The
pre-condition of tTow should now also include the event eTaxiProvided, i.e., the pre-condition
of tTow after the adaptation should be (eTowReqd * eDestinationKnown * eTaxiProvided). In
order to realise this deviation, the updateTaskOfProcessInst() operation of the organiser
interface is used as shown below (see Appendix C for the operation definition). Figure 8-4 shows
the change as visualised in EPC graphs.
updateTaskOfProcessInst("pdPlat034", "tTow", "preEP", "eTowReqd * eDestinationKnown *
eTaxiProvided");
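Pre-conditions such as (eTowReqd * eDestinationKnown * eTaxiProvided) are event patterns evaluated against the set of events triggered so far. A minimal Python sketch of such an evaluator follows, assuming '*' = AND, '|' = OR and '^' = XOR as per the notation used in this chapter; the actual Serendip engine's pattern evaluation may differ.

```python
import re

def eval_event_pattern(pattern: str, triggered: set) -> bool:
    """Evaluate a Serendip-style event pattern against the set of triggered
    events. Assumed notation: '*' is AND, '|' is OR, '^' is XOR."""
    # Replace each event identifier (eXxx) with its membership truth value.
    # Substitution runs once over the original pattern, so the inserted
    # 'True'/'False' strings are not themselves rescanned.
    expr = re.sub(r"\be[A-Za-z0-9_]+\b",
                  lambda m: str(m.group(0) in triggered), pattern)
    # Map the pattern operators onto Python operators; '^' already works as
    # XOR on Python booleans. eval() is acceptable here only as a sketch.
    expr = expr.replace("*", " and ").replace("|", " or ")
    return bool(eval(expr))

pre = "eTowReqd * eDestinationKnown * eTaxiProvided"
print(eval_event_pattern(pre, {"eTowReqd", "eDestinationKnown"}))                    # False
print(eval_event_pattern(pre, {"eTowReqd", "eDestinationKnown", "eTaxiProvided"}))   # True
print(eval_event_pattern("eTaxiProvided ^ eTaxiRejected", {"eTaxiRejected"}))        # True
```

With such an evaluator, the ad-hoc deviation above amounts to replacing the stored preEP string of tTow for instance pdPlat034; the task becomes doable only once all three events are in the triggered set.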
Figure 8-4. An Ad-Hoc Adaptation via Operations
8.4.2 Batch Mode Adaptations
The adaptations for the above two scenarios were carried out using single operations. However,
as mentioned in Section 6.4, operation-based adaptations have limitations when it comes to
performing more complex adaptations. Hence, script-based batch mode adaptations are
introduced to address these limitations. In this section we show two scenarios which use
batch mode adaptations, via adaptation scripts, for performing evolutionary (Scenario 842.1)
and ad-hoc (Scenario 842.2) adaptations.
Scenario 842.1 (evolutionary, batch mode adaptation): Due to popular customer demand,
RoSAS decides that for all Platinum members, the tTow task is to be delayed until the taxi is
provided. This requirement is similar to Scenario 841.2 given earlier. However, in this case, the
change is not an ad-hoc deviation of a process instance, but an evolutionary change that will
affect all future process instances of pdPlat.
Analysis: To satisfy this change requirement, the towing behaviour as specified by bTowing
needs to be modified. However, changing bTowing is problematic because other process
definitions, i.e., pdGold and pdSilv, also use the same behaviour unit; changing this common
behaviour would affect Gold and Silver members as well. Therefore, the change should be
applied in a manner in which the pdGold and pdSilv processes are unchanged. As a solution,
an alternative behaviour unit specific to platinum members needs to be specified and referenced
in the pdPlat process. We will use the Serendip behaviour specialisation feature (Section 4.5),
which avoids the redundancy of specifying the common tasks again in the new behaviour. This
adaptation would require multiple transactions if operation-based adaptation were used, which
would trigger false alarms from the post-transaction validation steps. Instead, the validation
(Section 4.4) should be carried out after all the operations are performed, rather than after each
operation. To achieve this, we use a batch mode adaptation script to group the multiple
adaptation operations into a single transaction.
As shown in Listing 8-14, the adaptation script performs multiple operations. Firstly, the
script defines a new behaviour unit bTowingForPlat by extending the existing behaviour unit
bTowing. Secondly, the script adds the task tTow, overriding the preEP property of the parent
behaviour unit bTowing. Note that the new dependency eTaxiProvided has been added to the
preEP. Thirdly, the script removes the behaviour reference to bTowing from the process
definition pdPlat. Finally, a new reference to the behaviour unit bTowingForPlat is
added to the process definition pdPlat. This change is illustrated in Figure 8-5 (a). Moreover,
Figure 8-5 (b) shows that the change only affects the pdPlat process due to the use of a specialised
behaviour unit. The other two process definitions, i.e., pdSilv and pdGold, remain the same
after the adaptation.
Listing 8-14. An Adaptation Script for an Evolutionary Adaptation
DEF: addAlternativeBehaviourForPlatinumMembers {
    // Define a new behaviour by extending an existing behaviour.
    addBehaviour bId="bTowingForPlat" extend="bTowing";
    // Add a task with the same task identifier (tId) to override the parent's
    // preEP property. The rest of the properties are inherited from the parent.
    addTaskToBehaviour bId="bTowingForPlat" tId="tTow"
        preEP="eTowReqd * eDestinationKnown * eTaxiProvided";
    // Replace the current reference in pdPlat: first remove the existing
    // reference and then add the new one.
    removeBehaviourRefFromProcessDef pdId="pdPlat" bId="bTowing";
    addBehaviourRefToProcessDef pdId="pdPlat" bId="bTowingForPlat";
}
Figure 8-5. An Evolutionary Adaptation via Adaptation Scripts
This scenario shows two important aspects of the work of this thesis. Firstly, it shows the
capability of batch mode adaptations, in contrast to operation-based adaptations, when it comes
to performing complex changes. Applying the adaptations in a single transaction helps to
overcome the problem of false alarms (Section 6.2.3 and Section 6.3.2). Secondly, the scenario
highlights the importance of behaviour specialisation as supported by the Serendip language.
The variation has been captured and supported without introducing any unnecessary redundancy.
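The key property of batch mode — mutating a candidate model, validating once after all operations, and committing atomically — can be sketched as follows. This is a deliberate simplification under assumed names (apply_batch, a toy model dictionary, a stand-in validate function); the actual engine validates against behaviour constraints over EPC paths, not mere reference existence.

```python
import copy

def apply_batch(model: dict, operations, validate) -> dict:
    """Apply a batch of adaptation operations as one transaction: mutate a
    copy, validate once at the end, and commit only if valid. This avoids
    the 'false alarms' raised by validating after every single operation."""
    candidate = copy.deepcopy(model)
    for op in operations:
        op(candidate)                    # each op mutates the candidate model
    if not validate(candidate):
        raise ValueError("batch rejected: constraint violation")
    return candidate                     # commit

# Toy model: process definitions referencing behaviour units.
model = {"behaviours": {"bTowing": {"tTow": "eTowReqd * eDestinationKnown"}},
         "processes": {"pdPlat": ["bTowing"], "pdGold": ["bTowing"]}}

ops = [
    # cf. Listing 8-14: add a specialised behaviour, then re-reference it.
    lambda m: m["behaviours"].update(
        bTowingForPlat={"tTow": "eTowReqd * eDestinationKnown * eTaxiProvided"}),
    lambda m: m["processes"]["pdPlat"].remove("bTowing"),
    lambda m: m["processes"]["pdPlat"].append("bTowingForPlat"),
]
# Stand-in validation: every referenced behaviour unit must exist.
valid = lambda m: all(b in m["behaviours"]
                      for refs in m["processes"].values() for b in refs)

model = apply_batch(model, ops, valid)
print(model["processes"]["pdPlat"])      # ['bTowingForPlat']
print(model["processes"]["pdGold"])      # ['bTowing']
```

Note that validating after the second operation alone could flag a transiently inconsistent model; validating once over the whole batch is what removes such false alarms.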
Scenario 842.2 (ad-hoc, batch mode adaptation): Suppose that for a particular process
instance (pdGold037), the Taxi requires a payment prior to providing the service due to the
excessive distance to be travelled. This requirement has not been captured in the process
definition. However, in order to progress the process instance under the current circumstances,
RoSAS decides to assist the motorist and make an advance payment for this particular case.
(Obviously, RoSAS could later take up this issue with the taxi service, which is beyond our
scope.)
Analysis: One solution to support this requirement is to perform the payment offline without
changing the existing process. However, this would bypass the system, which is
counter-productive in the long run as no proper records would be kept. If the adaptation can be
realised within the IT system, it should be supported and systematically managed: the running
IT process should be as close as possible to the real world process or case. Therefore, this
requirement needs to be supported by performing an ad-hoc modification that deviates the IT
process instance to reflect the decision made in the real world business process/case. In this
case, the scope of modification is limited to process instance pdGold037 only. This requires the
addition of a new task dependency for task tProvideTaxi to delay the taxi service until the
payment is made. The new task of advance payment also needs to be added to the process
instance. As such, the adaptation needs to be carried out in two steps, as shown in the script
given in Listing 8-15.
Firstly, the existing task tProvideTaxi of the process instance is updated with the new
pre-condition (eTaxiAdvancePaid * eTaxiOrderPlaced). This causes the tProvideTaxi task to be
initiated only when the advance payment is also made, rather than upon placing the taxi order
alone. Then, the new task tSendAdvPayForTaxi is added to the process instance pdGold037. The task
triggers the event eTaxiAdvancePaid as its post-condition so that this event can be used as an
additional event in the pre-condition of the existing task tProvideTaxi. In order to initiate the
newly added tSendAdvPayForTaxi, the event eTaxiReqd has been added as its pre-condition.
The change is shown in Figure 8-6.
Listing 8-15. An Adaptation Script for an Ad-hoc Adaptation
INST: pdGold037 {
    // Update the dependency of the existing task
    updateTaskOfProcessInst tId="tProvideTaxi" property="preEP"
        value="eTaxiAdvancePaid * eTaxiOrderPlaced";
    // Add a new task
    addTaskToProcessInst tId="tSendAdvPayForTaxi" bId="bTaxiProviding"
        preEP="eTaxiReqd" postEP="eTaxiAdvancePaid" obligRole="CO" pp="2h";
}
Figure 8-6. An Ad-hoc Adaptation via Adaptation Scripts
8.4.3 Controlled Adaptations
Serendip allows both ad-hoc and evolutionary adaptations, as shown in the previous sub-sections.
However, RoSAS requires that the adaptations be carried out without violating the integrity of
the composition. The impact of an adaptation needs to be analysed, and potentially harmful
adaptations need to be controlled. In the following discussion we provide two scenarios
(Scenario 843.1 and Scenario 843.2) from the case study to show how the Serendip approach
identifies the impacts and controls the changes.
Scenario 843.1 (allowed change): Let us revisit the adaptation mentioned in Scenario 842.2.
In this adaptation the organiser attempts to realise an unforeseen requirement that is specific to a
particular process instance by deviating it from the original process description. Suppose that
there is a behaviour constraint bTaxiProviding_c1 defined in the behaviour unit bTaxiProviding
as shown in Listing 8-16. The constraint specifies that if a taxi order is placed, it should be
followed by a payment to the taxi. Would the adaptation mentioned in Scenario 842.2 violate
this constraint?
Listing 8-16. A Behaviour Constraint of bTaxiProviding
Behaviour bTaxiProviding {
    // ... task descriptions
    Constraint bTaxiProviding_c1: "(eTaxiOrderPlaced>0)->(eTaxiPaid>0)";
}
Analysis: The constraint requires that if the eTaxiOrderPlaced event has been triggered, it
should be followed by an eTaxiPaid event. As shown in Figure 8-6, every possible path of the
modified process instance leads from the eTaxiOrderPlaced event to the eTaxiPaid event.
Therefore, the above modification is valid and is allowed under the given constraints. Hence,
the process validation mechanism (Section 6.4) will return a result of true, confirming no
constraint violation.
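The constraint form "(eA>0)->(eB>0)" can be read as: on any completed path, if event eA occurs at least once, event eB must also occur. A hedged sketch of such a check over explicit traces follows; the real validation mechanism works over all possible paths of the EPC graph rather than enumerated traces, and the function names here are illustrative.

```python
def satisfies_implication(trace, antecedent, consequent):
    """Check '(antecedent>0)->(consequent>0)' over one completed event trace."""
    return trace.count(antecedent) == 0 or trace.count(consequent) > 0

def violates(traces, antecedent, consequent):
    """Return the traces (possible paths) that break the constraint."""
    return [t for t in traces
            if not satisfies_implication(t, antecedent, consequent)]

# Paths of the modified pdGold037 instance (Figure 8-6): the taxi is still paid.
ok_paths = [
    ["eTaxiReqd", "eTaxiOrderPlaced", "eTaxiAdvancePaid", "eTaxiProvided", "eTaxiPaid"],
]
print(violates(ok_paths, "eTaxiOrderPlaced", "eTaxiPaid"))        # [] -> change allowed

# Paths after adding the taxi-rejected branch (Scenario 843.2 below).
xor_paths = [
    ["eTaxiReqd", "eTaxiOrderPlaced", "eTaxiProvided", "eTaxiPaid"],
    ["eTaxiReqd", "eTaxiOrderPlaced", "eTaxiRejected"],           # no eTaxiPaid!
]
print(len(violates(xor_paths, "eTaxiOrderPlaced", "eTaxiPaid")))  # 1 -> change rejected
```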
Scenario 843.2 (problematic change): When a taxi service is declined or rejected by the
motorist after a request has been made (e.g., the motorist finds his/her own transport), a
compensation (instead of the full fare) needs to be paid to the Taxi. This requirement needs to be
captured in the taxi-providing behaviour. If the taxi service is provided, the full fare needs to be
paid; if the taxi is rejected, a compensation needs to be paid. The Case-Officer needs to handle
these two payments separately.
Analysis: In order to handle this new requirement, an evolutionary adaptation is carried out on
the behaviour unit bTaxiProviding by adding a new task tPayCompensation. Moreover, it is
required to change the post-condition of task tProvideTaxi to a new event pattern
(eTaxiProvided ^ eTaxiRejected)33. The deviation is visualised in Figure 8-7 (a). As shown, a
new branch has been created from the original flow to handle the exceptional (taxi-rejected)
situation. However, this violates the constraint bTaxiProviding_c1 given in Listing 8-16.
That is, there is a possibility that eTaxiPaid would not be triggered after the event
eTaxiOrderPlaced, when the taxi-rejected path is taken. The dependency constraint between the
two events is shown in Figure 8-7 (b). This adverse impact of the adaptation is identified by the
process validation mechanism; the adaptation engine therefore rejects the change, prompting the
organiser to find an alternative solution. Note that this adaptation would have affected both
Gold and Platinum members, as both the pdGold and pdPlat process definitions refer to the
behaviour unit bTaxiProviding.
33 ^ = XOR
Alternative solutions: The modification suggested for Scenario 843.2 is flawed due to a
constraint violation detected by the validation module of the framework. However, there
can be many alternative solutions that are valid.
One solution is to trigger the event eTaxiPaid when the task tPayCompensation is
completed, as shown in Figure 8-8 (a). This would satisfy the constraint bTaxiProviding_c1.
However, this may compromise the understanding of the event eTaxiPaid, as it would now
stand for both regular fare payments and compensation payments.
Figure 8-7. The Deviation Due to Introduction of New Task
Figure 8-8. Alternative and Valid Solutions
Another solution is to modify the constraint, as shown in Figure 8-8 (b). In place of the event
eTaxiOrderPlaced, the event eTaxiProvided can be used in the constraint, i.e., the expression of
the constraint becomes "(eTaxiProvided>0)->(eTaxiPaid>0)". This means that the
actual payment needs to be made only if the taxi service is provided. In this solution the
organiser in fact modifies the constraint to allow the aforementioned change. Such a
change to a constraint is again subject to validation against the existing dependencies.
In general, there is no fixed solution for a particular adaptation requirement. A number of
alternative solutions may exist, from which the organiser, as the decision making entity, can
choose depending on the business requirements. As shown above, both the dependencies and
the constraints can be changed. Both types of changes are "controlled" as follows:
- The changes to dependencies are controlled by, or checked against, the existing
constraints, and
- The changes to constraints are controlled by, or checked against, the existing
dependencies.
The control does not originate from the rigidity of the language, but from the business
requirements as specified by the constraints and dependencies. The Serendip language allows
the specification of both dependencies and constraints in a flexible manner, allowing the two
aspects to control or constrain each other depending on the business requirements, as shown in
Figure 8-9. On the one hand, the dependencies might need to be modified to suit new
business constraints. On the other hand, existing business constraints may also be relaxed to
support new dependency modifications such as the inclusion/deletion/property changes of tasks.
However, at any given time, the organisation contains only dependencies and constraints that
complement each other.
Figure 8-9. Changes to Constraints and Dependencies are Controlled by Each Other
8.5 Summary
In this chapter we have presented a case study based on the business scenario introduced in
Chapter 2. This case study has provided a detailed description of how the Serendip concepts and
approach are used to model a real world business scenario as an adaptable service orchestration.
Throughout this chapter we have gradually introduced each and every aspect of the service
orchestration modelling and adaptation. Firstly, we have provided a broader view of how the
RoSAS business model is envisioned as an organisation. We have identified the different roles
of the organisation and the relationships between these roles, forming the organisation structure
for the service orchestration or composition. Secondly, using the defined structure as the basis,
we have defined the business processes of the RoSAS organisation. We have shown how the
tasks are defined, how behaviour units capture the dependencies among these tasks, and how the
process definitions are formulated by reusing behaviour units to achieve different objectives.
Thirdly, we have shown how the messages of the composition are interpreted and transformed.
All these discussions are supported by a set of general guidelines and representative examples
from the case study that follow these guidelines.
Based on these modelling aspects of the case study system, several scenarios from the case
study have been presented to demonstrate the different adaptation capabilities of the Serendip
approach and framework. These scenarios include both operation-based adaptations and batch-
mode adaptations. Moreover, they also cover both evolutionary and ad-hoc change
requirements. Finally, scenarios have also been introduced to show how the adaptations are
controlled by the business constraints.
Chapter 9
Evaluation
In this chapter we evaluate the Serendip process support. In order to evaluate the benefits and
the viability of the proposed Serendip process modelling approach, three types of evaluations
have been carried out.
Firstly, we systematically evaluate the flexibility of our approach. To evaluate the degree of
flexibility, we use the change patterns introduced by Weber et al. [84]. These change
patterns have been used in the past to evaluate a number of academic approaches and
commercial products; a detailed description of such an evaluation conducted by the authors is
available in [85]. For each change pattern we specify whether Serendip supports the
pattern or not, and if supported, how. The details of this evaluation are available in Section 9.1.
Secondly, we evaluate the runtime performance overhead of the newly introduced Serendip
runtime. The Serendip runtime is specifically designed to support runtime adaptations and to satisfy
a set of design expectations, as described in Section 5.1.1. The process enactment, representation
and message mediation are designed to support possible changes, both evolutionary and ad-hoc.
Therefore, it is necessary to evaluate the impact of such a design on process execution,
compared to a static service orchestration runtime. We use Apache ODE as the static service
orchestration runtime for the comparison. The details of the experimental setup and the
results are available in Section 9.2.
Thirdly, we assess the Serendip approach against the key requirements of a service orchestration
presented in Chapter 2, which were used to evaluate the existing approaches in Chapter 3. In
this evaluation, we present the features of the Serendip approach that provide improved support
for these key requirements. The limitations of the existing framework are also highlighted. The
details of this comparative assessment are available in Section 9.3.
9.1 Support for Change Patterns
Weber et al. [84] propose a set of change patterns that should be supported by a Process Aware Information System (PAIS) [283]. These patterns and features have been used to evaluate a number of approaches in the past [146, 167, 325, 326]. The complete set of patterns is described in more detail in [85].
The authors separate these 17 patterns into two groups: adaptation patterns and patterns for predefined changes. The adaptation patterns allow a process schema or a process instance to be modified using high-level change operations. These adaptations need not be predefined and can be applied to process definitions as well as to process instances. For example, an adaptation might allow a process instance to deviate by introducing a new task/activity. In contrast, the patterns for predefined changes allow an engineer to predict and define regions in the process schema where potential changes may be introduced at runtime. For example, a placeholder in a process is filled at runtime by a process fragment selected with the knowledge available at that time.
Apart from these 17 patterns, the authors also propose 6 change support features. These features make the above patterns useful in practice in a PAIS [84]; the ability to apply ad-hoc changes and the ability to support schema evolution are two examples.
In this section we will evaluate the Serendip approach against these adaptation patterns (Section 9.1.1), patterns for predefined changes (Section 9.1.2) and change support features (Section 9.1.3). To avoid a prolonged discussion and repetition of the original article, we refrain from a detailed analysis of the patterns themselves; instead, we will explain the extent of support for each pattern/feature from the Serendip point of view. For a detailed explanation and examples of these change patterns/features, please refer to [85]. Nevertheless, in the upcoming discussion, each pattern/feature is briefly described. To keep the clarification of how Serendip supports these patterns/features consistent with the original publication, we use the same examples from [85]. Furthermore, there can be multiple possible Serendip solutions for a given pattern. However, since the intention of this discussion is to evaluate the ability of Serendip to support the presented patterns, we will provide just one solution rather than all possible ones. Finally, a summary of the analysis is available in Section 9.1.4.
9.1.1 Adaptation Patterns
This section evaluates the support of the Serendip framework for the adaptation patterns [85]. All adaptation patterns need to be supported at the process schema level as well as at the process instance level. Each subsection provides a description of a pattern and explains how the pattern is supported. Note that the patterns shown and the solutions provided are for generic cases, but they can be applied to domain-specific scenarios as well.
Note that the following notation is used throughout the explanation.
X.pre means the pre-condition (event pattern) of task X
X.post means the post-condition (event pattern) of task X
EP1 = EP2 means the value of EP2 is assigned to EP1; here both EP1 and EP2 are event patterns
‘*’ => AND join/split, ‘|’ => OR join/split, ‘ⓧ’ => XOR join/split
Moreover, the following rules are employed
if X and Y are tasks, ¤ ∈ {*, |, ⓧ} and XØ = null pre-/post-condition of X,
o XØ ¤ X.pre = X.pre
o XØ ¤ X.post = X.post
if X and Y are tasks, ¤ ∈ {*, |, ⓧ}
o X.pre ¤ Y.pre = Y.pre ¤ X.pre
AP1: Insert Process Fragment
Description: A process fragment is inserted into a process schema. There are three types of insertion, i.e., (a) Serial Insert, (b) Parallel Insert and (c) Conditional Insert, as shown in Figure 9-1.
Figure 9-1. AP1: Insert Process Fragment
How (Serial Insert): The requirement here is that X should always be executed after A, and B should always be executed after X. Then,
X.pre = A.post    B.pre = X.post
How (Parallel Insert): The requirement here is that both X and B need to be executed after A, and C needs to be executed after both X and B. This pattern is supported by setting the conditions of the new task X and modifying the pre-condition of the existing task C as follows. The properties of B do not need to change.
X.pre = A.post    C.pre = B.post * X.post
34 XØ means that task X does not have a pre-condition, or that X does not have a post-condition.
How (Conditional Insert): This pattern is supported by inserting a dynamic rule that is evaluated upon the completion of task A. The post-condition of A is then modified to trigger either B.pre or X.pre.
Add rule R1, which will be evaluated when task A completes.
A.post = B.pre ⓧ X.pre    (A triggers either B.pre or X.pre via rule R1)
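The pre-/post-condition rewiring used throughout these patterns can be illustrated with a small, self-contained sketch. This is not the Serendip implementation: the names Task, run and the event names are hypothetical, and a single event stands in for a full event pattern.

```python
# A minimal, illustrative model of pre-/post-condition based task wiring.
# NOT the Serendip implementation: Task, run() and the event names are
# hypothetical, and one event stands in for a full event pattern.

class Task:
    def __init__(self, name, pre, post):
        self.name = name    # task identifier
        self.pre = pre      # event that makes the task doable
        self.post = post    # event published when the task completes

def run(tasks, initial_event):
    """Repeatedly perform any task whose pre-condition event has occurred."""
    events = {initial_event}
    order = []
    progressed = True
    while progressed:
        progressed = False
        for t in tasks:
            if t.name not in order and t.pre in events:
                order.append(t.name)   # the task is performed
                events.add(t.post)     # its post-condition event is published
                progressed = True
    return order

# Original schema: A -> B, i.e. B.pre = A.post.
A = Task("A", "init", "eA")
B = Task("B", "eA", "eB")

# AP1 (Serial Insert) of X between A and B: X.pre = A.post, B.pre = X.post.
X = Task("X", A.post, "eX")
B.pre = X.post

print(run([A, B, X], "init"))  # ['A', 'X', 'B']
```

Under this toy model, the serial insert is exactly the two assignments given above; no other task is touched.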
AP2: Delete Process Fragment
Description: A process fragment is deleted from a process schema.
Figure 9-2. AP2: Delete Process Fragment
How: To support the above pattern, the post-condition of B and the pre-condition of E need to be modified as follows, after which C is deleted.
B.post = D.pre    E.pre = D.post    Delete C.
Additional Note: The above deletion is carried out in a parallel segment. A deletion can also be carried out in a serial segment; although Weber et al. [85] do not specify such a pattern, it can be supported in Serendip. Suppose D needs to be deleted from the above solution (the right-hand side of Figure 9-2). Then the solution would be,
E.pre = B.post    Delete D
AP3: Move Process Fragment
Description: A process fragment is moved from its current position in the process schema to
another position.
Figure 9-3. AP3: Move Process Fragment
How: The requirement here is that D needs to be executed after A, and both A and C need to be executed after B. To support this pattern, first the pre-condition of D is changed to the value of A.post. Then task B needs to trigger the conditions to execute both A and C; therefore, the post-condition of B should be an AND combination of the pre-conditions of A and C. Finally, the pre-condition of B is set to the initial condition, which could be the post-condition of a task that is not shown in the graph.
D.pre = A.post    A.pre = B.post    B.pre = <init> (initial condition, not shown in the graph)
AP4: Replace Process Fragment
Description: A process fragment is replaced by another process fragment.
Figure 9-4. AP4: Replace Process Fragment
How: Here C is replaced by X. To support this pattern, simply assign the pre- and post-conditions of C to the new task X.
X.pre = C.pre    X.post = C.post
AP5: Swap Process Fragment
Description: Two existing process fragments are swapped.
Figure 9-5. AP5: Swap Process Fragment
How: Swap the pre- and post-conditions of the fragments being swapped. To facilitate the swap, a temporary variable var1 is used to back up F.post, as shown in the following steps.
var1 = F.post    F.post = C.pre * D.pre    F.pre = A.pre    A.pre = E.post    B.post = var1
AP6: Extract Sub Process
Description: Extract a process fragment from the given process schema and replace it by a
corresponding sub process.
Figure 9-6. AP6: Extract Sub Process
How: This pattern can be supported by behaviour modelling in Serendip. The common tasks {F,G,H} can be grouped into a single shared behaviour unit.
To elaborate, assume both S1 and S2 are schemas of behaviour units that already exist, and that S1 is referenced by process definition PD1 while S2 is referenced by process definition PD2. In this arrangement, the tasks {F,G,H} are duplicated in both PD1 and PD2 (or, more precisely, in S1 and S2). A new behaviour unit with schema P can be defined for tasks F, G and H. Then PD1 refers to both the S1 and P behaviour units, and PD2 refers to both S2 and P. To map the pre- and post-conditions, the event patterns of the tasks of the new schema (right-hand side of Figure 9-6) are defined as follows for the above change.
F.pre (of Schema-P) = E.post (of Schema-S1) | C.post (of Schema-S2)
E.pre (of Schema-S2) = H.post (of Schema-P)
AP7: Inline Sub Process
Description: A sub process to which one or more process schemas refer is dissolved. Accompanying this, the related sub process graph is directly embedded in the parent schemas.
Figure 9-7. AP7: Inline Sub Process
How: This pattern is supported by duplicating the tasks of a behaviour unit P in two (or more) other behaviour units (S1, S2). To map the pre- and post-conditions, the event patterns of the tasks of the new schema (right-hand side of Figure 9-7) are defined as follows for the above change.
F.pre (of Schema-S1) = E.post (of Schema-S1)
F.pre (of Schema-S2) = C.post (of Schema-S2)
E.pre (of Schema-S2) = H.post (of Schema-S2)
AP8: Embed Process Fragment in Loop
Description: Add a loop construct to a process schema surrounding an existing process
fragment.
Figure 9-8. AP8: Embed Process Fragment in Loop
How: In general, a loop needs an exit condition; if the exit condition is not met, the loop continues. This pattern is supported with the help of a rule R1, which specifies the exit condition and the repetition condition. Rule R1 is evaluated upon completion of E and triggers B.pre if a repetition (another loop iteration) is required; otherwise it triggers F.pre, the exit from the loop.
E.post = (B.pre ⓧ F.pre)
AP9: Parallelise Process Fragment
Description: Process fragments which were confined to be executed in sequence are
parallelised.
Figure 9-9. AP9: Parallelise Process Fragment
How: This pattern is supported by changing the pre-conditions of G, X and H. The post-condition of E is assigned to the pre-conditions of G and X (F already has the required pre-condition). The pre-condition of H is set to a combination of the post-conditions of F, G and X.
G.pre = E.post    X.pre = E.post    H.pre = (F.post * G.post * X.post)
AP10: Embed Process Fragment in Conditional Branch
Description: An existing process fragment shall be only executed if certain conditions are
met.
Figure 9-10. AP10: Embed Process Fragment in Conditional Branch
How: This pattern is supported by inserting a rule to be evaluated upon completion of task A: A triggers B.pre if condition=false, or A triggers F.pre if condition=true.
Add rule R1, which will be evaluated when A completes, e.g., as part of evaluating the result messages of A.
A.post = B.pre ⓧ F.pre    (A triggers either B.pre or F.pre via rule R1)
AP11: Add Control Dependency
Description: An additional control edge (e.g., for synchronising the execution order of two
parallel activities) is added to the process schema
Figure 9-11. AP11: Add Control Dependency
How: This pattern is supported by adding the post-condition of F to the pre-condition of G.
G.pre = G.pre * F.post
AP12: Remove Control Dependency
Description: A control edge is removed from the process schema
Figure 9-12. AP12: Remove Control Dependency
How: This pattern is supported by removing the post-condition of F from the pre-condition of G. Here EP is an event pattern.
Suppose G.pre = EP * F.post; then, G.pre = EP
AP13: Update Condition
Description: A transition condition in the process schema is updated.
Figure 9-13. AP13: Update Condition
How: Assume that currently the rule R1, which evaluates the completion of A, triggers the event pattern (B.pre ⓧ F.pre). The rule currently triggers F.pre if (x > 50,000) and B.pre otherwise.
if (x > 50000) trigger F.pre
else trigger B.pre
This pattern is supported by simply changing the condition of rule R1.
Change rule R1, which will be evaluated when A completes, to
if (x > 100000) trigger F.pre
else trigger B.pre
Note: Drools [303] rules are condition-action (when-then) rules rather than if-then-else rules. The above rule contains an else part that needs to be supported; in Drools, such an else branch must be explicitly captured in the condition of a second rule. Therefore, in Drools the above rule is actually captured in two rule statements as follows.
when (x > 100000) then trigger F.pre
when (x <= 100000) then trigger B.pre
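Since rewriting the else branch as a second rule is the crux of the note above, the idea can be sketched with a Python stand-in for condition-action rules (this is not Drools code; the complementary condition x <= 100000 is our assumption, chosen so that the boundary value x == 100000 is also covered).

```python
# A Python stand-in for Drools-style condition-action (when-then) rules.
# There is no "else": the complementary case is a second rule whose
# condition must explicitly cover everything the first rule does not.

rules = [
    (lambda x: x > 100000, "F.pre"),    # when (x > 100000) then trigger F.pre
    (lambda x: x <= 100000, "B.pre"),   # when (x <= 100000) then trigger B.pre
]

def evaluate(x):
    """Return the events triggered by rule evaluation for the value x."""
    return [event for condition, event in rules if condition(x)]

print(evaluate(150000))  # ['F.pre']
print(evaluate(100000))  # ['B.pre'], the boundary handled by the second rule
```

Updating the condition (AP13) then amounts to replacing the lambda conditions, without touching the triggered event patterns.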
9.1.2 Patterns for Predefined Changes
This section will evaluate how Serendip supports the patterns for predefined changes [85].
PP1: Late Selection of Process Fragments
Description: For particular activities the corresponding implementation (activity program or
sub process model) can be selected during run-time. At build time only a placeholder is
provided, which is substituted by a concrete implementation during run-time.
Figure 9-14. PP1: Late Selection of Process Fragments
How: A Serendip process definition contains a list of references to behaviour units. This pattern can be supported by dynamically altering the set of behaviour references of a process. Suppose a process definition PD wants to delay the selection of the process fragment that should be executed after E. As shown in Figure 9-14, this can be {X->Y->Z}, {U->V}, S, T or R. PD can then model the concrete section of the process, which involves tasks A to E, in a behaviour unit; let us call this behaviour unit Bconcrete. The alternative behaviours corresponding to each process fragment that may be substituted can be modelled as separate behaviour units, as candidates for replacement; let us call these alternative behaviour units B1, B2, etc., one per candidate fragment. At design time, PD refers only to Bconcrete. During runtime, a reference to one of the alternative behaviour units can be added to PD. This can be done manually (which is possible for slow-running processes) or by scheduling an automated adaptation script (Section 6.5) via the organiser method scheduleScript() (Appendix C).
It should be noted that every alternative behaviour unit, i.e., B1, B2, etc., should contain a task T with T.pre = E.post.
Additional Note: The above example, taken from the original publication, does not contain a task after F. To make our argument valid for a general case, assume there is a task G after F. In that case, each alternative behaviour unit should contain tasks Tfirst and Tlast, the first and last tasks of the behaviour unit respectively, such that
Tfirst.pre = E.post    Tlast.post = G.pre
PP2: Late Modelling of Process Fragments
Description: Parts of the process schema have not been defined at build-time, but are
modelled during run-time for each process instance. For this purpose, placeholder activities are
provided, which are modelled and executed during run-time. The modelling of the placeholder
activity must be completed before the modelled process fragment can be executed.
Figure 9-15. PP2: Late Modelling of Process Fragments
How: This pattern could be supported via the same solution described for PP1; the only difference is that new behaviour unit(s) need to be dynamically added rather than switching between existing ones. However, there is an improved alternative solution that avoids the modifications to the process definition that PP1 required.
In this alternative solution, the pattern is supported via a placeholder behaviour unit, which is replaced by concrete behaviour unit(s) added at runtime. A process definition can include a reference to a placeholder behaviour unit, just like a reference to any other behaviour unit. The placeholder behaviour unit contains a dummy task (say T) whose pre-/post-event patterns suit the pre-/post-event patterns of the adjoining tasks; in this case, T.pre = A.post and T.post = (C.pre * D.pre). During runtime, a designer can dynamically add a behaviour unit to replace the placeholder behaviour unit. This new behaviour unit may contain multiple tasks; however, its first and last tasks, Tfirst and Tlast, should conform to the following pre- and post-condition requirements. Again, this can be done manually or via an automated script.
Tfirst.pre = A.post    Tlast.post = (C.pre * D.pre)
PP3: Late Composition of Process Fragments
Description: At build time a set of process fragments is defined, out of which concrete process instances can be composed at run time. This can be achieved by dynamically selecting fragments and adding control dependencies on the fly.
Figure 9-16. PP3: Late Composition of Process Fragments
How: Supporting this pattern is straightforward in Serendip due to the declarative nature of behaviour modelling and the lightweight nature of process definitions. A Serendip process definition is a collection of references to existing behaviour units, and behaviour units can be declaratively specified in the organisation. During runtime, new process definitions can be defined by referring to these behaviour units (composing behaviours). Please refer to Sections 4.2.1 and 4.2.2 for more details.
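The idea of a process definition as a mere collection of behaviour references can be sketched as follows. This is an illustrative sketch only, not the Serendip language or API: the behaviour-unit and task names, pdSilv, compose() and tasks_of() are all hypothetical, and pdGold merely echoes the naming used elsewhere in this chapter.

```python
# Illustrative sketch (not Serendip): a process definition as a plain list
# of references to behaviour units declared in the organisation, so a new
# process can be composed at runtime from existing behaviours.

behaviour_units = {
    "bTowing":    ["tTow"],
    "bRepairing": ["tRepair", "tNotify"],
    "bTaxiing":   ["tProvideTaxi"],
}

def compose(behaviour_refs):
    """Compose a new process definition from existing behaviour units."""
    unknown = [b for b in behaviour_refs if b not in behaviour_units]
    if unknown:
        raise ValueError(f"undefined behaviour units: {unknown}")
    return {"behaviourRefs": list(behaviour_refs)}

def tasks_of(process_def):
    """All tasks that instances of this definition may perform."""
    return [t for b in process_def["behaviourRefs"] for t in behaviour_units[b]]

pdGold = compose(["bTowing", "bRepairing", "bTaxiing"])
pdSilv = compose(["bTowing", "bRepairing"])   # a new definition, composed at runtime

print(tasks_of(pdSilv))  # ['tTow', 'tRepair', 'tNotify']
```

The point of the sketch is that composing a new definition requires no new task wiring, only a new list of references.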
PP4: Multi-Instance Activity
Description: This pattern allows the creation of multiple instances of a respective activity during run-time.
Figure 9-17. PP4: Multi-Instance Activity
How: The solutions to support this pattern fall into two categories: (a) multiple activity instances are executed sequentially, and (b) multiple activity instances are executed concurrently.
(a). In the case of a sequential multi-instance activity, a Serendip task can be carried out multiple times, provided the pre-condition of that task is triggered multiple times. For example, if task F needs to be carried out multiple times, then its pre-condition F.pre needs to be triggered multiple times. This can be done by a rule (r1) that inspects the completion of F. For the example given in [85], rule r1 can trigger F.pre if one or more parcels remain to be scanned. The number of repetitions does not need to be known at design time.
(b). In the case of a concurrent multi-instance activity, it should be possible to let multiple instances of the same activity be performed concurrently. However, to perform multiple activities concurrently, there need to be multiple executing entities. In a real-world scenario, this is equivalent to hiring new labourers to speed up the completion of a construction project. In Serendip, a Role can be bound to a single player (executing entity) at a given time. As such, there is a requirement to dynamically add new roles (R1…Rn) to concurrently perform the same task (F). Dynamically adding new roles and tasks is supported by the Serendip runtime. Albeit appearing in different roles, these tasks (R1.F…Rn.F) serve the same purpose. Moreover, these tasks subscribe to the same event pattern via a new behaviour unit, so that all the tasks become doable at the same time and are consequently performed concurrently. The organiser can determine the number of role instances that need to be created based on runtime knowledge.
For both solutions, the number of repetitions or of concurrent executions (via R1…Rn) does not need to be known a priori.
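Case (a) can be sketched in a few lines. This is not Serendip code: run_scan() and the parcel example are illustrative assumptions; the point is that rule r1 re-triggers F.pre after each completion of F, so the number of repetitions need not be known at design time.

```python
# Sketch of case (a), sequential multi-instance execution (assumed names):
# task F repeats as long as rule r1, evaluated on F's completion, keeps
# re-triggering F.pre.

def run_scan(parcels):
    """Perform task F (scan one parcel) while rule r1 keeps re-triggering F.pre."""
    remaining = list(parcels)
    scans = 0
    # F.pre is initially triggered by the preceding task if there is work to do
    f_pre_triggered = len(remaining) > 0
    while f_pre_triggered:
        remaining.pop(0)            # task F: scan one parcel
        scans += 1
        # rule r1, evaluated on completion of F: re-trigger F.pre
        # iff one or more parcels remain to be scanned
        f_pre_triggered = len(remaining) > 0
    return scans

print(run_scan(["p1", "p2", "p3"]))  # 3
```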
9.1.3 Change Support Features
This section will evaluate how the Serendip framework facilitates the change support features [85].
F1: Schema Evolution, Version Control and Instance Migration
Description: Schema evolution and version control for process specifications should be supported. In addition, controlled instance migration might be required.
Note: Although Weber et al. have categorised these as a single feature, for clarity we will tackle the three aspects individually. The next three paragraphs provide the details of how Serendip supports schema evolution, version control and instance migration.
How (a. Schema Evolution): Serendip allows adaptations to be carried out on process definitions and behaviour definitions during the runtime. These changes are carried out on the models maintained in the MPF (see Model Provider Factory in Section 5.1). Process instances are enacted based on the currently available model of the process definition (and behaviours) as maintained by the MPF. By allowing changes to these models in the MPF, the Serendip framework supports schema evolution. For example, when the model corresponding to the pdGold definition is changed (e.g., to alter its condition of termination, CoT), an instance enacted after the change is committed uses the new CoT. When a schema is evolved, running instances of the old schema remain in the system (Option F1.2 of Fig. 19 in [85]) unless they are individually aborted or migrated.
How (b. Version Control): Version control can be supported by duplicating a process definition and applying the necessary changes to create a newer version without overriding the old one. The new and old versions can co-exist and continue to instantiate process instances. Users may develop their own naming conventions to identify the different versions of a process definition, e.g., pdGoldv1, pdGoldv2 (Option F1.3 of Fig. 19 in [85]). Serendip thus allows both schema evolution and version control, without explicit support for instance migration.
How (c. Instance Migration): The framework does not yet explicitly support instance migration. When a process instance is enacted, the process instance maintains its own runtime models; a change in the original process definition does not affect the runtime models maintained by the process instance at all. Newly enacted process instances will behave according to the new definition/schema, while already enacted process instances behave according to the schema at the time of their enactment. Nonetheless, individual process instances can be changed to reflect the changes of the original schema. This should be carried out case by case, as the progress made by each process instance can differ. Such an adaptation should be performed with consideration of the current state of the process instance (refer to Section 6.4 for state checks). It should be noted that script-based adaptations make it easy to perform adaptations on multiple instances: the same adaptation script can be used repeatedly on different process instance scopes (see also the support for F6: Change Reuse). As future work, such features can be used to automate instance migration. We refer to previous work [327] for further details on process instance migration.
F2: Support for Ad-hoc Changes
Description: In order to deal with exceptions, a PAIS must support changes at the process instance level.
How: Serendip supports ad-hoc changes. Individual process instances can be changed to cope with unexpected situations; for example, new tasks can be added, and existing tasks can be deleted or modified. Please refer to Chapter 6, which extensively discusses the support for ad-hoc modifications in Serendip.
F3: Correctness of Change
Description: The application of change patterns must not lead to run-time errors.
How: Changes to process definitions and instances are accompanied by an automated change validation process (Section 5.6). This validation ensures that the new schema or instance is sound and does not violate the business constraints. Soundness is ensured via the rules specified in [246], while constraint violations are identified via the two-tier constraint validation feature introduced in Section 4.2.3. Moreover, for process instance level changes, a state check is carried out to ensure that the process instance is in a correct state to realise the change (Section 6.4); for example, changing a property of a completed task is not allowed. These changes are carried out via the organiser interface. If a change is unsuccessful, the reply contains the reason for the failure.
F4: Traceability and Analysis
Description: All changes are logged so that they can be traced and analysed.
How: Supporting this feature is out of scope of this thesis.
F5: Access Control for Changes
Description: Change at the specification and instance levels must be restricted to authorised
users.
How: Supporting this feature is out of scope of this thesis.
F6: Change Reuse
Description: In the context of ad-hoc changes, “similar” deviations (i.e., combinations of one or more adaptation patterns) can occur more than once.
How: Serendip supports batch-mode adaptations via adaptation scripts (Section 6.3). A script allows several adaptations to be performed in a single transaction. The same adaptation script can be reused to apply adaptations to multiple instances via the operation scheduleScript(). This operation makes it possible for the same script to be executed upon multiple process instances and upon multiple conditions.
For example, suppose a script (script1) is defined to change a process instance pdGold011; the same script can then be reused to adapt another process instance pdGold012.
scheduleScript (“script1.sas”, “EventPattern”, “pdGold011: eTowFailed”);
scheduleScript (“script1.sas”, “EventPattern”, “pdGold012: eTowFailed”);
As another example, the same script (script1) can be executed due to two events (eTowFailed, eTowCancelled) of the same process instance (pdGold011).
scheduleScript (“script1.sas”, “EventPattern”, “pdGold011: eTowFailed”);
scheduleScript (“script1.sas”, “EventPattern”, “pdGold011: eTowCancelled”);
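The scheduling idea behind these calls can be sketched as follows. This is a hypothetical sketch, not the organiser interface (which is described in Appendix C): every function name and data structure below is an illustrative assumption about how (instance, event) triggers could map to a reusable script.

```python
# Hypothetical sketch of the idea behind scheduleScript(): mapping an
# (instance, event) trigger to one or more reusable adaptation scripts.

scheduled = {}

def schedule_script(script, trigger_type, trigger):
    """Register a script to run when the given event occurs in an instance.

    trigger_type mirrors the real signature but is unused in this sketch.
    """
    instance, event = [part.strip() for part in trigger.split(":")]
    scheduled.setdefault((instance, event), []).append(script)

def on_event(instance, event):
    """Return the adaptation scripts to execute for this event occurrence."""
    return scheduled.get((instance, event), [])

# The same script reused across two process instances:
schedule_script("script1.sas", "EventPattern", "pdGold011: eTowFailed")
schedule_script("script1.sas", "EventPattern", "pdGold012: eTowFailed")

print(on_event("pdGold011", "eTowFailed"))  # ['script1.sas']
```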
9.1.4 Results and Analysis
The above evaluation shows that Serendip supports all 13 adaptation patterns and all 4 patterns for predefined changes. However, the evaluation also acknowledges a lack of support for certain change support features, i.e., Controlled Migration (F1.c), Traceability and Analysis (F4) and Access Control for Changes (F5). As mentioned, such support is beyond the scope of this thesis. For example, controlling access to changes is handled in a separate security layer of the ROAD deployment infrastructure. Traceability and analysis need to address not only process-level changes but also changes at the level of the complete composite. Controlled migration needs to be handled at the management level (the organiser), which involves a decision-making process; the fundamental change operations and mechanisms described in this thesis can be used to facilitate such a controlled process migration.
One of the main characteristics of the Serendip language that facilitated supporting the patterns is the loosely coupled and declarative nature of task descriptions (Section 4.2.1), which allows changes to be performed by simply modifying the properties of a task. Moreover, behaviour-based process modelling (Section 4.2.2) improves the ability to easily introduce/remove/replace process fragments. The design of the Serendip runtime also aids in facilitating these patterns. For example, the event cloud allows new tasks (e.g., to support AP1) to dynamically subscribe to or publish events, while the support for dynamic business rules helps in dynamically changing the conditions that determine the course of progression of a process instance (e.g., to support AP13). Table 9-1 summarises the results of the above evaluation.
Table 9-1. Evaluation Summary - The Support for Change Patterns
Adaptation patterns (AP1-AP13):            all supported
Patterns for predefined changes (PP1-PP4): all supported
Change support features:                   F1.a, F1.b, F2, F3, F6 supported; F1.c, F4, F5 not supported
9.2 Runtime Performance Overhead
As shown earlier, the framework is implemented to provide the runtime adaptability required for service orchestration. The design and implementation of a service orchestration runtime that anticipates dynamic changes can compromise performance: such a runtime can be slower than an engine that is designed and implemented to statically load and execute orchestration descriptors. This performance cost can arise from the dynamic rule evaluations, dynamic message routing, dynamic data transformations and event processing capabilities described in the preceding chapters of this thesis. In order to evaluate the cost that providing such adaptability imposes on the performance of the service orchestration, we set up an in-lab experiment. In this experiment we compare the performance of (a) a WS-BPEL descriptor deployed in the Apache ODE orchestration engine [265] against (b) a Serendip descriptor deployed in the Serendip orchestration framework. The next sections provide the details of the experiment, followed by the observed results and an analysis.
9.2.1 Experimentation Set-up
We designed a test scenario based on the case study presented earlier. The scenario requires serving a client request with the involvement of two partner services, i.e., a Case-Officer Service and a Tow-Truck Service. In order to serve the client request, a service composition has to invoke these partner services. The service composition accepts a client request, which is a complaint about a car breakdown, and then invokes the partner services. Upon completion, the result is sent back in response to the client’s request. All the partner services are automated and do not require any human involvement. An overview is given in Figure 9-18. For this experiment we implemented two service composites.
a. Composite 1 (Comp1) is implemented using WS-BPEL and is deployed in the Apache ODE orchestration engine.
b. Composite 2 (Comp2) is implemented using the Serendip language and is deployed in the Serendip Orchestration Engine.
Figure 9-18. Performance Overhead Evaluation - Experiment Setup
The experiment intends to compare the average time taken (Δt) to serve the requests by Comp1 and Comp2. Let the average time taken by Comp1 be Δt1 and the average time taken by Comp2 be Δt2. The Percentage Performance Lost (PPL) will be calculated by the following formula.
PPL = ((Δt2 - Δt1) / Δt1) x 100 % ……………………………..….. (1)
We selected WS-BPEL as the comparison orchestration language as it is the de-facto standard for orchestrating Web services. We chose Apache ODE (v1.3.5) to run the BPEL scripts due to its stability, performance, wide usage and free license. Moreover, Apache ODE can easily be integrated with Apache Tomcat, which is the same servlet container used by the Serendip implementation. In this sense, both Comp1 and Comp2 use Apache Tomcat as the servlet container to accept SOAP messages over HTTP.
The experiment was run in a controlled environment, where all the partner services and the two composites were located on the same machine and in the same server. We did not use external Web services, to avoid the impact of inconsistent network delays and possible network failures on the overall execution and thereby on the results. Moreover, both composites invoke the same partner service implementations. The test was run on a 2.52 GHz Intel Core i5 with 4 GB RAM. The operating system was 32-bit Windows 7 Home Premium. The servlet container was Apache Tomcat 7.0.8.
9.2.2 Results and Analysis
First, we evaluated the average time taken by Comp1 (BPEL+ODE) to serve requests; the average was calculated over 100 request/response pairs. The average time taken by Comp2 (Serendip) was then calculated in the same way, also over 100 request/response pairs. Figure 9-19 plots the time taken to serve the 100 requests, and the results are shown in Table 9-2. According to the results, the average time taken by Comp1 (Δt1) is 210.28 ms, whereas the average time taken by Comp2 (Δt2) is 239.77 ms. That is, Comp2 is slightly slower than Comp1 in serving requests (Δt1 < Δt2), which is expected.
35 The source files are available at http://www.ict.swin.edu.au/personal/mkapuruge/files/perfEval.zip
Figure 9-19. Performance Comparison
Figure 9-20. Performance Comparison - Neglecting the First Request
(Both figures plot the time in milliseconds against the request number, with one series for ODE+BPEL and one for Serendip.)
Table 9-2. Results of Runtime Performance Overhead Evaluation
Response Times (Δt) | Comp1 (ODE+BPEL) | Comp2 (Serendip)
Average | Δt1 = 210.28 ms | Δt2 = 239.77 ms
Maximum | 1466 ms | 1139 ms
Minimum | 140 ms | 202 ms
∴ PPL = ((Δt2 − Δt1) / Δt1) × 100% = ((239.77 − 210.28) / 210.28) × 100% ≈ 14.02%
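The performance loss percentage (PPL) is the relative overhead of Comp2 over Comp1 and can be reproduced directly from the two averages:

```python
def performance_loss_percentage(dt1, dt2):
    """Relative overhead of Comp2 over Comp1: PPL = (dt2 - dt1) / dt1 * 100."""
    return (dt2 - dt1) / dt1 * 100.0

# Averages from Table 9-2 (milliseconds)
ppl = performance_loss_percentage(210.28, 239.77)  # ≈ 14.02 %
```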
Figure 9-19 also shows that both Comp1 and Comp2 take a significant amount of time to
serve the first request, but serve subsequent requests much faster. This behaviour is due to the
additional initialisations carried out within each composite upon the first request. This
performance hit can be neglected in practical scenarios, as initialisation is
a one-time activity. For clarity, Figure 9-20 shows the response times without the first request.
Note that these results were taken in a controlled environment where network delays were
avoided; all the partner services, composites and clients reside on the same machine. In a real-
world service orchestration environment, network delays also add to the response times (Δt1
and Δt2). Consequently, the PPL can be insignificant compared to the network delays.
9.3 Comparative Assessment
In Chapter 2 we discussed a set of key requirements that should be fulfilled by a service
orchestration approach. In Chapter 3, we then analysed a number of existing approaches and
how they address these key requirements. In this section we evaluate how the Serendip
service orchestration approach fulfils these key requirements.
9.3.1 Flexible Processes
Flexibility to change defined business processes is paramount in providing adaptive service
orchestration support. In Section 9.1, we already provided an exhaustive evaluation of the
amount of process flexibility exhibited by the Serendip framework, based on the set of change
patterns proposed by Weber et al. [84, 85].
In the Serendip approach, the flexibility to adapt a service orchestration is achieved in
two dimensions.
1. The Language and Meta-Model (Chapter 4).
2. The Runtime Design (Chapters 5 and 6).
As shown in Section 4.2.1, in the Serendip language the task dependencies are not tightly
coupled. Instead, task dependencies are specified in a loosely coupled manner via events (or
event patterns). This improves the flexibility of modifying processes, because tasks can be
easily added or removed, and the dependencies of existing tasks easily modified. This
flexibility is achieved at both the process definition level and the process instance level,
to support both the evolutionary and the ad-hoc changes that a service orchestration can be subjected to.
In addition, the Serendip meta-model decouples the structure of the organisation from the
processes. Again, events are used to serve this purpose. Processes do not directly depend on
the defined interactions; rather, they use the events that are the outcome of event
interpretation (Section 4.6). With this decoupling of processes from structure,
many processes can be defined over the same structure and continuously optimised.
Figure 9-21. Two-Dimensional Decoupling via Events
Such process flexibility is also well supported by the design of the Serendip runtime.
The Serendip runtime enacts such declarative business processes via an event-driven
enactment engine. As shown in Section 5.2, task coordination is done via a publish-
subscribe mechanism, where the subscribers to specific event patterns are notified when the
events are published in an event cloud. This decouples the publishers and subscribers of a
composite. Most importantly from the flexibility point of view, this allows late modifications to
task dependencies in order to deviate running process instances, as illustrated in Figure 9-22.
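This publish-subscribe coordination can be illustrated with a minimal sketch. This is not the Serendip engine itself (which is Java-based and supports richer pre- and post-event patterns); the task and event names are hypothetical, and a task here simply fires once all events in its pre-event set have been published. `rewire` mimics a late modification to a task's pre-event pattern.

```python
class EventCloud:
    """Minimal event cloud: tasks subscribe with a set of pre-events and
    fire (once) when all of those events have been published."""

    def __init__(self):
        self.published = set()   # events seen so far
        self.pre = {}            # task name -> set of required pre-events
        self.actions = {}        # task name -> callable to run when it fires
        self.fired = set()       # tasks that have already fired

    def subscribe(self, task, pre_events, action):
        self.pre[task] = set(pre_events)
        self.actions[task] = action

    def rewire(self, task, pre_events):
        """Late modification: replace a task's pre-event pattern at runtime."""
        self.pre[task] = set(pre_events)

    def publish(self, event):
        self.published.add(event)
        for task, required in self.pre.items():
            if task not in self.fired and required <= self.published:
                self.fired.add(task)
                self.actions[task]()
```

Because a task's pre-event set is data held by the cloud rather than a hard-wired link, changing it (even for a single running instance) does not disturb publishers or other subscribers.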
In addition, the Serendip runtime is an extension of the ROAD framework. The ROAD
runtime provides adaptability to the composite structure: roles and contracts can be added,
removed or modified, and the service endpoints or player bindings can be altered at
runtime. This provides the flexibility required for a service orchestration to change
its structural aspects as well.
Figure 9-22. Late Modifications to Tasks of a Process Instance
9.3.2 Capturing Commonalities
Capturing commonalities is essential to avoid redundancy and improve maintainability
in software systems. Redundancy in service orchestration descriptions not only makes
modifications inefficient, but can also make them error-prone and inconsistent. This requirement
has been addressed by two features of Serendip process modelling:
1. Behaviour reuse: A common behaviour unit defined in the organisation can be reused
by many process definitions (Section 4.2.2). As shown in (#1) of Figure 9-23, the
commonalities of the process definitions PD1 and PD2 are captured by behaviour
unit B2. A modification to a behaviour unit is automatically reflected in all the
process definitions that reuse it. This improves the agility and consistency of modifications
across the multiple process definitions of a service composition.
2. Behaviour specialisation: A parent behaviour unit can capture the common behaviour
of a behaviour hierarchy (Section 4.2.4). As shown in (#2) of Figure 9-23, the
commonalities of the behaviour units B3.1 and B3.2 are captured in the parent
behaviour unit B3. The child behaviour units provide only the variations (Section
9.3.3) and inherit the common behaviour captured in the parent behaviour unit.
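The two features above can be sketched roughly as follows, under the simplifying assumption that a behaviour unit is just a named map of tasks (the actual Serendip constructs are richer). Process definitions reference, rather than copy, behaviour units, so a change to a shared unit is visible in every referencing definition, and a child unit specifies only the varied tasks.

```python
class BehaviourUnit:
    """A reusable unit of organisational behaviour: a named set of tasks,
    optionally specialising a parent unit (inheriting its common tasks)."""

    def __init__(self, name, tasks, parent=None):
        self.name = name
        self.own_tasks = dict(tasks)   # only the tasks this unit defines itself
        self.parent = parent

    def tasks(self):
        inherited = self.parent.tasks() if self.parent else {}
        return {**inherited, **self.own_tasks}   # child overrides/extends parent


class ProcessDefinition:
    """A process definition assembles (references, not copies) behaviour units."""

    def __init__(self, name, behaviours):
        self.name = name
        self.behaviours = behaviours

    def tasks(self):
        merged = {}
        for b in self.behaviours:
            merged.update(b.tasks())
        return merged
```

Modifying a shared unit's `own_tasks` is then automatically reflected in every process definition that references it, while a specialised child unit carries only its variations.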
Figure 9-23. Capturing Commonalities in a Behaviour Hierarchy
The ability to capture commonalities is important for developing SaaS applications [47].
The tenants of a SaaS application have common interests, and providing separate isolated
process definitions can cause maintainability issues. Instead, it is in a SaaS vendor’s best interest to
increase the reuse of the application and thereby exploit economies of scale [89, 92].
Contemporary service orchestration approaches need to improve their ability to capture
commonalities if service orchestrations are to be truly usable in multi-tenanted SaaS
applications.
9.3.3 Allowing Variations
While the support for Behaviour Specialisation allows capturing commonalities, it also allows
specifying variations in organisational behaviour. An existing behaviour unit can be
specialised to provide a varied behaviour. The benefit of using behaviour specialisation is that it
allows specifying variations without introducing unnecessary redundancy. As mentioned
earlier, only the additional or varied part of the behaviour is specified in the specialised, or
child, behaviour unit. The commonalities are still captured at a higher level of the behaviour
hierarchy. Allowing such variations in service orchestration is important for multi-tenanted
SaaS application development [47]. A SaaS application needs to serve multiple tenants or
multiple groups of tenants via the same application instance (service composite) [89]. These
tenants or tenant groups may have common business requirements, but there can be slight
variations in the manner in which the processes should be implemented.
Such variations can also be unforeseen, appearing as the business evolves. The
Serendip orchestration framework allows existing behaviour units to be specialised at
runtime. The SaaS vendor can dynamically switch from the parent behaviour unit to the new
specialised behaviour unit at runtime to provide for such unforeseen variations (Section 4.2.4).
This advantage of behaviour specialisation can be highlighted in contrast to other techniques
such as Aspect-Orientation [181, 190], variability descriptors [42], variability points [67] and
templates [328]. All these techniques assume there are fixed and volatile parts that can be
pre-identified. For example, the aspects/rules viewed via point-cuts in AO4BPEL [190]
represent the volatile part, while the abstract process that defines the point-cuts represents
the fixed part. Similarly, in [42] the variability points represent the volatile part while the
provided template represents the fixed part. With behaviour specialisation, in contrast, there is no
fixed part that restricts the amount of variability that can be exhibited at runtime. The
properties of a behaviour unit can be specialised to facilitate unforeseen
variations without requiring any pre-arrangement of volatile parts.
9.3.4 Controlled Changes
The concept of two-tier constraints has been presented in Section 4.4. The primary objective of
this two-tier constraint concept is to control changes to organisational behaviour.
Constraints are specified at both the behaviour level and the process level. The behaviour-
level constraints specified in behaviour units ensure that changes to organisational behaviour do
not violate collaboration objectives. On the other hand, the process-level constraints specified in
process definitions ensure that the objectives/goals of a business process are safeguarded amidst
modifications to the collaborations. The validation is carried out by formally mapping the
organisational behaviours to a Petri-Net model (Section 5.6). The Petri-Net model is then
validated via the integrated Romeo model checker [258].
Due to the two-tier constraints and the modularity provided by the behaviour modelling, only
the relevant minimal set of constraints is used to validate a modification. The relevancy is
captured by the interconnectedness of the process definitions and the behaviour units (Section
4.2.3). This is an improvement over the practice of specifying constraints globally. The use
of two-tier constraints provides better modularity for specifying business constraints. In addition, it
helps avoid unnecessary restrictions on runtime changes, because CSmsc ⊆ CSglobal.
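The minimal-constraint-set idea can be sketched as follows. The data layout here is hypothetical; the point is that only the modified behaviour's own constraints, plus those of the process definitions referencing it, need to be validated, so the collected set is a subset of the global constraint set (CSmsc ⊆ CSglobal).

```python
def minimal_constraint_set(changed_behaviour, process_definitions):
    """Collect only the constraints relevant to a change: the modified
    behaviour's own (behaviour-level) constraints plus the (process-level)
    constraints of the process definitions that reference that behaviour."""
    relevant = set(changed_behaviour["constraints"])
    for pd in process_definitions:
        if changed_behaviour["name"] in pd["behaviours"]:
            relevant |= set(pd["constraints"])
    return relevant
```

For example, a change to behaviour unit B2 would only pull in B2's constraints and those of the process definitions that use B2, leaving the constraints of unrelated tenants' processes out of the validation.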
The approach also pays special attention to multi-tenanted environments where multiple
tenants are served by a single application instance [47, 89]. Tenants tend to request
customisations to their packages or business processes as if they were the sole user of the
system [45]. Changes requested by a particular tenant should be accommodated where
possible, but in a controlled manner, so that the requirements expected by other tenants are not
compromised. Again, this requirement is facilitated by the two-tier constraint
specification mechanism. The interconnectedness of behaviour units and tenant processes
determines whether a change to the single application instance can be allowed.
9.3.5 Predictable Change Impacts
A service composition is a collaborative environment where a number of partner services
are collectively used to achieve a business objective. In multi-tenant environments, multiple
processes need to be defined upon the same service composition to suit different users/tenants
[47, 89, 92]. Changes carried out to facilitate a change for a tenant or an underlying
collaboration have an impact on other users/tenants and underlying service providers. In
addition, the adaptations of an adaptive service orchestration can be frequent. Manually testing
and verifying those adaptations and analysing their impact can be difficult, and perhaps
impossible in practice.
The work of this thesis has automated such impact analysis, predicting the possible impacts
on collaborations and user/tenant goals. Such predictability is achieved due to two factors.
1. Formal Validation: A formal validation is carried out prior to change
realisation (Section 6.3).
2. Explicit Process-Behaviour Linkage: The linkage between the defined process definitions
and the behaviour units is explicit (Section 4.2.3).
The formal validation based on Petri-Nets helps to automate the validation process. This
allows identifying whether a change is valid prior to the change being realised in the
orchestration. Importantly, identifying the potentially affected process definitions or behaviours
is possible due to the explicit representation of the process-behaviour linkage, which is used by the
formal validation mechanism. The impact upon tenants due to a proposed modification in an
underlying collaboration is identified via the linkage between the process serving the
tenant/customer and the behaviour unit representing the collaboration. The reverse is also true:
the impact upon underlying collaborations, due to a change proposed from a
tenant’s point of view, can be identified via the same explicit process-behaviour
linkage.
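Assuming the process-behaviour linkage is available as a simple mapping (an illustrative simplification of the explicit linkage described above), both directions of impact analysis reduce to lookups over that mapping:

```python
def affected_process_definitions(changed_behaviour, linkage):
    """Forward impact: which process definitions (and hence tenants) reference
    the changed behaviour unit?  `linkage` maps a process definition name to
    the set of behaviour unit names it references."""
    return {pd for pd, behaviours in linkage.items() if changed_behaviour in behaviours}


def affected_behaviours(changed_pd, linkage):
    """Reverse impact: which behaviour units (collaborations) does a changed
    process definition touch?"""
    return set(linkage.get(changed_pd, ()))
```

Process definition and behaviour names below are hypothetical; in the framework this information feeds the formal (Petri-Net-based) validation rather than being an end in itself.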
9.3.6 Managing the Complexity
The complexity of a service orchestration may reflect the complexity of the underlying business:
a business with complex business processes may lead to a complex service orchestration. For
example, in a service orchestration that supports business processes in a multi-tenanted
environment, the complexity can be relatively high, primarily because a single application
instance needs to serve multiple tenants. In addition, the support for adaptability can
also increase the complexity, due to numerous runtime changes that are not expected at
design time. An adaptive architecture that leads to increasingly complex service orchestration
systems can gradually become unusable. Therefore, the architecture or design of the service
orchestration needs to pay special attention to managing the complexity.
A service orchestration designed as per the Serendip architecture manages complexity in two
ways.
1. By applying ROAD concepts to adaptive service orchestrations (see 1.1 – 1.4
below).
2. By applying the concepts introduced as part of process support (see 2.1 – 2.4 below).
Serendip concepts are grounded upon Role Oriented Adaptive Design (ROAD), which
provides several concepts for managing the complexity of a composite application [79].
Hence, Serendip also benefits from the ROAD concepts that help manage complexity. Four
main ROAD concepts are used to reduce the complexity of a Serendip
orchestration.
1.1 Separation of Management and Functional System: ROAD separates the management
system from the functional system (Section 6.2.1). The controller is always external
to the composite being controlled. A composite only specifies how the
functional system is designed, separating out the complexity of the controlling entity.
Mapping this to Serendip, the core orchestration is kept simple and independent from
the adaptation logic, which specifies how the orchestration should be adapted. In
addition, the runtime also reflects this separation, via a clear division between the functional
system (Enactment Engine, MPF) and the management system (Adaptation Engine,
Organiser role, Validation Module).
1.2 Indirection among Changing Entities: This provides entities the freedom to function
irrespective of changes to other entities. ROAD views a software system as entities that are
connected via adaptive connectors. In the service orchestration domain, this means that
the individual services are not connected directly, but via adaptive contracts
providing the required indirection. Adaptability is seen as a property of the
relationships (contracts) rather than of the roles themselves. This advantage of ROAD is naturally
reflected in Serendip service orchestration as well (Section 4.1.1). The individual
partner services and their dependencies are not captured directly in a process flow.
Rather, the actual services that perform tasks are represented by roles, and their
interactions are captured by contracts.
1.3 Support for Heterogeneity in Task Execution: ROAD provides a platform-independent
mechanism for interactions among heterogeneous entities that reside in a distributed
environment. A player’s implementation is independent of that of the composite. Adopting
this concept, a task defined in a Serendip core orchestration does not specify how the
player should perform the task. Serendip processes are grounded upon the abstract
structure formed by contracts and roles, rather than directly upon possibly
heterogeneous individual services. This design provides a cleaner separation for
addressing process-level concerns without concern for the heterogeneity of the
underlying services.
1.4 Recursive Composition: ROAD allows decomposing a larger service composition into
a manageable number of smaller composites [79]. All the composites are mutually
opaque and do not attempt to micro-manage each other. Instead, each composite is
self-managed with its own management system. In this sense, a larger service
orchestration can be decomposed into a number of relatively smaller orchestrations
(Section 4.3.3). Each composite can have its own structure and processes,
without requiring the definition of a single larger service orchestration.
Apart from reusing ROAD concepts and applying those concepts in a service orchestration
environment, Serendip also introduces four novel concepts to manage the complexity in an
adaptive service orchestration.
2.1 Modular Behaviour: According to Parnas [329], “the effectiveness of modularisation
is dependent upon a criteria that divides a complete system into several modules”.
The modularity provided by behaviour-based process modelling helps to divide a
complete service orchestration into several modular behaviour units. Relevant tasks,
and the constraints upon them, can be brought together and defined as a single behaviour
unit (Section 4.2.2). The existence of a behaviour unit is independent of the processes
that use it. A complete service orchestration to serve a consumer demand is built by
assembling these modularised behaviour units.
2.2 Avoiding Unnecessary Redundancy: The reusability of common organisational
behaviours helps to avoid unnecessary redundancy, which increases the complexity of
managing the composite: modifications need to be carried out repetitively at multiple
locations in separate processes. With behaviour-based process modelling, the
commonalities are captured in reusable behaviour units. This avoids the need to change
multiple process definitions; instead, a common behaviour unit is modified and the change
is subsequently reflected in all the process views automatically (Section 4.2.2).
2.3 Separation of Control- and Data-Flow Concerns: Serendip isolates the control-flow
and data-flow concerns (Section 4.1.2). The control-flow concerns are handled in the
behaviour layer of the architecture, whereas the data-flow concerns are handled in the
contractual interactions layer, as shown in Figure 4-6. The process layer is driven by
the events triggered in the interaction layer. The task dependencies in behaviour units
are specified as patterns of events. This allows changing the interactions with
minimal effect on the process layer, and vice versa.
2.4 Separation of Internal- and External-Interactions: The membranous design (Section
4.2.5) isolates the internal interactions from the external interactions. This separation
allows both internal and external interactions to evolve independently. For example,
when a player (service) is swapped with another, or an existing player changes its
interface, the external interactions tend to change. This can be facilitated by adapting
the transformation (Section 5.4.1) without requiring any modifications to the internal
interactions (unless there is a requirement for a new source of data, which is a separate
issue). The same isolation applies in the reverse direction: the internal interactions may
change over time due to contractual changes, yet the transformation can be adapted
without requiring any modifications to the external interactions.
9.3.7 Summary of Comparison
Table 9-3 presents a summary of the above analysis. Table 9-4 compares Serendip with other
approaches in terms of the given criteria.
Table 9-3. Serendip Features to Satisfy Key Requirements – A Summary
Key Requirement | Features of Serendip that satisfy it
Flexibility | Language and meta-model promoting loose coupling; event-driven publish-subscribe mechanism
Capturing Commonalities | Behaviour reuse; behaviour specialisation
Allowing Variations | Behaviour specialisation
Controlled Changes | Two-tier constraints
Predictable Change Impacts | Formal validation; explicit process-behaviour linkage
Managing the Complexity | Separation of management and functional systems; indirection among changing entities; support for heterogeneity in task execution; recursive composition; modular behaviours; avoiding unnecessary redundancy; separation of control- and data-flow concerns; separation of internal and external interactions
Table 9-4. Results of the Comparative Assessment
Approach | Improved Flexibility | Capturing Commonalities | Allowing Variations | Controlled Changes | Predictable Change Impacts | Managing Complexity
Robust-BPEL[223] + - - - - -
TRAP-BPEL[140] + - ~ - - -
Robust-BPEL2[65] + - - - - -
eFLow [156] + - - + - -
Canfora et al.[157] + - ~ - - -
WASA[159] [160] + - - - - -
Chautauqua[163] + - - - - -
WIDE[164, 165] + - - - - -
ADAPTflex [167] + - - + - -
Fang et al.[57] + - - - - -
SML4BPEL [170] + - - - - -
Yonglin et al.[141] + - - - - -
Rosenberg[179] + - - - - -
Graml et al.[181] + ~ + - - -
rBPMN[184] + ~ ~ + - -
AO4BPEL[68] + ~ ~ ~ - +
MoDAR[149] + ~ ~ ~ - +
Courbis[191] + - - ~ - -
VieDAME[192] + - - ~ - ~
AdaptiveBPEL[194] + - - ~ - ~
VxBPEL[67] + ~ ~ ~ - +
Karastoyanova et al.[66] + ~ ~ - - +
Padus[193] + ~ ~ - - +
Geeblan 2008 [203] + ~ ~ - - -
Lazovic et al. [205] + ~ ~ ~ ~ -
Mietzner[42] + ~ ~ - - -
PAWS 2007 [209] + - - - - -
Discorso 2011 [210] + - - - - -
Zan et al.[211] + + + ~ ~ ~
Mangan et al. [213] + - - + ~ -
Rychkova et al. [215] + - - + ~ -
DecSerFlow[71] + ~ ~ + - -
Condec[73] + ~ ~ + - -
Guenther [69] + - - + - -
Serendip + + + + + +
+ Explicitly Supported, ~ Supported to a Certain Extent, - Not Supported
9.4 Summary
In this chapter we evaluated the Serendip approach for modelling and enacting service
orchestrations on three different fronts.
Firstly, the adaptability of the approach was systematically evaluated against the change
patterns and change support features introduced by Weber et al. [84, 85]. The results of this
evaluation showed that Serendip supports all 13 adaptation patterns and the 4 patterns for
predefined changes. However, Serendip falls short in supporting 3 of the 6 change support features.
Secondly, the runtime performance overhead due to the support for adaptability was evaluated
by performing an in-lab experiment comparing the Serendip runtime against the
Apache ODE runtime, which executes static processes. The experiment showed that there is a
performance overhead. However, this overhead is negligible when compared to the
benefits of adaptability that the Serendip runtime offers. We expect to further improve
performance by refining the implementation of the Serendip runtime.
Thirdly, the approach was systematically assessed against the key requirements
presented in Chapter 2. The comparative assessment showed that, compared to the other
approaches, Serendip provides better support in fulfilling the key requirements. The assessment
also highlighted the features of the approach that support these key requirements.
Chapter 10
Conclusion
We have in this thesis presented an approach for realising adaptive service orchestration
systems. The approach adopts an organisation-centric viewpoint of a service orchestration, in
contrast to the process-centric viewpoint. A service orchestration is modelled as an adaptive
organisation that facilitates the changes in both the processes and partner services. The
motivation for this new perspective stems from the need to address the limitations of existing
approaches as well as the new challenges faced by the design of service orchestration systems in
terms of adaptability. The outcome of this study is a novel approach to the design and enactment
of adaptive service orchestrations.
In this chapter we first highlight the important contributions this thesis makes to the field of
adaptive service orchestration, and then discuss some related topics for future investigation.
10.1 Contributions
The existing norm of using process- or workflow-based wiring of services to define a service
orchestration has major limitations in achieving adaptability, for two reasons. Firstly, processes
or workflows define a ‘way’ of achieving a business goal and are short-lived compared to the
business goal itself. A service composition defined with such a process-centric viewpoint is bound to
fail when the ‘way’ is no longer valid or efficient and there is an ‘alternative/better way’ to
achieve the business goal. Secondly, there are many ‘simultaneous ways’ to provide services
using the same composition of services. For these two reasons, it should be possible to
define many processes, as well as change them accordingly, in a service composition. It follows
that, rather than a process, the business goals should take centre stage in a service
orchestration.
In addition, the existing norm is to ground a service orchestration upon a concrete set of
services. The orchestration logic is defined to support the interactions of the currently bound concrete
services, and there is no sufficient abstraction to explicitly represent the service relationships.
However, a service orchestration usually operates in a distributed and volatile environment
where participating services can appear and disappear. Grounding a service orchestration
directly upon such volatile services makes the core service orchestration inherently brittle in the
long run. It follows that, rather than defining a service orchestration based on a concrete set of
services, it should be possible to define it upon an abstract structure that
captures the relationships among services.
In the Serendip approach of this thesis, we have consequently proposed to have an
organisational structure that captures the service relationships among collaborators to provide an
extra layer of abstraction between the processes and the underlying services. This relationship-
based organisational layer provides the necessary stability for defining and evolving business
processes over a volatile base of partner services. In fact, different partner services can be
bound, swapped and unbound to the organisation as the base of partner services and the
organisational goals change. Furthermore, different processes can be defined on top of the same
organisation of services for customers or customer groups which have similar and yet different
requirements, achieving the managed sharing of partner services in an effective manner.
We have proposed a new meta-model and a language for defining service orchestrations
according to the organisation-based approach in Serendip. The meta-model is a further
extension of the Role Oriented Adaptive Design (ROAD) meta-model. ROAD provides the
capability of modelling an adaptive software system as having an organisation structure with
the key concepts of roles, contracts and players (services) [81]. In Serendip, we extend ROAD
to achieve adaptable process support with a number of further key concepts or constructs,
including tasks, behaviour units and process definitions. These new concepts are naturally based
on or relate to the ROAD concepts and they together form the basis for defining adaptable
service orchestrations. Serendip uses a declarative, event-driven language to specify the
dependencies among tasks and to define organisational behaviours and processes in a loosely
coupled manner. This provides the greater flexibility required for modifying Serendip processes
and realising both evolutionary and ad-hoc change requirements. We have analysed the amount
of flexibility offered by Serendip in terms of the evaluation criteria proposed by Weber et
al. [84].
The Serendip language provides explicit support for capturing commonalities and
allowing variations in business processes. In doing so, special attention has been paid to avoiding
unnecessary redundancy in process definitions and achieving better maintainability. This has
been achieved by behaviour-based process modelling and behaviour specialisation. Variations
of the same behaviour unit can co-exist within an organisation, having hierarchies of
behaviours. Process definitions refer to and reuse these behaviour units to capture the
commonalities while achieving variations. Such capabilities are beneficial when different
business processes are defined over the same set of resources/services. In online marketplaces,
it is necessary to quickly cater for the diverse requirements of service consumers. Especially in
the Software as a Service (SaaS) environment, the same application instance and its underlying
partner services/resources can be reused to cater for the requirements of tenants who have
similar and yet different requirements.
A service orchestration is an IT realisation of a business. The business goals expected by the
service aggregator as well as its collaborators need to be duly represented in the design of the
service orchestration. Without such a representation, it is difficult to assess the impact of
proposed changes to processes and thereby control them. Uncontrolled changes to a service
orchestration can easily damage the objectives of its underlying business. Therefore, for a given
process change, it should be possible to identify a safe boundary within which the change can be
realised without any impact on the goals of the aggregator or its collaborators. However, this
boundary should not be unnecessarily restrictive, preventing even permissible changes and
thereby hindering business opportunities. In Serendip, we use the modularity provided by
the process definitions and behaviour units to identify the minimal set of constraints that forms
the safe boundary for a modification.
A novel runtime environment for Serendip has been developed to meet the runtime
adaptability requirements. Instead of executing statically loaded process definitions, the runtime
is designed to support changes to both process definitions and running process instances. The
runtime adaptability has been achieved by adopting a number of key concepts, including Event
Cloud, Publish-Subscribe driven process progression, and Dynamic Workflow Construction.
Furthermore, the Serendip runtime uses clear separation of concerns in its design, including a
membranous design to separate the internal and external data-flows, business rules integration
to separate message interpretation and the control-flow, and the Organiser concept to separate
the functional and management concerns.
The Serendip adaptation management system provides a design and associated mechanisms
to systematically manage the runtime adaptation of service orchestrations. Both evolutionary
and ad-hoc adaptations can be carried out during the runtime without requiring a system restart.
In order to ensure the atomicity, consistency, isolation and durability properties when carrying
out multiple adaptation actions, batch-mode adaptation has been introduced. A scripting
language has been defined and an interpreter has been implemented as part of the adaptation
management framework, to ensure the transactional properties of batch-mode adaptations.
Furthermore, to ensure the goals of the consumers, aggregator and collaborators are protected
during adaptation, a validation mechanism based on the aforementioned minimal set of
constraints concept, has been introduced to check the validity of the changes before the
adaptations are applied to the service orchestration.
The capabilities of Axis2 [298, 299] have been extended in ROAD4WS, providing the
deployment environment for Serendip. The design of ROAD4WS [261] allows users to deploy
an adaptive service orchestration and enact adaptive business processes using the Serendip
runtime. The Serendip runtime works hand-in-glove with the Axis2 runtime to allow message
mediation based on enacted processes. This has been done by extending the Axis2 modular
architecture without requiring any modifications to the Axis2 code base. Such a design ensures
that this work is not locked to a particular version of Axis2. Moreover, the work can be
integrated with Axis2 runtime in production environments without any interruptions.
Overall, Serendip addresses the key requirements for the design of novel service orchestration
systems: increased flexibility, capturing commonalities, allowing variations, controlled
change, automated impact analysis and managed complexity. This thesis has presented how
these requirements are addressed in terms of three essential aspects of an adaptable service
orchestration: flexible modelling, adapt-ready enactment and an adaptation management
mechanism.
10.2 Future Work
The framework can be enhanced further in several directions. The following possibilities are
left as future work.
Organiser Player: To provide self-adaptiveness for a service orchestration, the
organiser player entity needs to be modelled as an automated software system that
decides how to adapt. An organiser player with such decision-making capabilities,
integrated with the framework, would close the feedback loop that is necessary in
self-adaptive software design [260]. For example, heuristic, control-theoretic or
rule-based strategies could be explored to make such decisions automatically, with
little or no human involvement. We
note that such self-adaptive systems with automatic decision-making capabilities are
an active area of research on their own and may be possible only for some selected
well-understood application situations.
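As a concrete illustration of the rule-based strategy mentioned above, a minimal, assumed sketch of an organiser decision step is given below. The condition/action rules, observation names and returned action strings are all hypothetical, not part of the framework; the point is only that monitored observations feed a decision function whose output would drive adaptation, closing the feedback loop.

```python
# Hypothetical rule-based organiser decision step: monitored observations are
# matched against condition -> action rules; matching actions would then be
# executed as adaptations, closing the monitor-decide-adapt feedback loop.

def decide(observations, rules):
    """Return the adaptation actions whose conditions hold for the observations."""
    return [action for condition, action in rules if condition(observations)]

rules = [
    # illustrative rules over the RoSAS case study vocabulary
    (lambda o: o.get("tow_failures", 0) > 3, "switch to bTowingAlt"),
    (lambda o: o.get("avg_repair_hours", 0) > 48, "bind a second Garage player"),
]

actions = decide({"tow_failures": 5, "avg_repair_hours": 12}, rules)
```

More sophisticated strategies (heuristic or control-theoretic) would replace the decision function while keeping the same loop structure.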
Process Instance Migration: The current adaptation management design does not
explicitly support automated process instance migration. Upon an evolutionary
change, by default only subsequently created process instances follow the changed
process definition; there is no automated mass migration of existing instances to the
new definition. However, existing process instances can be migrated (or adapted)
case-by-case via adaptation scripts. As future work, such mass process instance
migration strategies could be investigated and automated to further
enhance the capability of the framework.
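One possible mass-migration strategy can be sketched as follows. This is an assumed design, not the framework API: an instance migrates to the new definition only if every task it has already executed still exists in the new definition; otherwise it is retained on the old definition to finish there.

```python
# Assumed sketch of a mass process-instance migration strategy: migrate an
# instance only when its execution history is compatible with the new
# definition; otherwise let it complete under the old definition.

def migrate_instances(instances, new_definition):
    migrated, retained = [], []
    for inst in instances:
        # compatible if the new definition still contains every executed task
        if set(inst["history"]) <= set(new_definition["tasks"]):
            inst["definition"] = new_definition["name"]
            migrated.append(inst)
        else:
            retained.append(inst)
    return migrated, retained

new_def = {"name": "pdGold.v2", "tasks": ["tAnalyze", "tTow", "tDoRepair", "tPayGR"]}
instances = [{"id": "p1", "history": ["tAnalyze"], "definition": "pdGold"},
             {"id": "p2", "history": ["tAnalyze", "tNotify"], "definition": "pdGold"}]
migrated, retained = migrate_instances(instances, new_def)
```

Richer strategies would also consider in-flight tasks and pending events, but the compatibility check above captures the core decision.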
Impact Analysis of Structural Changes: Currently, the impact analysis capability of
the validation module, which is based on a Petri-Net tool, is limited to process-level
changes, in line with the focus of this thesis. However, it needs to be further extended
to analyse the impact of structural changes (roles/contracts) on the orchestration.
Serendip uses ROAD to perform the structural changes of a service orchestration, but
ROAD does not have such an impact analysis capability for structural changes at present.
A formal validation mechanism needs to be built by further extending the Serendip
validation module in order to support this requirement.
Traceability and Analysis: Currently the framework provides an Apache Log4j
(http://logging.apache.org/log4j/) based logging mechanism to create execution logs.
However, this is not adequate for more sophisticated traceability and analysis of
change operations [85]. These logs need to be annotated with more semantics, such as
the reasons for and context of a change [330]. The framework could be further
improved by integrating such a logging and change mining mechanism to enable more
sophisticated traceability and analysis [284, 331] of Serendip processes.
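The kind of semantically annotated change record envisaged above can be sketched as a simple data structure. The field names and example values are hypothetical; the point is that each logged adaptation carries its reason and context alongside the operation, so change mining can later group and explain adaptations.

```python
# Sketch of a semantically annotated change-log record (assumed structure):
# the operation plus the reason and context of the change, not just the event.
from dataclasses import dataclass, field
import time

@dataclass
class ChangeRecord:
    operation: str    # e.g. "addTaskRef"
    target: str       # the affected entity
    reason: str       # why the change was made
    context: dict     # e.g. tenant, triggering process instance
    timestamp: float = field(default_factory=time.time)

log = [ChangeRecord("addTaskRef", "bTowing.tAlertTowDone",
                    reason="tenant requested tow-completion alerts",
                    context={"tenant": "gold", "instance": "pdGold#42"})]
```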
Access Control: In Serendip, only the organiser is authorised to perform
adaptations. Therefore, access control needs to be embodied in the design of the
organiser player, so that different levels and granularities of access to change
operations can be granted. From an implementation point of view, the Apache Rampart
[323] module could be easily integrated with the framework for this purpose, given the
use of WS-* compliant standards and Apache Axis2 in the current
implementation.
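The notion of different levels and granularities of access to change operations can be sketched as a simple permission check. The operation names and level thresholds below are illustrative assumptions, not the framework's; in practice such a check would sit in the organiser player, in front of the adaptation interface.

```python
# Hypothetical graded access control for adaptation operations: each operation
# requires a minimum permission level; unknown operations are denied by default.

REQUIRED_LEVEL = {            # illustrative operation -> minimum level mapping
    "updateProperty": 1,      # e.g. tuning a performance property
    "addTaskRef": 2,          # process-level change
    "removeRole": 3,          # structural change
}

def authorise(caller_level, operation):
    required = REQUIRED_LEVEL.get(operation)
    if required is None or caller_level < required:
        return False          # deny unknown operations and insufficient levels
    return True
```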
Improvement of Tool Support: The Serendip Modelling Tool (Section 7.3.1)
supports the development of Serendip models and automatically generates the ROAD
descriptors (in XML) and other related artifacts. This helps software engineers to
quickly define Serendip processes. However, the tool could be further enhanced by
integration with ROADdesigner
(http://www.swinburne.edu.au/ict/research/cs3/road/roaddesigner.htm), a graphical
modelling tool for the ROAD framework that is still under construction. The Serendip
monitoring and adaptation tools also help to manage a service orchestration at
runtime, but this support could be further enhanced to provide better usability for
software engineers. For example, the current composite monitoring capabilities are
limited to local monitoring (with limited support for remote monitoring via the
available operations, exposed as Web service operations; see Appendix C). This
could be further
improved by allowing a remotely located human organiser to monitor the composite
via ROADdesigner and thereby make well-informed decisions to adapt the composite
in a comprehensive manner.
Other Messaging Protocols: At present, the Serendip framework implementation is
limited to SOAP-based [22, 262] message exchanges with the players. However, the
Serendip core is implemented independently of any specific message exchange
protocol, with the intention of using the framework over an array of technologies
such as RESTful services [263, 300, 301], XML-RPC [264] and Mobile Web services
[302]. Introducing such capabilities into the framework would also be a valuable
addition.
Appendix A. SerendipLang Grammar
Given below is the grammar for the Serendip Language (SerendipLang). The grammar is
presented in the EBNF dialect used by the Xtext framework. For more information about
Xtext's EBNF dialect, please refer to the Xtext documentation
(http://www.eclipse.org/Xtext/documentation/).
Listing A-1. Grammar of SerendipLang
grammar org.serendip.SerendipLang with org.eclipse.xtext.common.Terminals

generate serendipLang "http://www.serendip.org/SerendipLang"

Script:
    (imports+=Import)*
    ('Organization' orgName=ID ';')
    (processDefs+=PdDef)*
    (btDefs+=Behavior)*
    (roleDefs+=Role)*
    (contracts+=Contract)*
    (playerBindings+=PlayerBinding)*
    (organizerBinding=OrganizerBinding)?;

Import:
    'import' qlfdName=path 'as' name=ID ';';

PdDef:
    'ProcessDefinition' name=ID '{'
        cos=Cos cot=Cot brefs+=BehaviorRef+ cons+=Cons*
    '}';

Behavior:
    (abstract='abstract')? 'Behavior' name=ID ('extends' superType=[Behavior])? '{'
        tasks+=TaskRef+ cons+=Cons*
    '}';

TaskRef:
    'TaskRef' role=[Role] '.' name=ID '{'
        'InitOn' eppre=Eppre ';'
        ('Triggers' eppost=Eppost ';')?
        ('PerfProp' 'type' ppType=("time" | "cost") pp=ppVal ';')?
    '}';

TaskDef:
    'Task' name=ID '{'
        ('UsingMsgs' srcMsgs=Msgs ';')?
        ('ResultingMsgs' resultMsgs=Msgs ';')?
        ('DeliveryType' deliveryType=("push" | "pull") ';')?
    '}';

Msgs:
    msgs+=Msg (',' msgs+=Msg)*;

Msg:
    (contractPart=[Contract]) '.' (termPart=ID) '.' (direction=("Req" | "Res"));

Eppre:
    value=STRING;

Eppost:
    value=STRING;

ppVal:
    value=INT ID;

Cot:
    'CoT' value=STRING ';';

Cos:
    'CoS' value=STRING ';';

Cons:
    'Constraint' name=ID ':' expression=STRING ';';

BehaviorRef:
    'BehaviorRef' parentBT=[Behavior] ';';

Role:
    'Role' name=ID ('is a' descr=STRING)? ('playedBy' playedBy=[PlayerBinding])? '{'
        taskDefs+=TaskDef*
    '}';

Contract:
    'Contract' name=ID '{'
        'A is' roleA=[Role] ',' 'B is' roleB=[Role] ';'
        (facts+=FactType)*
        terms+=ITerm*
        ('RuleFile' ruleFileName=STRING ';')?
    '}';

ITerm:
    'ITerm' name=ID '(' params+=ParamType (',' params+=ParamType)* ')'
    ('withResponse' '(' returnParam=(ParamType) ')')?
    'from' direction=("AtoB" | "BtoA") ';';

FactType:
    'Fact' name=ID '(' attribs+=FactAttrib (',' attribs+=FactAttrib)* ')' ';';

FactAttrib:
    type=ID ':' name=ID;

ParamType:
    paramType=ID ':' paramName=ID;

PlayerBinding:
    'PlayerBinding' name=ID player=STRING 'is a' role=[Role] ';';

OrganizerBinding:
    'OrganizerBinding' name=ID player=STRING 'is the Organizer' ';';

path:
    pathValue=ID ('.' ID)*;
Appendix B. RoSAS Description
This appendix contains the Serendip descriptor files used for the case study (Listing B - 1). A
high-level overview of the RoSAS organisation structure and processes is given in Figure B - 1.
Due to space limitations we do not include all the Drools rule files (6) and XSLT transformation
files (37) used; however, a sample Drools rule file and a sample XSLT transformation file,
which were used to describe the case study earlier, are given in Listing B - 2 and Listing B - 3
respectively. The generated deployable XML descriptor is given in Listing B - 4. All the related
files for the RoSAS scenario are available to download from:
http://www.ict.swin.edu.au/personal/mkapuruge/files/RoSAS_Scenario6.zip
[Figure: a diagram of the RoSAS organisation. The structure comprises the roles CO, MM, TT,
GR, TX and HT, connected by the contracts CO_MM, CO_TT, CO_GR, CO_TX, CO_HT and
GR_TT. The process definitions pdSilv, pdGold and pdPlat reference the behaviour units
bComplaining (specialised by bComplainingGold and bComplainingPlat), bTowing
(specialised by bTowingAlt and bTowingAlt2), bRepairing, bTaxiProviding,
bAccommodationProviding and bHandleAnyException.]
Figure B - 1. RoSAS Organisation – An Overview
Listing B - 1. RoSAS Orchestration Description in SerendipLang
Organization RoSAS;

/*Process Definitions*/
ProcessDefinition pdSilv {
    CoS "eComplainRcvdSilv";
    CoT "(eMMNotifDone * eTTPaid * eGRPaid) ^ eExceptionHandled";
    BehaviorRef bComplaining;
    BehaviorRef bTowing;
    BehaviorRef bRepairing;
    BehaviorRef bHandleAnyException;
    Constraint pdSilv_c1: "(eComplainRcvdSilv>0)->(eMMNotifDone>0)";
}

ProcessDefinition pdGold {
    CoS "eComplainRcvdGold";
    CoT "(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid) ^ eExceptionHandled";
    BehaviorRef bComplainingGold;
    BehaviorRef bTowing;
    BehaviorRef bRepairing;
    BehaviorRef bTaxiProviding;
    BehaviorRef bHandleAnyException;
    Constraint pdGold_c1: "(eComplainRcvdGold>0)->(eMMNotifDone>0)";
}

ProcessDefinition pdPlat {
    CoS "eComplainRcvdPlat";
    CoT "(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid * eHotelPaid) ^ eExceptionHandled";
    BehaviorRef bComplainingPlat;
    BehaviorRef bTowingAlt2;
    BehaviorRef bRepairing;
    BehaviorRef bTaxiProviding;
    BehaviorRef bAccommodationProviding;
    BehaviorRef bHandleAnyException;
    Constraint pdPlat_c1: "(eComplainRcvdPlat>0)->(eMMNotifDone>0)";
}

/*Behavior Definitions*/
Behavior bComplaining {
    TaskRef CO.tAnalyze { InitOn "eComplainRcvdSilv"; Triggers "eTowReqd * eRepairReqd"; }
    TaskRef CO.tNotify { InitOn "eMMNotif"; Triggers "eMMNotifDone"; }
}

Behavior bComplainingGold extends bComplaining {
    TaskRef CO.tAnalyze { InitOn "eComplainRcvdGold"; Triggers "eTowReqd * eRepairReqd * eTaxiReqd"; }
}

Behavior bComplainingPlat extends bComplaining {
    TaskRef CO.tAnalyze { InitOn "eComplainRcvdPlat"; Triggers "eTowReqd * eRepairReqd * eTaxiReqd * eAccommoReqd"; }
}

Behavior bRepairing {
    TaskRef GR.tAcceptRepairOrder { InitOn "eRepairReqd"; Triggers "eDestinationKnown"; }
    TaskRef GR.tDoRepair { InitOn "eTowSuccess"; Triggers "eRepairSuccess ^ eRepairFailed"; }
    TaskRef CO.tPayGR { InitOn "eRepairSuccess"; Triggers "eGRPaid * eMMNotif"; }
    Constraint bRepairing_c1: "(eRepairSuccess>0)->(eGRPaid>0)";
}

Behavior bTowing {
    TaskRef TT.tTow { InitOn "eTowReqd * eDestinationKnown"; Triggers "eTowSuccess ^ eTowFailed"; }
    TaskRef CO.tPayTT { InitOn "eTowSuccess"; Triggers "eTTPaid"; }
    Constraint bTowing_c1: "(eTowSuccess>0)->(eTTPaid>0)";
}

Behavior bTowingAlt extends bTowing {
    TaskRef CO.tAlertTowDone { InitOn "eTowSuccess"; Triggers "eMemberTowAlerted"; }
}

Behavior bTowingAlt2 extends bTowing {
    TaskRef TT.tTow { InitOn "eTowReqd * eDestinationKnown * eTaxiProvided"; }
}

Behavior bTaxiProviding {
    TaskRef CO.tPlaceTaxiOrder { InitOn "eTaxiReqd"; Triggers "eTaxiOrderPlaced"; }
    TaskRef TX.tProvideTaxi { InitOn "eTaxiOrderPlaced"; Triggers "eTaxiProvided"; }
    TaskRef CO.tPayTaxi { InitOn "eTaxiProvided"; Triggers "eTaxiPaid"; }
    Constraint bTaxiProviding_c1: "(eTaxiReqd>0)->(eTaxiPaid>0)";
    Constraint bTaxiProviding_c2: "(eTaxiOrderPlaced>0)->(eTaxiProvided>0)";
}

Behavior bAccommodationProviding {
    TaskRef CO.tBookHotel { InitOn "eAccommoReqd"; Triggers "eAccommoReqested"; }
    TaskRef HT.tConfirmBooking { InitOn "eAccommoReqested"; Triggers "eAccommoBookingConfirmed"; }
    TaskRef CO.tPayHotel { InitOn "eAccommoBookingConfirmed"; Triggers "eHotelPaid"; }
}

Behavior bHandleAnyException {
    TaskRef CO.tHandleException { InitOn "eTowFailed ^ eRepairFailed"; Triggers "eExceptionHandled"; }
}

/*Role Definitions*/
Role CO is a 'CaseOfficer' playedBy copb {
    Task tNotify { ResultingMsgs CO_MM.complain.Res; }
    Task tAnalyze {
        UsingMsgs CO_MM.complain.Req;
        ResultingMsgs CO_TT.orderTow.Req, CO_GR.orderRepair.Req, CO_TX.orderTaxi.Req, CO_HT.orderHotel.Req;
    }
    Task tPayGR { UsingMsgs CO_GR.orderRepair.Res; ResultingMsgs CO_GR.payRepair.Req, CO_MM.complain.Res; }
    Task tPayTT { UsingMsgs CO_TT.orderTow.Res; ResultingMsgs CO_TT.payTow.Req; }
    Task tPlaceTaxiOrder { ResultingMsgs CO_TX.orderTaxi.Req; }
    Task tPayTaxi {
        UsingMsgs CO_TX.orderTaxi.Res;
        ResultingMsgs CO_TX.payTaxi.Req;
    }
    Task tBookHotel { ResultingMsgs CO_HT.orderHotel.Req; }
    Task tPayHotel { UsingMsgs CO_HT.orderHotel.Res; ResultingMsgs CO_HT.payHotel.Req; }
    Task tHandleException { ResultingMsgs CO_MM.complain.Res; }
}

Role GR is a 'Garage' playedBy grpb {
    Task tAcceptRepairOrder {
        UsingMsgs CO_GR.orderRepair.Req;
        ResultingMsgs CO_TT.sendGRLocation.Req;
        DeliveryType pull;
    }
    Task tDoRepair { UsingMsgs GR_TT.sendGRLocation.Res; ResultingMsgs CO_GR.orderRepair.Res; }
    Task tAcceptPayment { UsingMsgs CO_GR.payRepair.Req; }
}

Role TT is a 'TowTruck' playedBy ttpb {
    Task tTow {
        UsingMsgs CO_TT.orderTow.Req, GR_TT.sendGRLocation.Req;
        ResultingMsgs CO_TT.orderTow.Res, GR_TT.sendGRLocation.Res;
    }
    Task tAcceptPayment { UsingMsgs CO_TT.payTow.Req; }
}

Role TX is a 'Taxi' playedBy txpb {
    Task tProvideTaxi { UsingMsgs CO_TX.orderTaxi.Req; ResultingMsgs CO_TX.orderTaxi.Res; }
    Task tAcceptPayment { UsingMsgs CO_TX.payTaxi.Req; }
}

Role HT is a 'Hotel' playedBy htpb {
    Task tConfirmBooking { UsingMsgs CO_HT.orderHotel.Req; ResultingMsgs CO_HT.orderHotel.Res; }
    Task tAcceptPayment { UsingMsgs CO_HT.payHotel.Req; }
}

Role MM is a 'Member' { }

/*Contract Definitions*/
Contract CO_MM {
    A is CO, B is MM;
    ITerm complain (String:memId, String:complainDetails) withResponse (String:response) from BtoA;
}

Contract CO_TT {
    A is CO, B is TT;
    Fact TowCounter(int:counter);
    Fact ContractState(String:state);
    ITerm orderTow (String:pickupInfo) withResponse (String:ackTowing) from AtoB;
    ITerm payTow (String:paymentInfo) from AtoB;
    RuleFile "CO_TT.drl";
}

Contract CO_GR {
    A is CO, B is GR;
    Fact ContractState(String:state);
    ITerm orderRepair (String:repairInfo) withResponse (String:ackRepairing) from AtoB;
    ITerm payRepair (String:paymentInfo) from AtoB;
    RuleFile "CO_GR.drl";
}

Contract CO_TX {
    A is CO, B is TX;
    Fact ContractState(String:state);
    ITerm orderTaxi (String:taxiInfo) withResponse (String:ackTaxiAccept) from AtoB;
    ITerm payTaxi (String:paymentInfo) from AtoB;
    RuleFile "CO_TX.drl";
}

Contract GR_TT {
    A is GR, B is TT;
    Fact ContractState(String:state);
    ITerm sendGRLocation (String:addressOfGarage) withResponse (String:ack) from AtoB;
    RuleFile "GR_TT.drl";
}

Contract CO_HT {
    A is CO, B is HT;
    Fact ContractState(String:state);
    ITerm orderHotel (String:guestName) withResponse (String:response) from AtoB;
    ITerm payHotel (String:paymentInfo) from AtoB;
    RuleFile "CO_HT.drl";
}

/*Player Binding Definitions*/
PlayerBinding copb "http://127.0.0.1:8080/axis2/services/S6COService" is a CO;
PlayerBinding ttpb "http://136.186.7.228:8080/axis2/services/S6TTService" is a TT;
PlayerBinding grpb "http://136.186.7.228:8080/axis2/services/S6GRService" is a GR;
PlayerBinding txpb "http://136.186.7.228:8080/axis2/services/S6TXService" is a TX;
PlayerBinding htpb "http://136.186.7.228:8080/axis2/services/S6HTService" is a HT;
OrganizerBinding org "http://127.0.0.1:8080/axis2/services/rosas_organizer" is the Organizer;
Listing B - 2. Sample Drools Rule File (CO_TT.drl)
/**Generated by Serendip**/
import au.edu.swin.ict.road.composite.rules.events.contract.MessageRecievedEvent;
import au.edu.swin.ict.road.composite.rules.events.contract.ObligationComplianceEvent;
import au.edu.swin.ict.road.composite.rules.events.contract.TermExecutedEvent;
import au.edu.swin.ict.road.composite.contract.Contract;
import au.edu.swin.ict.road.composite.message.MessageWrapper;
import au.edu.swin.ict.road.regulator.FactObject;
import au.edu.swin.ict.road.regulator.FactTupleSpace;
import au.edu.swin.ict.road.regulator.FactTupleSpaceRow;
import java.util.List;
import au.edu.swin.ict.serendip.rosas.util.DroolsUtil;

/** Global variables **/
global Contract contract;
global FactTupleSpace fts;

/** Interaction message **/
declare MessageRecievedEvent
    @role(event)
end

/* Fact to keep the state of the contract */
declare ContractState
    state : String
end

/* Fact to keep the number of tow requests */
declare TowCounter
    count : int
end

/** 1. The rule to evaluate the interaction orderTow on the request path **/
rule "orderTowRequestRule"
when
    $msg : MessageRecieved(operationName == "orderTow", response == false)
then
    $msg.triggerEvent("eTowReqd");
end
/** 2. The rules to evaluate the interaction orderTow on the response path. Here we have
    two rules to check whether the evaluation of the message is successful or not;
    DroolsUtil is a Java class implemented to evaluate the message **/
rule "orderTowResponseRule Valid"
when
    $msg : MessageRecieved(operationName == "orderTow", response == true)
    eval(true == DroolsUtil.evaluate($msg))
then
    //Everything is fine, trigger success event
    $msg.triggerEvent("eTowSuccess");
end

rule "orderTowResponseRule Invalid"
when
    $msg : MessageRecieved(operationName == "orderTow", response == true)
    eval(false == DroolsUtil.evaluate($msg))
then
    //Something is wrong, trigger the failure event
    $msg.triggerEvent("eTowFailed");
end

/** 3. The rule to evaluate the interaction payTow on the request path **/
rule "payTowRequestRule"
when
    $msg : MessageRecieved(operationName == "payTow", response == false)
then
    $msg.triggerEvent("eTTPaid");
end

/** 4. The rule to pay bonus **/
rule "Pay some bonus"
when
    $msg : MessageRecievedEvent(operationName == "orderTow", response == true)
    $tc : TowCounter(count > 20)
then
    $msg.triggerEvent("eTowBonusAllowed");
    $tc.setCount(0);
end

/** 5. The rule to increment the tow counter **/
rule "IncrementTowCount"
when
    $msg : MessageRecievedEvent(operationName == "orderTow", response == false)
    $tc : TowCounter(count < 20)
then
    $tc.setCount($tc.getCount() + 1); //Increment
end
Listing B - 3. Sample XSLT File to Generate the CO.tAnalyze.Req Message
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:q0="http://ws.apache.org/axis2">
    <xsl:output method="xml" indent="yes" />
    <xsl:param name="CO_MM.complain.Req" />
    <xsl:template match="/">
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
            <soapenv:Body>
                <q0:analyze xmlns:q0="http://ws.apache.org/axis2">
                    <memId>
                        <xsl:value-of select="$CO_MM.complain.Req/soapenv:Envelope/soapenv:Body/q0:complain/args0" />
                    </memId>
                    <complainDetails>
                        <xsl:value-of select="$CO_MM.complain.Req/soapenv:Envelope/soapenv:Body/q0:complain/args1" />
                    </complainDetails>
                </q0:analyze>
            </soapenv:Body>
        </soapenv:Envelope>
    </xsl:template>
</xsl:stylesheet>
Listing B - 4. The Deployable XML Descriptor File
<?xml version="1.0" encoding="UTF-8"?> <tns:SMC dataDir="sample/Scenario6/data" compositeRuleFile="composite.drl" routingRuleFile="routing.drl" name="RoSAS" xmlns:tns="http://www.swin.edu.au/ict/road/smc" xmlns:tns1="http://www.ict.swin.edu.au/serendip/types" xmlns:tns2="http://www.swin.edu.au/ict/road/term" xmlns:tns3="http://www.swin.edu.au/ict/road/fact" xmlns:tns4="http://www.swin.edu.au/ict/road/role" xmlns:tns5="http://www.swin.edu.au/ict/road/contract" xmlns:tns6="http://www.swin.edu.au/ict/road/monitor" xmlns:tns7="http://www.swin.edu.au/ict/road/player" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.swin.edu.au/ict/road/smc smc.xsd "> <!--Process Definitions are listed below --> <ProcessDefinitions> <tns1:ProcessDefinition id="pdSilv" descr=""> <tns1:CoS>eComplainRcvdSilv</tns1:CoS> <tns1:CoT>(eMMNotifDone * eTTPaid * eGRPaid) ^ eExceptionHandled </tns1:CoT> <tns1:BehaviorTermRefs> <tns1:BehavirTermId>bComplaining</tns1:BehavirTermId> <tns1:BehavirTermId>bTowing</tns1:BehavirTermId> <tns1:BehavirTermId>bRepairing</tns1:BehavirTermId> <tns1:BehavirTermId>bHandleAnyException</tns1:BehavirTermId> </tns1:BehaviorTermRefs> <tns1:Constraints> <tns1:Constraint Id="pdSilv_c1" Expression="(eComplainRcvdSilv>0)->(eMMNotifDone>0)" enabled="true" soft="false" /> </tns1:Constraints> </tns1:ProcessDefinition> <tns1:ProcessDefinition id="pdGold" descr=""> <tns1:CoS>eComplainRcvdGold</tns1:CoS> <tns1:CoT>(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid) ^ eExceptionHandled</tns1:CoT> <tns1:BehaviorTermRefs> <tns1:BehavirTermId>bComplainingGold</tns1:BehavirTermId> <tns1:BehavirTermId>bTowing</tns1:BehavirTermId> <tns1:BehavirTermId>bRepairing</tns1:BehavirTermId> <tns1:BehavirTermId>bTaxiProviding</tns1:BehavirTermId> <tns1:BehavirTermId>bHandleAnyException</tns1:BehavirTermId> </tns1:BehaviorTermRefs> <tns1:Constraints> <tns1:Constraint Id="pdGold_c1" Expression="(eComplainRcvdGold>0)->(eMMNotifDone>0)" enabled="true" soft="false" /> 
</tns1:Constraints> </tns1:ProcessDefinition> <tns1:ProcessDefinition id="pdPlat" descr=""> <tns1:CoS>eComplainRcvdPlat</tns1:CoS> <tns1:CoT>(eMMNotifDone * eTTPaid * eGRPaid * eTaxiPaid * eHotelPaid) ^ eExceptionHandled</tns1:CoT> <tns1:BehaviorTermRefs> <tns1:BehavirTermId>bComplainingPlat</tns1:BehavirTermId> <tns1:BehavirTermId>bTowingAlt2</tns1:BehavirTermId> <tns1:BehavirTermId>bRepairing</tns1:BehavirTermId> <tns1:BehavirTermId>bTaxiProviding</tns1:BehavirTermId> <tns1:BehavirTermId>bAccommodationProviding</tns1:BehavirTermId> <tns1:BehavirTermId>bHandleAnyException</tns1:BehavirTermId> </tns1:BehaviorTermRefs> <tns1:Constraints> <tns1:Constraint Id="pdPlat_c1" Expression="(eComplainRcvdPlat>0)->(eMMNotifDone>0)" enabled="true" soft="false" /> </tns1:Constraints> </tns1:ProcessDefinition> </ProcessDefinitions> <!--Behavior Terms are listed below --> <BehaviorTerms>
<tns1:BehaviorTerm id="bComplaining" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tAnalyze" preEP="eComplainRcvdSilv" postEP="eTowReqd * eRepairReqd" /> <tns1:TaskRef Id="CO.tNotify" preEP="eMMNotif" postEP="eMMNotifDone" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bComplainingGold" extends="bComplaining" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tAnalyze" preEP="eComplainRcvdGold" postEP="eTowReqd * eRepairReqd * eTaxiReqd" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bComplainingPlat" extends="bComplaining" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tAnalyze" preEP="eComplainRcvdPlat" postEP="eTowReqd * eRepairReqd * eTaxiReqd * eAccommoReqd" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bRepairing" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="GR.tAcceptRepairOrder" preEP="eRepairReqd" postEP="eDestinationKnown" /> <tns1:TaskRef Id="GR.tDoRepair" preEP="eTowSuccess" postEP="eRepairSuccess ^ eRepairFailed" /> <tns1:TaskRef Id="CO.tPayGR" preEP="eRepairSuccess" postEP="eGRPaid * eMMNotif" /> </tns1:TaskRefs> <tns1:Constraints> <tns1:Constraint Id="bRepairing_c1" Expression="(eRepairSuccess>0)->(eGRPaid>0)" enabled="true" soft="false" /> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bTowing" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="TT.tTow" preEP="eTowReqd * eDestinationKnown" postEP="eTowSuccess ^ eTowFailed" /> <tns1:TaskRef Id="CO.tPayTT" preEP="eTowSuccess" postEP="eTTPaid" /> </tns1:TaskRefs> <tns1:Constraints> <tns1:Constraint Id="bTowing_c1" Expression="(eTowSuccess>0)->(eTTPaid>0)" enabled="true" soft="false" /> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bTaxiProviding" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tPlaceTaxiOrder" preEP="eTaxiReqd" postEP="eTaxiOrderPlaced" 
/> <tns1:TaskRef Id="TX.tProvideTaxi" preEP="eTaxiOrderPlaced" postEP="eTaxiProvided" /> <tns1:TaskRef Id="CO.tPayTaxi" preEP="eTaxiProvided" postEP="eTaxiPaid" /> </tns1:TaskRefs> <tns1:Constraints> <tns1:Constraint Id="bTaxiProviding_c1" Expression="(eTaxiReqd>0)->(eTaxiPaid>0)" enabled="true" soft="false" /> <tns1:Constraint Id="bTaxiProviding_c2" Expression="(eTaxiOrderPlaced>0)->(eTaxiProvided>0)" enabled="true" soft="false" /> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bAccommodationProviding" isAbstract="false">
<tns1:TaskRefs> <tns1:TaskRef Id="CO.tBookHotel" preEP="eAccommoReqd" postEP="eAccommoReqested" /> <tns1:TaskRef Id="HT.tConfirmBooking" preEP="eAccommoReqested" postEP="eAccommoBookingConfirmed" /> <tns1:TaskRef Id="CO.tPayHotel" preEP="eAccommoBookingConfirmed" postEP="eHotelPaid" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bHandleAnyException" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tHandleException" preEP="eTowFailed ^ eRepairFailed" postEP="eExceptionHandled" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bTowingAlt" extends="bTowing" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="CO.tAlertTowDone" preEP="eTowSuccess" postEP="eMemberTowAlerted" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> <tns1:BehaviorTerm id="bTowingAlt2" extends="bTowing" isAbstract="false"> <tns1:TaskRefs> <tns1:TaskRef Id="TT.tTow" preEP="eTowReqd * eDestinationKnown * eTaxiProvided" postEP="" /> </tns1:TaskRefs> <tns1:Constraints> </tns1:Constraints> </tns1:BehaviorTerm> </BehaviorTerms> <!--Contract Definitions will be listed below --> <Contracts> <Contract id="CO_MM" type="permissive" ruleFile="CO_MM.drl"> <Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> </LinkedFacts> <Terms> <Term id="complain" name="complain"> <Operation name="complain"> <Parameters> <Parameter> <Type>String</Type> <Name>memId</Name> </Parameter> <Parameter> <Type>String</Type> <Name>complainDetails</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>BtoA</Direction> <Description>Term for complain</Description> </Term> </Terms> <RoleAID>CO</RoleAID> <RoleBID>MM</RoleBID> <Description>This is the contract btwn the CO and MM </Description> </Contract> <Contract id="CO_TT" type="permissive" ruleFile="CO_TT.drl">
<Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> <Fact name="TowCounter" /> <Fact name="ContractState" /> </LinkedFacts> <Terms> <Term id="orderTow" name="orderTow"> <Operation name="orderTow"> <Parameters> <Parameter> <Type>String</Type> <Name>pickupInfo</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for orderTow</Description> </Term> <Term id="payTow" name="payTow"> <Operation name="payTow"> <Parameters> <Parameter> <Type>String</Type> <Name>paymentInfo</Name> </Parameter> </Parameters> <Return>void</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for payTow</Description> </Term> </Terms> <RoleAID>CO</RoleAID> <RoleBID>TT</RoleBID> <Description>This is the contract btwn the CO and TT </Description> </Contract> <Contract id="CO_GR" type="permissive" ruleFile="CO_GR.drl"> <Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> <Fact name="ContractState" /> </LinkedFacts> <Terms> <Term id="orderRepair" name="orderRepair"> <Operation name="orderRepair"> <Parameters> <Parameter> <Type>String</Type> <Name>repairInfo</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for orderRepair</Description> </Term> <Term id="payRepair" name="payRepair"> <Operation name="payRepair"> <Parameters> <Parameter> <Type>String</Type> <Name>paymentInfo</Name> </Parameter> </Parameters> <Return>void</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for payRepair</Description> </Term>
</Terms> <RoleAID>CO</RoleAID> <RoleBID>GR</RoleBID> <Description>This is the contract btwn the CO and GR </Description> </Contract> <Contract id="CO_TX" type="permissive" ruleFile="CO_TX.drl"> <Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> <Fact name="ContractState" /> </LinkedFacts> <Terms> <Term id="orderTaxi" name="orderTaxi"> <Operation name="orderTaxi"> <Parameters> <Parameter> <Type>String</Type> <Name>taxiInfo</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for orderTaxi</Description> </Term> <Term id="payTaxi" name="payTaxi"> <Operation name="payTaxi"> <Parameters> <Parameter> <Type>String</Type> <Name>paymentInfo</Name> </Parameter> </Parameters> <Return>void</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for payTaxi</Description> </Term> </Terms> <RoleAID>CO</RoleAID> <RoleBID>TX</RoleBID> <Description>This is the contract btwn the CO and TX </Description> </Contract> <Contract id="GR_TT" type="permissive" ruleFile="GR_TT.drl"> <Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> <Fact name="ContractState" /> </LinkedFacts> <Terms> <Term id="sendGRLocation" name="sendGRLocation"> <Operation name="sendGRLocation"> <Parameters> <Parameter> <Type>String</Type> <Name>addressOfGarage</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for sendGRLocation</Description> </Term> </Terms> <RoleAID>GR</RoleAID> <RoleBID>TT</RoleBID> <Description>This is the contract btwn the GR and TT </Description> </Contract> <Contract id="CO_HT" type="permissive" ruleFile="CO_HT.drl">
<Abstract>false</Abstract> <State>Incipient</State> <LinkedFacts> <Fact name="ContractState" /> </LinkedFacts> <Terms> <Term id="orderHotel" name="orderHotel"> <Operation name="orderHotel"> <Parameters> <Parameter> <Type>String</Type> <Name>guestName</Name> </Parameter> </Parameters> <Return>String</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for orderHotel</Description> </Term> <Term id="payHotel" name="payHotel"> <Operation name="payHotel"> <Parameters> <Parameter> <Type>String</Type> <Name>paymentInfo</Name> </Parameter> </Parameters> <Return>void</Return> </Operation> <Direction>AtoB</Direction> <Description>Term for payHotel</Description> </Term> </Terms> <RoleAID>CO</RoleAID> <RoleBID>HT</RoleBID> <Description>This is the contract btwn the CO and HT </Description> </Contract> </Contracts> <!--Role Definitions will be listed below --> <Roles> <Role id="CO" name="CO"> <Tasks> <tns1:Task id="tNotify" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tNotify"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_MM" termId="complain" isResponse="true" transformation="CO_tNotify_CO_MM_complain_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tAnalyze" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tAnalyze"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="CO_tAnalyze.xsl"> <tns1:SrcMsg contractId="CO_MM" termId="complain" isResponse="false" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TT" termId="orderTow" isResponse="false" transformation="CO_tAnalyze_CO_TT_orderTow_Req.xsl" />
<tns1:ResultMsg contractId="CO_GR" termId="orderRepair" isResponse="false" transformation="CO_tAnalyze_CO_GR_orderRepair_Req.xsl" /> <tns1:ResultMsg contractId="CO_TX" termId="orderTaxi" isResponse="false" transformation="CO_tAnalyze_CO_TX_orderTaxi_Req.xsl" /> <tns1:ResultMsg contractId="CO_HT" termId="orderHotel" isResponse="false" transformation="CO_tAnalyze_CO_HT_orderHotel_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tPayGR" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tPayGR"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="CO_tPayGR.xsl"> <tns1:SrcMsg contractId="CO_GR" termId="orderRepair" isResponse="true" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_GR" termId="payRepair" isResponse="false" transformation="CO_tPayGR_CO_GR_payRepair_Req.xsl" /> <tns1:ResultMsg contractId="CO_MM" termId="complain" isResponse="true" transformation="CO_tPayGR_CO_MM_complain_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tPayTT" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tPayTT"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="CO_tPayTT.xsl"> <tns1:SrcMsg contractId="CO_TT" termId="orderTow" isResponse="true" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TT" termId="payTow" isResponse="false" transformation="CO_tPayTT_CO_TT_payTow_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tPlaceTaxiOrder" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tPlaceTaxiOrder"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TX" termId="orderTaxi" isResponse="false" transformation="CO_tPlaceTaxiOrder_CO_TX_orderTaxi_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tPayTaxi" 
isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tPayTaxi"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="CO_tPayTaxi.xsl"> <tns1:SrcMsg contractId="CO_TX" termId="orderTaxi" isResponse="true" /> </tns1:SrcMsgs>
<tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TX" termId="payTaxi" isResponse="false" transformation="CO_tPayTaxi_CO_TX_payTaxi_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tBookHotel" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tBookHotel"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_HT" termId="orderHotel" isResponse="false" transformation="CO_tBookHotel_CO_HT_orderHotel_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tPayHotel" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tPayHotel"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="CO_tPayHotel.xsl"> <tns1:SrcMsg contractId="CO_HT" termId="orderHotel" isResponse="true" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_HT" termId="payHotel" isResponse="false" transformation="CO_tPayHotel_CO_HT_payHotel_Req.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tHandleException" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tHandleException"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_MM" termId="complain" isResponse="true" transformation="CO_tHandleException_CO_MM_complain_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> </Tasks> <Description>This is a CaseOfficer role</Description> </Role> <Role id="GR" name="GR"> <Tasks> <tns1:Task id="tAcceptRepairOrder" isMsgDriven="false"> <tns1:Out deliveryType="pull"> <tns1:Operation name="tAcceptRepairOrder"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="GR_tAcceptRepairOrder.xsl"> <tns1:SrcMsg contractId="CO_GR" termId="orderRepair" isResponse="false" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg 
contractId="CO_TT" termId="sendGRLocation" isResponse="false" transformation="GR_tAcceptRepairOrder_CO_TT_sendGRLocation_Req.xsl" /> </tns1:ResultMsgs>
</tns1:Task> <tns1:Task id="tDoRepair" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tDoRepair"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="GR_tDoRepair.xsl"> <tns1:SrcMsg contractId="GR_TT" termId="sendGRLocation" isResponse="true" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_GR" termId="orderRepair" isResponse="true" transformation="GR_tDoRepair_CO_GR_orderRepair_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tAcceptPayment" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tAcceptPayment"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="GR_tAcceptPayment.xsl"> <tns1:SrcMsg contractId="CO_GR" termId="payRepair" isResponse="false" /> </tns1:SrcMsgs> </tns1:Task> </Tasks> <Description>This is a Garage role</Description> </Role> <Role id="TT" name="TT"> <Tasks> <tns1:Task id="tTow" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tTow"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="TT_tTow.xsl"> <tns1:SrcMsg contractId="CO_TT" termId="orderTow" isResponse="false" /> <tns1:SrcMsg contractId="GR_TT" termId="sendGRLocation" isResponse="false" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TT" termId="orderTow" isResponse="true" transformation="TT_tTow_CO_TT_orderTow_Res.xsl" /> <tns1:ResultMsg contractId="GR_TT" termId="sendGRLocation" isResponse="true" transformation="TT_tTow_GR_TT_sendGRLocation_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tAcceptPayment" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tAcceptPayment"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="TT_tAcceptPayment.xsl"> 
<tns1:SrcMsg contractId="CO_TT" termId="payTow" isResponse="false" /> </tns1:SrcMsgs> </tns1:Task> </Tasks>
<Description>This is a TowTruck role</Description> </Role> <Role id="TX" name="TX"> <Tasks> <tns1:Task id="tProvideTaxi" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tProvideTaxi"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="TX_tProvideTaxi.xsl"> <tns1:SrcMsg contractId="CO_TX" termId="orderTaxi" isResponse="false" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_TX" termId="orderTaxi" isResponse="true" transformation="TX_tProvideTaxi_CO_TX_orderTaxi_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tAcceptPayment" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tAcceptPayment"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="TX_tAcceptPayment.xsl"> <tns1:SrcMsg contractId="CO_TX" termId="payTaxi" isResponse="false" /> </tns1:SrcMsgs> </tns1:Task> </Tasks> <Description>This is a Taxi role</Description> </Role> <Role id="HT" name="HT"> <Tasks> <tns1:Task id="tConfirmBooking" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tConfirmBooking"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="HT_tConfirmBooking.xsl"> <tns1:SrcMsg contractId="CO_HT" termId="orderHotel" isResponse="false" /> </tns1:SrcMsgs> <tns1:ResultMsgs> <tns1:ResultMsg contractId="CO_HT" termId="orderHotel" isResponse="true" transformation="HT_tConfirmBooking_CO_HT_orderHotel_Res.xsl" /> </tns1:ResultMsgs> </tns1:Task> <tns1:Task id="tAcceptPayment" isMsgDriven="false"> <tns1:Out deliveryType="push"> <tns1:Operation name="tAcceptPayment"> <Return>Return</Return> </tns1:Operation> </tns1:Out> <tns1:In isResponse="true"> </tns1:In> <tns1:SrcMsgs transformation="HT_tAcceptPayment.xsl"> <tns1:SrcMsg contractId="CO_HT" termId="payHotel" isResponse="false" /> </tns1:SrcMsgs> 
</tns1:Task> </Tasks>
<Description>This is a Hotel role</Description> </Role> <Role id="MM" name="MM"> <Tasks> </Tasks> <Description>This is a Member role</Description> </Role> </Roles> <!--Player bindings will be listed below --> <PlayerBindings> <PlayerBinding id="copb_binding"> <Endpoint>http://127.0.0.1:8080/axis2/services/S6COService</Endpoint> <Roles> <RoleID>CO</RoleID> </Roles> <Description>This is the binding for CO</Description> </PlayerBinding> <PlayerBinding id="ttpb_binding"> <Endpoint>http://136.186.7.228:8080/axis2/services/S6TTService</Endpoint> <Roles> <RoleID>TT</RoleID> </Roles> <Description>This is the binding for TT</Description> </PlayerBinding> <PlayerBinding id="grpb_binding"> <Endpoint>http://136.186.7.228:8080/axis2/services/S6GRService</Endpoint> <Roles> <RoleID>GR</RoleID> </Roles> <Description>This is the binding for GR</Description> </PlayerBinding> <PlayerBinding id="txpb_binding"> <Endpoint>http://136.186.7.228:8080/axis2/services/S6TXService</Endpoint> <Roles> <RoleID>TX</RoleID> </Roles> <Description>This is the binding for TX</Description> </PlayerBinding> <PlayerBinding id="htpb_binding"> <Endpoint>http://136.186.7.228:8080/axis2/services/S6HTService</Endpoint> <Roles> <RoleID>HT</RoleID> </Roles> <Description>This is the binding for HT</Description> </PlayerBinding> </PlayerBindings> <OrganiserBinding>http://127.0.0.1:8080/axis2/services/rosas_organizer</OrganiserBinding> <Description>This is the descriptor for the organisation RoSAS, Generated by Serendip Modelling Tool</Description> </tns:SMC>
The operations exposed in the organiser interface allow an organiser player to monitor and regulate (adapt) the service orchestration. These operations are exposed as WSDL operations in a web service environment. The interface can be accessed via the URL:
http://[host]:[port]/axis2/services/[CompositeName]_Organizer?wsdl
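As a sketch, the organiser endpoint address can be derived from the host, port and composite name following the pattern above (the composite name RoSAS below is taken from the descriptor; the helper function itself is illustrative):

```python
def organiser_wsdl_url(host: str, port: int, composite_name: str) -> str:
    """Build the organiser WSDL URL following the documented pattern."""
    return f"http://{host}:{port}/axis2/services/{composite_name}_Organizer?wsdl"

# Example with the RoSAS composite deployed locally:
url = organiser_wsdl_url("127.0.0.1", 8080, "RoSAS")
```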
Each operation results in a result message that captures the outcome of the operation. For example, if an adaptation is carried out via an operation, the result message captures the success or failure of the adaptation and, in the case of failure, the possible reasons. If the operation is a monitoring operation, the result message contains the observed information.
We categorise the operations into several sets (as captured by Tables C-1 to C-5) depending on their purpose, as shown in Figure C-1. Table C-1 presents the operations for batch mode adaptations, which can be commonly used to perform both structural and process (definition as well as instance level) adaptations. Table C-2 presents the operations for process instance level adaptations. The operations for process definition level adaptations are presented in Table C-3. The structure of the orchestration can be adapted via the operations presented in Table C-4. Finally, the monitoring operations are presented in Table C-5.
Figure C-1. A Summary of Available Organiser Operations
We follow a CRUD (Create, Read, Update, and Delete) style API to define the operations. For the currently supported operations, we use the prefixes add, read, update and remove to make the purpose of each operation clear.
Appendix C. Organiser Operations
All parameter values are string values. The following acronyms (alphabetically ordered) are used in naming the parameters of operations; we list them here to avoid repetitive explanation. Operation-specific parameters are described with the corresponding operations.
bId = Behaviour Unit Identifier
cExpression = A Constraint Expression
cId = Constraint Identifier
cos = Condition of Start
cot = Condition of Termination
ctId = Contract Identifier
descr = A Description
eId = Event Identifier
extend = The Extends Property
factId = A Fact Identifier
name = A Name
obligRole = Obligated Role
pbId = Player Binding Identifier
pdId = Process Definition Identifier
pId = Process Instance Identifier
postEP = Post-EventPattern
pp = Performance Property
preEP = Pre-EventPattern
prop = A Property
rid = Role Identifier
ruleFile = Rule File Name
ruleId = A Rule Identifier
status = A Status
tId = Task Identifier
tmid = An Interaction Term Identifier
val = A Value of a Property
Table C-1. Operations Commonly Used for Instance and Definition Level Adaptations

executeScript (scriptFileName)
    Executes an adaptation script immediately. A script (given by @scriptFileName, e.g., script.sas) allows performing a number of adaptations in a single transaction. A script is written using the Serendip Adaptation Script Language (Appendix D). The script needs to be in the file system of the server, e.g., sent via FTP.

scheduleScript (scriptFileName, conditionType, conditionValue)
    Schedules an adaptation script (see command executeScript). The @conditionType is either EventPattern or Clock. The @conditionValue specifies when the adaptation should be triggered. If the condition type is EventPattern, the condition value specifies a pattern of events. If the condition type is Clock, the adaptation is started at a specific time of the system clock.
        EventPattern -> <pId>:<event_pattern>
        Clock -> yyyy.MM.dd.HH.mm.ss.Z

applyPatch (patchFileName)
    Applies a patch file (*.xml) to add new entities, e.g., roles and contracts, to a running composite. The patch file has the same format as a ROAD deployable descriptor, yet specifies only the additional entities that need to be added. The adaptation engine reads the descriptions and adds the entities to the running composite. No name conflicts with existing entities are allowed. This command is useful in situations where adding complex entities such as roles and contracts becomes tedious with individual operations.
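The two scheduleScript condition formats can be interpreted mechanically. The sketch below validates both forms in Python; the mapping of the Java-style pattern yyyy.MM.dd.HH.mm.ss.Z onto strptime codes is an assumption for illustration, with Z taken to be a numeric zone offset such as +1000:

```python
from datetime import datetime

def parse_condition(condition_type: str, condition_value: str):
    """Interpret a scheduleScript condition value (sketch, per Table C-1)."""
    if condition_type == "Clock":
        # yyyy.MM.dd.HH.mm.ss.Z is assumed to map to the strptime pattern
        # below, with Z a numeric offset such as +1000.
        return datetime.strptime(condition_value, "%Y.%m.%d.%H.%M.%S.%z")
    if condition_type == "EventPattern":
        # EventPattern -> <pId>:<event_pattern>
        p_id, _, pattern = condition_value.partition(":")
        return (p_id, pattern)
    raise ValueError(f"unknown condition type: {condition_type}")
```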
Table C-2. Operations for Process Instance Level Adaptations

updateStateOfProcessInst (pId, state)
    Updates the current status of a process instance. Here, @state = (pause/aborted/active)

addTaskToProcessInst (pId, bId, tId, preEP, postEP, obligRole, pp)
    Adds a new task to a process instance.

removeTaskFromProcessInst (pId, tId)
    Deletes a task from a process instance.

updateTaskOfProcessInst (pId, tId, prop, val)
    Updates a property, such as the pre-event pattern, of a task of a process instance. Here, @prop = (preEP/postEP/pp/obligRole)

addConstraintToProcessInst (pId, cId, cExpression)
    Adds a new process instance level constraint.

removeConstraintFromProcessInst (pId, cId)
    Deletes a constraint of a process instance.

updateConstraintOfProcessInst (pId, cId, cExpression)
    Updates a constraint of a process instance.

updateProcessInst (pId, prop, val)
    Updates a property, such as the CoT, of a process instance. Here, @prop = (CoT).
    *Updating the CoS is not applicable for a process instance.

addEventToProcessInst (pId, eId, expire)
    Forcefully adds a new event to the Event Cloud. Can be used to support ad hoc adaptations.

removeEventFromProcessInst (pId, eId)
    Forcefully removes an existing event from the Event Cloud. Can be used to support ad hoc adaptations.

updateEventOfProcessInst (pId, prop, val)
    Forcefully updates an event in the Event Cloud of a specific instance. Here, @prop = (Expire).
    *Updating other properties is not permitted.
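The updateStateOfProcessInst operation moves a process instance between the states listed above. A minimal sketch of the corresponding state checks follows; the allowed transitions are an illustrative assumption, not rules defined by the organiser interface:

```python
# Illustrative state model for a process instance. The permitted
# transitions below are assumptions for this sketch.
ALLOWED = {
    "active": {"pause", "aborted"},
    "pause": {"active", "aborted"},
    "aborted": set(),  # an aborted instance cannot be resumed
}

def update_state_of_process_inst(current: str, new: str) -> str:
    """Return the new state if the transition is permitted."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```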
Table C-3. Operations for Process Definition Level Adaptations

addProcessDef (pdId, cos, cot)
    Adds a new process definition to the organisation.

removeProcessDef (pdId)
    Removes an existing process definition from the organisation.

updateProcessDef (pdId, prop, val)
    Updates a process definition. Here, @prop = (CoT/CoS).

addBehaviorRefToProcessDef (pdId, bId)
    Adds a new behaviour reference to a process definition.

removeBehaviorRefFromProcessDef (pdId, bId)
    Removes an existing behaviour reference from the process definition.

addConstraintToProcessDef (pdId, cId, cExpression, enabled)
    Adds a new process level constraint to a process definition.

removeConstraintFromProcessDef (pdId, cId)
    Removes an existing constraint from a process definition.

addBehavior (bId, extendfrom)
    Adds a new behaviour to the organisation. Optionally, extendfrom can be used to specify the behaviour that gets extended. If specialisation is not required, leave extendfrom=null.

removeBehavior (bId)
    Removes an existing behaviour from the organisation.

updateBehavior (bId, prop, val)
    Updates an existing behaviour. Here, the only property is extendfrom and the value is an identifier of a behaviour unit.

addConstraintToBehavior (bId, cId, cExpression, enabled)
    Adds a new behaviour level constraint to a behaviour unit.

removeConstraintFromBehavior (bId, cId)
    Removes an existing behaviour level constraint from a behaviour unit.

updateConstraintOfBehavior (bId, cId, prop, val)
    Updates a constraint of a behaviour unit. Here, @prop = (cExpression/enabled)

addTaskToBehavior (bId, tId, preEP, postEP, obligRole, pp)
    Adds a new task dependency to a behaviour unit.

removeTaskFromBehavior (bId, tId)
    Removes a task dependency from a behaviour unit.

updateTaskOfBehavior (bId, tId, prop, val)
    Updates a property, such as the pre-event pattern, of a task of a behaviour unit. Here, @prop = (preEP/postEP/pp/obligRole)

addTaskToRole (rid, tId, usingMsgs, resultingMsgs)
    Adds a new task definition to a role.

removeTaskFromRole (rid, tId)
    Removes an existing task definition from a role.

updateTaskOfRole (rid, tId, prop, val)
    Updates a task definition of a role. Here, @prop = (usingMsgs/resultingMsgs)
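The extendfrom parameter of addBehavior supports behaviour specialisation. As an illustrative sketch, the effective task set of a behaviour unit can be resolved by walking the extendfrom chain, with the extending unit overriding tasks of the extended unit; the resolution rule and the behaviour/task names are assumptions for the example:

```python
def effective_tasks(behaviors: dict, b_id: str) -> dict:
    """Resolve the tasks of behaviour b_id, applying the base unit first."""
    unit = behaviors[b_id]
    base = unit.get("extendfrom")
    tasks = dict(effective_tasks(behaviors, base)) if base else {}
    tasks.update(unit.get("tasks", {}))  # the extending unit overrides the base
    return tasks

# Hypothetical behaviour units, mapping task ids to pre-event patterns:
behaviors = {
    "bTowing": {"extendfrom": None, "tasks": {"tTow": "ePickupReqd"}},
    "bTowingGold": {"extendfrom": "bTowing", "tasks": {"tNotify": "eTowSuccess"}},
}
```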
Table C-4. Operations for Structural Adaptations

addRole (rid, name, descr, pbId)
    Adds a new role to the composite.

removeRole (rid)
    Removes an existing role from the composite.

updateRole (rid, prop, val)
    Updates a role of the organisation. Here, @prop = (name/descr/pbId)

addContract (ctId, name, descr, ruleFile, isAbstract, roleAId, roleBId)
    Adds a new contract to the organisation. Here, ruleFile is the name of the contractual rule file, isAbstract = (true/false), and roleAId and roleBId are the identifiers of roles A and B.

removeContract (ctId)
    Removes a contract.

updateContract (ctId, prop, val)
    Updates a contract. Here, @prop = (name/descr/ruleFile/isAbstract/roleAId/roleBId)

addPlayerBinding (pbId, rid, endpoint)*
    Adds a new player binding for a role. Here, endpoint is the address of the required interface.

removePlayerBinding (pbId)*
    Removes a player binding from the organisation.

updatePlayerBinding (pbId, prop, val)*
    Updates a player binding. Here, @prop = (rid/endpoint)

addITermToContract (ctId, tmid, name, messageType, deonticType, descr, direction)
    Adds a new term to a contract.

removeITermFromContract (ctId, tmid)
    Removes a term from a contract.

updateITermOfContract (ctId, tmid, prop, val)
    Updates a term. Here, @prop = (name/messageType/deonticType/descr/direction)

addFactToContract (ctId, factId)
    Adds a new fact to a contract.

removeFactFromContract (ctId, factId)
    Removes a fact from a contract.

updateFactOfContract (ctId, factId, prop, val)
    Updates a property of a fact of a contract.

addCompositeRule (ruleFile)
    Adds a composite level rule.

removeCompositeRule (ruleFile)
    Removes a composite level rule.

addRuleToContract (ctId, ruleId, ruleFile)
    Adds a new contractual rule to an existing contract.

updateRuleOfContract (ctId, ruleId, ruleFile)
    Updates (replaces) a rule of a contract.

removeRuleFromContract (ctId, ruleId)
    Removes a specific rule from an existing contract.
Table C-5. Operations for Monitoring

readPropertyOfProcessInst (pId, prop)
    Retrieves the current value of a property of a process instance. Here, @prop = (CoS/CoT)

readPropertyOfTaskOfProcessInst (pId, tId, prop)
    Retrieves the current value of a property of a task, as specified in the parameter. Here, @prop = (PreEP/PostEP/PP/role)

readPropertyOfConstraintOfProcessInst (pId, cId)
    Retrieves the current expression value of a constraint.

getCurrentStatusOfProcessInst (pId)
    Retrieves the current status of a process instance.

getCurrentStatusOfTaskOfProcessInst (pId, tId)
    Retrieves the current status of a task of a process instance.

takeSnapShot ()
    Takes a snapshot of the current composite.

getNextManagementMessage ()
    Pulls the next management message from the composite.

subscribeToEventPattern (eventPattern)
    Subscribes to patterns of events of interest. Here, eventPattern is a pattern of events.
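The pull-style getNextManagementMessage can be complemented by subscribeToEventPattern. A toy sketch of pattern subscription over an event store follows; treating a pattern as simply a set of event identifiers that must all have occurred is a deliberate simplification of the framework's event pattern language:

```python
class EventStore:
    """Toy event store with conjunctive pattern subscriptions (illustrative)."""

    def __init__(self):
        self.events, self.subscriptions = set(), []

    def subscribe_to_event_pattern(self, pattern, callback):
        # pattern: assumed here to be a set of event ids that must all occur
        self.subscriptions.append((set(pattern), callback))

    def publish(self, event_id: str):
        self.events.add(event_id)
        for pattern, callback in self.subscriptions:
            if pattern <= self.events:  # all events of the pattern observed
                callback(pattern)
```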
The Adaptation Scripting Language is used to perform batch mode adaptations in the Serendip Orchestration Framework. The grammar of the scripting language is given below in Xtext format, which is an EBNF dialect. For more information about Xtext's EBNF dialect, please refer to the Xtext documentation39.
Listing C-1. Grammar of the Adaptation Scripting Language
grammar org.serendip.adaptScripting.AdaptScripting with org.eclipse.xtext.common.Terminals

generate adaptScripting "http://www.serendip.org/adaptScripting/AdaptScripting"

/* A script may contain multiple blocks */
Script: (block+=Block)*;

/* A block (a scope) may contain multiple commands */
Block: (scope=Scope ':' scopeId=ScopeId) '{' commands+=Command* '}';

/* A command may contain multiple properties */
Command: name=CommandName (properties+=Property)* ';';

/* Use EVOL for Phase 2 adaptations; use INST for Phase 3 adaptations */
Scope: "EVOL" | "INST";

/* For Phase 3 adaptations, the ScopeId MUST be the process instance identifier */
ScopeId: name=ID;

/* A property is a key-value pair */
Property: key=ID '=' value=STRING; // Allows freedom of having specific key-value pairs

/* A command is an atomic operation */
CommandName: ID;
The commandName is the same as one of the atomic operations listed in Appendix C. The property keys are the same as the property names of those operations. The property order does not matter. For example, to execute the command removeTaskFromProcessInst(pId, tId) using the script language:

removeTaskFromProcessInst pId=pgGold037 tId=tPlaceTaxiOrder
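Per the grammar, commands are grouped into EVOL or INST scope blocks. The sketch below emits such a script from Python; the property values are quoted here on the assumption that the STRING terminal requires it, and the command and identifiers are taken from the example above:

```python
def emit_script(scope: str, scope_id: str, commands: list) -> str:
    """Emit one adaptation script block (sketch following Listing C-1)."""
    assert scope in ("EVOL", "INST")
    lines = [f"{scope}:{scope_id} {{"]
    for name, props in commands:
        # Property: key=ID '=' value=STRING, hence the quoted values.
        pairs = " ".join(f'{k}="{v}"' for k, v in props.items())
        lines.append(f"  {name} {pairs};")
    lines.append("}")
    return "\n".join(lines)

script = emit_script("INST", "pgGold037",
                     [("removeTaskFromProcessInst",
                       {"pId": "pgGold037", "tId": "tPlaceTaxiOrder"})])
```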
39 http://www.eclipse.org/Xtext/documentation/
Appendix D. Adaptation Scripting Language
The schema definition of the deployable descriptor is given below. The schema definition is captured by the files smc.xsd, serendip.xsd, contract.xsd, role.xsd, player.xsd, fact.xsd, term.xsd and monitor.xsd. These files can be visualised and navigated via a suitable software tool such as the Eclipse XSD Editor40.
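As an illustration of how a tool can navigate a descriptor conforming to these schemas, the sketch below reads the OrganiserBinding of a minimal SMC document with Python's standard library. The document fragment is invented for the example; since the schemas declare elementFormDefault="unqualified", only the root element carries the namespace:

```python
import xml.etree.ElementTree as ET

SMC_NS = "http://www.swin.edu.au/ict/road/smc"

# Minimal, invented descriptor fragment. Only the root element is namespace
# qualified because elementFormDefault is "unqualified" in smc.xsd.
doc = """<tns:SMC xmlns:tns="http://www.swin.edu.au/ict/road/smc"
          name="RoSAS" dataDir="data" routingRuleFile="routing.drl"
          compositeRuleFile="composite.drl">
  <OrganiserBinding>http://127.0.0.1:8080/axis2/services/rosas_organizer</OrganiserBinding>
  <Description>Example descriptor</Description>
</tns:SMC>"""

root = ET.fromstring(doc)
assert root.tag == f"{{{SMC_NS}}}SMC"        # qualified root element
binding = root.findtext("OrganiserBinding")  # unqualified child element
```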
smc.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/smc"
xmlns:tns="http://www.swin.edu.au/ict/road/smc"
xmlns:role="http://www.swin.edu.au/ict/road/role"
xmlns:contract="http://www.swin.edu.au/ict/road/contract"
xmlns:player="http://www.swin.edu.au/ict/road/player"
xmlns:fact="http://www.swin.edu.au/ict/road/fact"
xmlns:ser="http://www.ict.swin.edu.au/serendip/types"
elementFormDefault="unqualified" version="1.3"
xmlns:Q1="http://www.ict.swin.edu.au/serendip/types">
<!-- schema imports -->
<xsd:import schemaLocation="serendip.xsd"
namespace="http://www.ict.swin.edu.au/serendip/types"></xsd:import>
<xsd:import namespace="http://www.swin.edu.au/ict/road/contract"
schemaLocation="contract.xsd" />
<xsd:import namespace="http://www.swin.edu.au/ict/road/role"
schemaLocation="role.xsd" />
<xsd:import namespace="http://www.swin.edu.au/ict/road/player"
schemaLocation="player.xsd" />
<xsd:import namespace="http://www.ict.swin.edu.au/serendip/types"
schemaLocation="serendip.xsd" />
<xsd:import namespace="http://www.swin.edu.au/ict/road/fact"
schemaLocation="fact.xsd" />
<!-- ROAD 2.0 self-managed composite -->
<xsd:element name="SMC">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="ProcessDefinitions" type="ser:ProcessDefinitionsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="BehaviorTerms" type="ser:BehaviorTermsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="Messages" type="ser:MessagesType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="Events" type="ser:EventsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="Facts" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Fact" type="fact:FactType"
minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Roles" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Role" type="role:RoleType"
minOccurs="0" maxOccurs="unbounded" />
40 http://wiki.eclipse.org/index.php/Introduction_to_the_XSD_Editor
Appendix E. Schema Definitions
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Contracts" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Contract" type="contract:ContractType"
minOccurs="0" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="PlayerBindings" minOccurs="0"
maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="PlayerBinding" type="player:PlayerBindingType"
minOccurs="0" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="OrganiserBinding" type="xsd:string"
minOccurs="0" maxOccurs="1" />
<xsd:element name="Description" type="xsd:string"
minOccurs="0" maxOccurs="1" />
<xsd:element name="MessageAnalyzers" type="ser:MessageAnalyzersType"
maxOccurs="1" minOccurs="0"></xsd:element>
</xsd:sequence>
<!-- smc attributes -->
<xsd:attribute name="name" type="xsd:string" use="required" />
<xsd:attribute name="dataDir" type="xsd:string" use="required" />
<xsd:attribute name="routingRuleFile" type="xsd:string" use="required" />
<xsd:attribute name="compositeRuleFile" type="xsd:string" use="required" />
</xsd:complexType>
</xsd:element>
</xsd:schema>
serendip.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema targetNamespace="http://www.ict.swin.edu.au/serendip/types"
elementFormDefault="qualified" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://www.ict.swin.edu.au/serendip/types"
xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore"
xmlns:Q1="http://www.swin.edu.au/ict/road/term">
<xsd:import schemaLocation="term.xsd"
namespace="http://www.swin.edu.au/ict/road/term"></xsd:import>
<xsd:complexType name="BehaviorTermType">
<xsd:sequence>
<xsd:element name="TaskRefs" type="tns:TaskRefsType"
maxOccurs="1" minOccurs="1"></xsd:element>
<xsd:element name="Constraints" type="tns:ConstraintsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string" use="required"></xsd:attribute>
<xsd:attribute name="extension" type="xsd:string" use="optional"></xsd:attribute>
<xsd:attribute name="extends" type="xsd:string" use="optional"></xsd:attribute>
<xsd:attribute name="isAbstract" type="xsd:boolean" use="optional"
default="false"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="EventType">
<xsd:attribute name="id" type="xsd:string" use="required"></xsd:attribute>
<xsd:attribute name="isInit" type="xsd:boolean"></xsd:attribute>
<xsd:attribute name="isError" type="xsd:boolean"></xsd:attribute>
<xsd:attribute name="descr" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:simpleType name="EventPatternType">
<xsd:restriction base="xsd:string"></xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="ProcessDefinitionType">
<xsd:sequence>
<xsd:element name="CoS" type="xsd:string" maxOccurs="1"
minOccurs="1">
</xsd:element>
<xsd:element name="CoT" type="xsd:string" maxOccurs="1"
minOccurs="1">
</xsd:element>
<xsd:element name="BehaviorTermRefs" type="tns:BehaviorTermRef"
maxOccurs="1" minOccurs="1">
</xsd:element>
<xsd:element name="Constraints" type="tns:ConstraintsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string" use="required"></xsd:attribute>
<xsd:attribute name="descr" type="xsd:string"
default="Description"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="TaskType">
<xsd:sequence>
<xsd:element name="Out" type="tns:OutMsgType" maxOccurs="1"
minOccurs="0">
</xsd:element>
<xsd:element name="In" type="tns:InMsgType" maxOccurs="1"
minOccurs="0">
</xsd:element>
<xsd:element name="SrcMsgs" type="tns:SrcMsgsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="ResultMsgs" type="tns:ResultMsgsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string"></xsd:attribute>
<xsd:attribute name="msgAnalyser" type="xsd:string"></xsd:attribute>
<xsd:attribute name="isMsgDriven" type="xsd:boolean"
use="required"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="BehaviorTermRef">
<xsd:sequence>
<xsd:element name="BehavirTermId" type="xsd:string"
maxOccurs="unbounded" minOccurs="0"></xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="EventsType">
<xsd:sequence>
<xsd:element name="Event" type="tns:EventType" maxOccurs="unbounded"
minOccurs="0">
</xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:simpleType name="OutputEventType">
<xsd:restriction base="xsd:string"></xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="ConstraintsType">
<xsd:sequence>
<xsd:element name="Constraint" type="tns:ConstraintType"
maxOccurs="unbounded" minOccurs="0">
</xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="MessageType">
<xsd:attribute name="id" type="xsd:string"></xsd:attribute>
<xsd:attribute name="interactionTerm" type="xsd:string"></xsd:attribute>
<xsd:attribute name="type" type="xsd:string"></xsd:attribute>
<xsd:attribute name="contract" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="MessagesType">
<xsd:sequence>
<xsd:element name="Message" type="tns:MessageType"
maxOccurs="unbounded" minOccurs="0">
</xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="PerformancePropertyType">
<xsd:sequence>
<xsd:any></xsd:any>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="ApplicationPropertyType">
<xsd:attribute name="key" type="xsd:string"></xsd:attribute>
<xsd:attribute name="value" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="ConstraintType">
<xsd:attribute name="Id" type="xsd:string"></xsd:attribute>
<xsd:attribute name="Expression" type="xsd:string"></xsd:attribute>
<xsd:attribute name="enabled" type="xsd:boolean"></xsd:attribute>
<xsd:attribute name="language" type="xsd:string"></xsd:attribute>
<xsd:attribute name="soft" type="xsd:boolean"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="BehaviorTermsType">
<xsd:sequence>
<xsd:element name="BehaviorTerm" type="tns:BehaviorTermType"
maxOccurs="unbounded" minOccurs="1">
</xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="ProcessDefinitionsType">
<xsd:sequence>
<xsd:element name="ProcessDefinition" type="tns:ProcessDefinitionType"
maxOccurs="unbounded" minOccurs="1"></xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="TasksType">
<xsd:sequence>
<xsd:element name="Task" type="tns:TaskType" maxOccurs="unbounded"
minOccurs="0"></xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="TaskRefsType">
<xsd:sequence>
<xsd:element name="TaskRef" type="tns:TaskRefType"
maxOccurs="unbounded" minOccurs="0"></xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="TaskRefType">
<xsd:attribute name="Id" type="xsd:string" use="required"></xsd:attribute>
<xsd:attribute name="preEP" type="xsd:string"></xsd:attribute>
<xsd:attribute name="postEP" type="xsd:string"></xsd:attribute>
<xsd:attribute name="performanceVal" type="xsd:string"></xsd:attribute>
<xsd:attribute name="type" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="OutMsgType">
<xsd:sequence>
<xsd:element name="Operation" type="Q1:OperationType"
maxOccurs="1" minOccurs="1">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="deliveryType" type="xsd:string"></xsd:attribute>
<xsd:attribute name="isResponse" type="xsd:boolean" use="optional"
default="false"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="InMsgType">
<xsd:sequence>
<xsd:element name="Operation" type="Q1:OperationType"
maxOccurs="1" minOccurs="1">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="isResponse" type="xsd:boolean" use="optional"
default="false"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="ResultMsgsType">
<xsd:sequence>
<xsd:element name="ResultMsg" type="tns:ResultMsgType"
maxOccurs="unbounded" minOccurs="1"></xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="ResultMsgType">
<xsd:attribute name="contractId" type="xsd:string"></xsd:attribute>
<xsd:attribute name="termId" type="xsd:string"></xsd:attribute>
<xsd:attribute name="isResponse" type="xsd:boolean" use="optional"
default="false"></xsd:attribute>
<xsd:attribute name="transformation" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="MessageAnalyzerType">
<xsd:attribute name="id" type="xsd:string" use="required"></xsd:attribute>
<xsd:attribute name="class" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="Parameter">
<xsd:sequence>
<xsd:any></xsd:any>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="MessageAnalyzersType">
<xsd:sequence>
<xsd:element name="MessageAnalyzer" type="tns:MessageAnalyzerType"
maxOccurs="unbounded" minOccurs="0">
</xsd:element>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="SrcMsgsType">
<xsd:sequence>
<xsd:element name="SrcMsg" type="tns:SrcMsgType"
maxOccurs="unbounded" minOccurs="1">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="transformation" type="xsd:string"></xsd:attribute>
</xsd:complexType>
<xsd:complexType name="SrcMsgType">
<xsd:attribute name="contractId" type="xsd:string"></xsd:attribute>
<xsd:attribute name="termId" type="xsd:string"></xsd:attribute>
<xsd:attribute name="isResponse" type="xsd:boolean" use="optional"
default="false"></xsd:attribute>
</xsd:complexType>
</xsd:schema>
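To illustrate the message-mapping types above, the following is a hypothetical fragment using the `SrcMsgsType` and `ResultMsgsType` definitions. The attribute names come from the schema; the element names `SrcMsgs` and `ResultMsgs`, the contract/term ids, and the transformation file names are assumptions for illustration only (the elements that use these types are declared elsewhere in serendip.xsd).

```xml
<!-- Illustrative only: element names, ids and file names are assumptions. -->
<SrcMsgs transformation="mergeTowReq.xsl">
  <SrcMsg contractId="CO_CASE_TT" termId="tTow" isResponse="false"/>
</SrcMsgs>
<ResultMsgs>
  <ResultMsg contractId="CO_TT_GR" termId="tNotify"
      isResponse="false" transformation="splitTowReq.xsl"/>
</ResultMsgs>
```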
contract.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/contract"
xmlns:tns="http://www.swin.edu.au/ict/road/contract"
xmlns:term="http://www.swin.edu.au/ict/road/term"
xmlns:monitor="http://www.swin.edu.au/ict/road/monitor"
xmlns:ser="http://www.ict.swin.edu.au/serendip/types"
elementFormDefault="unqualified" version="1.4"
xmlns:Q1="http://www.ict.swin.edu.au/serendip/types">
<!-- schema imports -->
<xsd:import schemaLocation="serendip.xsd"
namespace="http://www.ict.swin.edu.au/serendip/types"></xsd:import>
<xsd:import namespace="http://www.swin.edu.au/ict/road/term"
schemaLocation="term.xsd" />
<xsd:import namespace="http://www.swin.edu.au/ict/road/monitor"
schemaLocation="monitor.xsd" />
<xsd:complexType name="ContractLinkedFactType">
<xsd:attribute name="name" type="xsd:string" use="required" />
</xsd:complexType>
<!-- contract type -->
<xsd:complexType name="ContractType">
<xsd:sequence>
<xsd:element name="Types" minOccurs="1" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:any />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="LinkedFacts" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Fact" type="tns:ContractLinkedFactType"
minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Abstract" type="xsd:boolean"
minOccurs="0" maxOccurs="1" />
<xsd:element name="State" type="xsd:string" minOccurs="1"
maxOccurs="1" />
<xsd:element name="Terms" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Term" type="term:TermType"
minOccurs="0" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Monitors" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Monitor" type="monitor:MonitorRoleType"
minOccurs="0" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="RoleAID" type="xsd:string" minOccurs="1"
maxOccurs="1" />
<xsd:element name="RoleBID" type="xsd:string" minOccurs="1"
maxOccurs="1" />
<xsd:element name="Description" type="xsd:string"
minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<!-- contract attributes -->
<xsd:attribute name="id" type="xsd:string" use="required" />
<xsd:attribute name="name" type="xsd:string" />
<xsd:attribute name="type" type="xsd:string" />
<xsd:attribute name="ruleFile" type="xsd:string" use="required" />
</xsd:complexType>
</xsd:schema>
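The following is a hypothetical contract descriptor conforming to `ContractType` above. Element and attribute names are taken from contract.xsd and term.xsd; the ids, role names, rule file, and the car-towing scenario are invented for illustration, and the required `Types` content (an `xsd:any` wildcard) is elided.

```xml
<!-- Illustrative only: ids, values and the towing scenario are assumptions. -->
<Contract id="CO_CASE_TT" name="OfficerTowingContract" ruleFile="CO_CASE_TT.drl">
  <Types>
    <!-- embedded message type definitions (xsd:any) -->
  </Types>
  <Abstract>false</Abstract>
  <State>ACTIVE</State>
  <Terms>
    <Term id="tTow" name="Tow" messageType="push" deonticType="obligation">
      <Operation name="tow">
        <Parameters>
          <Parameter>
            <Type>String</Type>
            <Name>pickupLocation</Name>
          </Parameter>
        </Parameters>
      </Operation>
      <Direction>AtoB</Direction>
      <Description>Role A obliges Role B to tow the car.</Description>
    </Term>
  </Terms>
  <RoleAID>CO</RoleAID>
  <RoleBID>TT</RoleBID>
  <Description>Contract between the case officer (CO) and tow truck (TT) roles.</Description>
</Contract>
```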
role.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/role"
xmlns:tns="http://www.swin.edu.au/ict/road/role"
xmlns:player="http://www.swin.edu.au/ict/road/player" elementFormDefault="unqualified"
version="1.3"
xmlns:Q1="http://www.ict.swin.edu.au/serendip/types">
<!-- schema imports -->
<xsd:import namespace="http://www.ict.swin.edu.au/serendip/types"
schemaLocation="serendip.xsd" />
<xsd:import namespace="http://www.swin.edu.au/ict/road/contract"
schemaLocation="contract.xsd" />
<xsd:complexType name="RoleLinkedFactType">
<xsd:sequence>
<xsd:element name="AcquisitionRegime">
<xsd:complexType>
<xsd:attribute name="mode" use="required">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="Active" />
<xsd:enumeration value="Passive" />
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="SyncInterval" type="xsd:int"
use="required" />
</xsd:complexType>
</xsd:element>
<xsd:element name="ProvisionRegime">
<xsd:complexType>
<xsd:attribute name="mode" use="required">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="Active" />
<xsd:enumeration value="Passive" />
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="SyncInterval" type="xsd:int" use="required" />
</xsd:complexType>
</xsd:element>
<xsd:element name="OnChange" type="xsd:boolean" />
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required" />
<xsd:attribute name="monitor" type="xsd:boolean" use="required" />
<xsd:attribute name="provide" type="xsd:boolean" use="required" />
</xsd:complexType>
<!-- role type -->
<xsd:complexType name="RoleType">
<xsd:sequence>
<xsd:element name="Description" type="xsd:string"
minOccurs="0" maxOccurs="1" />
<xsd:element name="LinkedFacts" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Fact" type="tns:RoleLinkedFactType"
minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="ManagementResponsibilities" type="xsd:string"
minOccurs="0" maxOccurs="1" />
<xsd:element name="Tasks" type="Q1:TasksType" maxOccurs="1"
minOccurs="0"></xsd:element>
</xsd:sequence>
<!-- role attributes -->
<xsd:attribute name="id" type="xsd:string" use="required" />
<xsd:attribute name="name" type="xsd:string" />
</xsd:complexType>
</xsd:schema>
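A hypothetical role descriptor conforming to `RoleType` above is shown below. The structure follows role.xsd, including a linked fact with its required acquisition and provision regimes; the role id, fact name, and interval values are illustrative assumptions (the optional `Tasks` element, typed in serendip.xsd, is omitted).

```xml
<!-- Illustrative only: ids, names and interval values are assumptions. -->
<Role id="TT" name="TowTruck">
  <Description>Tows the car from the accident location to the repairer.</Description>
  <LinkedFacts>
    <Fact name="TowLocation" monitor="true" provide="false">
      <AcquisitionRegime mode="Active" SyncInterval="5000"/>
      <ProvisionRegime mode="Passive" SyncInterval="0"/>
      <OnChange>true</OnChange>
    </Fact>
  </LinkedFacts>
</Role>
```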
player.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/player"
xmlns:tns="http://www.swin.edu.au/ict/road/player"
elementFormDefault="unqualified" version="1.3">
<!-- player type -->
<xsd:complexType name="PlayerBindingType">
<xsd:sequence>
<xsd:choice>
<xsd:element name="Endpoint" type="xsd:anyURI" />
<xsd:element name="Implementation" type="xsd:string" />
</xsd:choice>
<xsd:element name="Roles" minOccurs="0" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="RoleID" type="xsd:string"
minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Description" type="xsd:string"
minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<!-- player attributes -->
<xsd:attribute name="id" type="xsd:string" use="required" />
<xsd:attribute name="name" type="xsd:string" />
</xsd:complexType>
</xsd:schema>
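A hypothetical player binding conforming to `PlayerBindingType` above might look as follows. The schema's choice between `Endpoint` and `Implementation` is resolved here to an endpoint URI; the player id, URI, and role id are illustrative assumptions.

```xml
<!-- Illustrative only: the id, endpoint URI and role id are assumptions. -->
<Player id="pTomsTowing" name="TomsTowing">
  <Endpoint>http://example.org/services/TowingService</Endpoint>
  <Roles>
    <RoleID>TT</RoleID>
  </Roles>
  <Description>Binds the TomsTowing web service to the TowTruck role.</Description>
</Player>
```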
fact.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/fact"
xmlns:tns="http://www.swin.edu.au/ict/road/fact"
elementFormDefault="unqualified" version="1.3">
<!-- fact type -->
<xsd:complexType name="FactType">
<xsd:sequence>
<xsd:element name="Identifier" type="xsd:string"
minOccurs="1" maxOccurs="1" />
<xsd:element name="Attributes" minOccurs="1" maxOccurs="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Attribute" type="xsd:string"
minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<!-- fact attributes -->
<xsd:attribute name="name" type="xsd:string" />
<xsd:attribute name="source" type="xsd:string" />
</xsd:complexType>
</xsd:schema>
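A hypothetical fact instance conforming to `FactType` above is sketched below. The element names follow fact.xsd; the fact name, source, identifier, and attribute values are illustrative assumptions.

```xml
<!-- Illustrative only: the name, source and attribute values are assumptions. -->
<Fact name="TowLocation" source="TT">
  <Identifier>caseId</Identifier>
  <Attributes>
    <Attribute>pickupAddress</Attribute>
    <Attribute>dropOffAddress</Attribute>
  </Attributes>
</Fact>
```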
term.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/term"
xmlns:tns="http://www.swin.edu.au/ict/road/term"
elementFormDefault="unqualified" version="1.3">
<!-- term type -->
<xsd:complexType name="TermType">
<xsd:sequence>
<xsd:element name="Operation" type="tns:OperationType" maxOccurs="1"
minOccurs="1" />
<xsd:element name="Direction" type="tns:DirectionType" minOccurs="1"
maxOccurs="1" />
<xsd:element name="Description" type="xsd:string" minOccurs="0" maxOccurs="1"
/>
</xsd:sequence>
<!-- term attributes -->
<xsd:attribute name="id" use="required" type="xsd:string" />
<xsd:attribute name="name" type="xsd:string" />
<xsd:attribute name="messageType" default="push" use="optional">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="push" />
<xsd:enumeration value="pull" />
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="deonticType" default="permission"
use="optional">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="permission" />
<xsd:enumeration value="obligation" />
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:complexType>
<!-- Simple type for direction -->
<xsd:simpleType name="DirectionType">
<xsd:restriction base="xsd:string">
<xsd:enumeration value="AtoB" />
<xsd:enumeration value="BtoA" />
</xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="OperationType">
<xsd:sequence>
<xsd:element name="Parameters" type="tns:ParamsType"
maxOccurs="1" minOccurs="0">
</xsd:element>
<xsd:element name="Return" type="xsd:string" maxOccurs="1"
minOccurs="0">
</xsd:element>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required" />
</xsd:complexType>
<xsd:complexType name="ParamsType">
<xsd:sequence>
<xsd:element name="Parameter" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Type" type="xsd:string" maxOccurs="1"
minOccurs="1" />
<xsd:element name="Name" type="xsd:string" maxOccurs="1"
minOccurs="1" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
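To complement the push-style term shown with the contract example, the following hypothetical term conforming to `TermType` above uses the schema's `pull` message type and `permission` deontic type, with a `Return` element instead of parameters. The ids and values are illustrative assumptions.

```xml
<!-- Illustrative only: ids and values are assumptions. -->
<Term id="tStatus" name="GetTowStatus" messageType="pull" deonticType="permission">
  <Operation name="getStatus">
    <Return>String</Return>
  </Operation>
  <Direction>BtoA</Direction>
  <Description>Role A is permitted to pull the towing status from Role B.</Description>
</Term>
```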
monitor.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.swin.edu.au/ict/road/monitor"
xmlns:tns="http://www.swin.edu.au/ict/road/monitor"
elementFormDefault="unqualified"
version="1.3">
<!-- monitor type -->
<xsd:complexType name="MonitorRoleType">
<!-- monitor attributes -->
<xsd:attribute name="id" type="xsd:string" use="required"/>
<xsd:attribute name="name" type="xsd:string"/>
</xsd:complexType>
</xsd:schema>
302
[1] M. Weske, Business Process Management: Concepts, Languages, Architectures: Springer, 2010.
[2] M. P. Papazoglou and W.-J. Heuvel, "Service oriented architectures: approaches, technologies and research issues," The VLDB Journal, vol. 16, pp. 389-415, 2007.
[3] G. K. Behara, "BPM and SOA: A Strategic Alliance " BPTrends, 2006. [4] F. Cummins, "BPM Meets SOA " in Handbook on Business Process Management1, J.
v. Brocke and M. Rosemann, Eds., ed: Springer Berlin / Heidelberg, 2010, pp. 461-479. [5] A. Barros and M. Dumas, "The Rise of Web Service Ecosystems," IEEE Computer
Society, vol. 8, pp. 31-37, 2006. [6] J. Sinur and J. B. Hill. (2010) Magic Quadrant for Business Process Management
Suites. Gartner Research. [7] W. M. P. van der Aalst, A. H. M. ter Hofstede, and M. Weske, "Business Process
Management: A Survey," Proc. 1st International Conference on Business Process Management, LNCS, 2003, pp. 1-12.
[8] B. Curtis, M. I. Kellner, and J. Over, "Process modeling," Commun. ACM, vol. 35, pp. 75-90, 1992.
[9] M. Hammer and J. Champy, "Reengineering the Corporation," Harper Business, New York., 1993.
[10] D. Simchi-Levi, P. Kaminsky, and E. Simchi-Levi, Designing and Managing the Supply Chain: Concepts, Strategies, and Case Studies: McGraw-Hill/Irwin, 2003.
[11] M. Havey, Essential Business Process Modeling: O'Reilly, 2005. [12] M. Klein and C. Dellarocas, "Designing robust business processes.," In Organizing
Business Knowledge: The MIT Process Handbook. MIT Press, Cambridge, MA. , pp. 434-438, 2003.
[13] OMG, "Business Process Modeling Notation (BPMN) Specification 1.1," in Object Management Group, , ed. Needham, MA, 2006.
[14] I. B. M. Odeh, S. Green and J. Sa, "Modelling Processes Using RAD and UML Activity Diagrams: an Exploratory Study."
[15] S. A. White, "Introduction to BPMN," IBM, 2004. [16] S. A. White, "Process Modeling Notations and Workflow Patterns," BPTrends, vol.
March, 2004. [17] A.-W. Scheer, Business Process Engineering: Reference Models for Industrial
Enterprises. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 1994. [18] BEA, IBM, Microsoft, SAP, AG, Siebel Systems, Business Process Execution
Language for Web Services Version 1.1 2003, http://www.ibm.com/developerworks/library/specification/ws-bpel/
[19] XLANG Overview. (2001). Available: http://xml.coverpages.org/xlang.html [20] I. Foster, et al., "Cloud Computing and Grid Computing 360-Degree Compared," Proc.
Grid Computing Environments Workshop, 2008. GCE '08, 2008, pp. 1-10. [21] D. Peleg, Distributed Computing: A Locality-Sensitive Approach: Society for Industrial
and Applied Mathematics, 2000. [22] S. Weerawarana, et al., Web Services Platform Architecture: SOAP, WSDL, WS-Policy,
WS-Addressing, WS-BPEL, WS-Reliable Messaging and More: Prentice Hall PTR, 2005.
[23] P. S. Tan, et al., "Issues and Approaches to Dynamic, Service-oriented Multi-enterprise Collaboration," Proc. Industrial Informatics, 2006 IEEE International Conference on, 2006, pp. 399-404.
Bibliography
303
[24] A. Barros, M. Dumas, and P. Oaks, "A Critical Overview of the Web Services Choreography Description Language," BPTrends, http://www.bptrends.com, 2005.
[25] "Web Services Choreography Description Language Version 1.0," http://www.w3.org/TR/ws-cdl-10/, 2004.
[26] G. Diaz and I. Rodriguez, "Automatically Deriving Choreography-Conforming Systems of Services," Proc. IEEE International Conference on Services Computing, SCC 2009, pp. 9-16.
[27] J. Eder, M. Lehmann, and A. Tahamtan, "Choreographies as federations of choreographies and orchestrations," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) vol. 4231 LNCS, ed. Tucson, AZ, 2006, pp. 183-192.
[28] J. Mendling and M. Hafner, "From Inter-Organizational Workflows to Process Execution: Generating BPEL from WS-CDL," ed, 2005.
[29] W. M. P. van der Aalst, et al., "Workflow Patterns," Distrib. Parallel Databases, vol. 14, pp. 5-51, 2003.
[30] WfMC, "Workflow Management Coalition Terminology & Glossary (WFMC-TC-1011)" Workflow Management Coalition, Winchester1999.
[31] M. Shi, et al., "Workflow management systems: a survey," Proc. International Conference on Communication Technology 1998, pp. 22-24
[32] H. Li, "XML and Industrial Standards for Electronic Commerce," Knowledge and Information Systems, vol. 2, pp. 487-497, 2000.
[33] C. Peltz, "Web services orchestration. a review of emerging technologies, tools, and standards " 2003.
[34] Business Process Modelling Language (BPML). (2003). Available: http://xml.coverpages.org/bpml.html
[35] WFMC, "Workflow Management Coalition Workflow Standard: Workflow Process Definition Interface – XML Process Definition Language (XPDL) (WFMCTC-1025). Technical report," Workflow Management Coalition, Lighthouse Point, Florida, USA,, 2002.
[36] F. Leymann, Web Services Flow Language (WSFL) Version 1.0,2001, http://itee.uq.edu.au/~infs7201/Assessments/Assignment%201%20Material/WSFL.pdf
[37] OASIS, Web Services Business Process Execution Language Version 2.0.,2006, http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.html
[38] W3C, Web Services Description Language (WSDL)Version 2.0 2007, http://www.w3.org/TR/wsdl20/
[39] J. Nitzsche, T. van Lessen, and F. Leymann, "WSDL 2.0 Message Exchange Patterns: Limitations and Opportunities," Proc. Internet and Web Applications and Services, 2008. ICIW '08. Third International Conference on, 2008, pp. 168-173.
[40] E. M. Bahsi, E. Ceyhan, and T. Kosar, "Conditional workflow management: A survey and analysis," Scientific Programming, vol. 15, pp. 283-297, 2007.
[41] M. Papazoglou, et al., "Service-Oriented Computing Research Roadmap," International Journal of Cooperative Information Systems . , vol. 17, pp. 223-255, 2008.
[42] R. Mietzner and F. Leymann, "Generation of BPEL Customization Processes for SaaS Applications from Variability Descriptors," Proc. Services Computing (SCC), 2008, pp. 359-366.
[43] J. Sathyan and K. Shenoy, "Realizing unified service experience with SaaS on SOA," Proc. Communication Systems Software and Middleware and Workshops, COMSWARE '08., 2008, pp. 327-332.
[44] M. Campbell-Kelly, "Historical reflections. The rise, fall, and resurrection of software as a service," Communications ACM, vol. 52, pp. 28-30, 2009.
[45] P. A. Laplante, Z. Jia, and J. Voas, "What's in a Name? Distinguishing between SaaS and SOA," IT Professional, vol. 10, pp. 46-50, 2008.
304
[46] J. Rake, et al., "Communication-Enabled Business Processes as a Service," Proc. Intelligence in Next Generation Networks, 2009. ICIN 2009. 13th International Conference on, 2009, pp. 1-5.
[47] M. Kapuruge, A. Colman, and J. Han, "Achieving Multi-tenanted Business Processes in SaaS Applications," Proc. Web Information System Engineering (WISE), Sydney, Australia, 2011, pp. 143-157.
[48] A. Colman and J. Han, "Using role-based coordination to achieve software adaptability," Science of Computer Programming, vol. 64, pp. 223-245, 2007.
[49] M. P. Didier and H. L. Alexander, "Adaptation as a Morphing Process: A Methodology for the Design and Evaluation of Adaptive Organizational Structures," Comput. Math. Organ. Theory, vol. 4, pp. 5-41, 1998.
[50] R. Taylor, N. Medvidovic, and P. Oreizy, "Architectural styles for runtime software adaptation," Proc. 3rd European Conference on Software Architecture (ECSA), Cambridge, United Kingdom, 2009, pp. 171-180.
[51] S. Nurcan, "A Survey on the Flexibility Requirements Related to Business Processes and Modeling Artifacts," Hawaii International Conference on System Sciences, vol. 0, pp. 378-388, 2008.
[52] V. Monfort and S. Hammoudi, "Towards Adaptable SOA: Model Driven Development, Context and Aspect " in Service-Oriented Computing. vol. 5900, L. Baresi, et al., Eds., ed: Springer Berlin / Heidelberg, 2009, pp. 175-189.
[53] D. Karastoyanova, et al., "Extending BPEL for run time adaptability," Proc. EDOC Enterprise Computing Conference, 2005, pp. 15-26.
[54] V. Muthusamy, et al., "SLA-driven business process management in SOA," Proc. 2007 conference of the center for advanced studies on Collaborative research, Richmond Hill, Ontario, Canada, 2007.
[55] W. M. P. van Der Aalst, "Exterminating the Dynamic Change Bug: A Concrete Approach to Support Workflow Change," Information Systems Frontiers, vol. 3, pp. 297-317, 2001.
[56] W. M. P. van der Aalst and S. Jablonski, "Dealing with workflow change: identification of issues and solutions," International Journal of Computer Systems Science and Engineering, vol. 15, pp. 267-276, 2000.
[57] R. Fang, et al., "Dynamic Support for BPEL Process Instance Adaptation," Proc. IEEE International Conference on Services Computing, 2008, pp. 327-334.
[58] I. Bider and A. Striy, "Controlling business process instance flexibility via rules of planning," Int. J. Business Process Integration and Management, vol. 3, pp. 15-25, 2008.
[59] P. Heinl, et al., "A comprehensive approach to flexibility in workflow management systems," Proc. International joint conference on Work activities coordination and collaboration, San Francisco, California, United States, 1999, pp. 79-88.
[60] K. Kumar and M. M. Narasipuram, "Defining Requirements for Business Process Flexibility," Proc. Workshop on Business Process Modeling, Development, and Support (BPMDS'06) Requirements for flexibility and the ways to achieve it (CAiSE'06), 2006.
[61] D. Müller, M. Reichert, and J. Herbst, "Flexibility of Data-Driven Process Structures," in Business Process Management Workshops, ed, 2006, pp. 181-192.
[62] G. Regev, P. Soffer, and R. Schmidt, "Taxonomy of Flexibility in Business Processes," 2006.
[63] H. Schonenberg, et al., "Process Flexibility: A Survey of Contemporary Approaches," in Advances in Enterprise Engineering I, ed, 2008, pp. 16-30.
[64] M. H. Schonenberg, et al., "Towards a Taxonomy of Process Flexibility (Extended Version)." BPM Center Report vol. BPM-07-11, 2007.
[65] O. Ezenwoye and S. M. Sadjadi, "RobustBPEL2: Transparent Autonomization in Business Processes through Dynamic Proxies," Proc. International Symposium on Autonomous Decentralized Systems (ISADS), 2007, pp. 17-24.
305
[66] D. Karastoyanova and F. Leymann, "BPEL'n'Aspects: Adapting Service Orchestration Logic," Proc. Web Services, 2009. ICWS 2009. IEEE International Conference on, 2009, pp. 222-229.
[67] K. Michiel, et al., "VxBPEL: Supporting variability for Web services in BPEL," Information and Software Technology, vol. 51, pp. 258-269, 2009.
[68] A. Charfi, "Aspect-Oriented Workow Languages: AO4BPEL and Applications," PhD, Darmstadt University of Technology, Darmstadt, Germany, 2007.
[69] C. W. Guenther, M. Reichert, and W. M. P. van der Aalst, "Supporting Flexible Processes with Adaptive Workflow and Case Handling," Proc. IEEE 17th Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2008, pp. 229-234.
[70] W. M. P. van der Aalst, M. Weske, and D. Gr¨unbauer, "Case Handling: A New Paradigm for Business Process Support," Data and Knowledge Engineering, vol. 53, pp. 129-162, 2005.
[71] W. M. P. van der Aalst and M. Pesic, "DecSerFlow: Towards a Truly Declarative Service Flow Language," in Web Services and Formal Methods, ed, 2006, pp. 1-23.
[72] J. Wainer and F. de Lima Bezerra, "Constraint-Based Flexible Workflows," in Groupware: Design, Implementation, and Use. vol. 2806, ed: Springer Berlin / Heidelberg, 2003, pp. 151-158.
[73] M. Pesic and W. M. P. van der Aalst, "A Declarative Approach for Flexible Business Processes Management," in Business Process Management Workshops, ed, 2006, pp. 169-180.
[74] G. Kiczales and E. Hilsdale, "Aspect-oriented programming," SIGSOFT Softw. Eng. Notes, vol. 26, p. 313, 2001.
[75] K. Czarnecki, "Generative Programming: Principles and Techniques of Software Engineering Based on Automated Configuration and Fragment-Based Component Models," Technical University of Ilmenau, 1998.
[76] G. Regev, I. Bider, and A. Wegmann, "Defining business process flexibility with the help of invariants," Software Process: Improvement and Practice, vol. 12, pp. 65-79, 2007.
[77] M. Kapuruge, A. Colman, and J. Han, "Controlled flexibility in business processes defined for service compositions," Proc. Services Computing (SCC) Washington DC., 2011, pp. 346-353.
[78] M. Kapuruge, J. Han, and A. Colman, "Support for business process flexibility in service compositions: An evaluative survey," Proc. Australian Software Engineering Conference- ASWEC. IEEE Computer Society. , Auckland, NZ, 2010, pp. 97--106.
[79] J. Han and A. Colman, "The Four Major Challenges of Engineering Adaptive Software Architectures," Proc. Computer Software and Applications Conference, 2007. COMPSAC 2007. 31st Annual International, 2007, pp. 565-572.
[80] C. Baier and J.-P. Katoen, Principles of Model Checking: The MIT Press, 2008. [81] A. Colman, "Role-Oriented Adaptive Design," Ph.D. PhD Thesis. , Swinburne
University of Technology, Melbourne, 2007. [82] A. Colman, "Exogenous management in autonomic service compositions," Third
International Conference on Autonomic and Autonomous Systems, pp. 19-25, 2007. [83] A. Colman and J. Han, "Roles, players and adaptable organizations," Applied Ontology,
vol. 2, pp. 105-126, 2007. [84] B. Weber, S. Rinderle, and M. Reichert, "Change Patterns and Change Support Features
in Process-Aware Information Systems," in Advanced Information Systems Engineering, ed, 2007, pp. 574-588.
[85] B. Weber, S. B. Rinderle, and M. Reichert, "Identifying and evaluating change patterns and change support features in process-aware information systems. Technical Report No. TR-CTIT-07-22" Univ. of Twente, The Netherlands2007.
306
[86] L. Chen and X. Lu, "Achieving Business Agility by Integrating SOA and BPM Technology," Proc. Information Technology and Applications, 2009. IFITA '09. International Forum on, 2009, pp. 334-337.
[87] L. Hyun Jung, C. Si Won, and K. Soo Dong, "Technical Challenges and Solution Space for Developing SaaS and Mash-Up Cloud Services," Proc. e-Business Engineering, 2009. ICEBE '09. IEEE International Conference on, 2009, pp. 359-364.
[88] A. Azeez, et al., "Multi-tenant SOA Middleware for Cloud Computing," Proc. IEEE 3rd International Conference on Cloud Computing, 2010, pp. 458-465
[89] G. Chang Jie, et al., "A Framework for Native Multi-Tenancy Application Development and Management," Proc. Enterprise Computing, CEC/EEE . , 2007, pp. 551-558.
[90] Architecture Strategies for Catching the Long Tail, MSDN Library. (2006). Available: http://msdn.microsoft.com/en-us/library/aa479069.aspx
[91] H. Kitagawa, et al., "A General Maturity Model and Reference Architecture for SaaS Service," in Database Systems for Advanced Applications. vol. 5982, ed: Springer Berlin / Heidelberg, pp. 337-346.
[92] R. Mietzner, F. Leymann, and M. P. Papazoglou, "Defining Composite Configurable SaaS Application Packages Using SCA, Variability Descriptors and Multi-tenancy Patterns," Proc. Internet and Web Applications and Services(ICIW), 2008, pp. 156-161.
[93] J. Jeston and J. Nelis, Business Process Management: Practical Guidelines to Successful Implementations: Butterworth-Heinemann, 2008.
[94] J. F. Chang, Business Process Management Systems: Strategy And Implementation: Auerbach Publications, 2006.
[95] R. K. L. Ko, "A computer scientist's introductory guide to business process management (BPM)," Crossroads, vol. 15, pp. 11-18, 2009.
[96] R. Milner, A calculus of communicating systems: Springer-Verlag, 1980. [97] R. Milner, Communicating and Mobile Systems: The [symbol for Pi]-calculus:
Cambridge University Press, 1999. [98] C. A. Petri, Kommunikation mit Automaten: Technische Hochschule, Darmstadt., 1962. [99] T. Murata, "Petri nets: Properties, analysis and applications," Proceedings of the IEEE,
vol. 77, pp. 541-580, 1989. [100] H. Smith and P. Fingar, Business Process Management: The Third Wave: Meghan-
Kiffer Press, 2006. [101] H. Reijers, et al., "BPM in Practice: Who Is Doing What? Business Process
Management." vol. 6336, R. Hull, et al., Eds., ed: Springer Berlin / Heidelberg, 2010, pp. 45-60.
[102] R. Ko, S. Lee, and E. Lee, "Business process management (BPM) standards: a survey," Business Process Management Journal, vol. 15, pp. 744-791, 2009.
[103] W. M. P. van der Aalst and A. H. M. ter Hofstede, "YAWL: yet another workflow language," Information Systems, vol. 30, pp. 245-275, 2005.
[104] T. Andrews, et al., "Business Process Execution Language for Web Services version 1.1," ed: BEA Systems, International Business Machines Corporation, Microsoft Corporation, SAP AG, Siebel Systems 2003.
[105] W. M. P. Van der Aalst, "Patterns and XPDL: A Critical Evaluation of the XML Process Definition Language," Department of Technology Management Eindhoven University of Technology, The Netherlands, 2003.
[106] H. Weigand, W. van den Heuvel, and M. Hiel, "Rule-Based Service Composition and Service-Oriented Business Rule Management," INFOLAB, Tilburg University, Warandelaan 2, The Netherlands., 2009.
[107] P. Baptiste, C. L. Pape, and W. Nuijten, Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems: Kluwer Academic, 2001.
[108] I. Rychkova, G. Regev, and A. Wegmann, "High-level design and analysis of business processes the advantages of declarative specifications," Proc. Research Challenges in Information Science, 2008. RCIS 2008. Second International Conference on, 2008, pp. 99-110.
307
[109] OMG, "Object Constraint Language, Version 2.2," ed: , 2010. [110] M. Pesic, et al., "Constraint-Based Workflow Models: Change Made Easy," Proc. On
the Move to Meaningful Internet Systems, 2007, pp. 77-94. [111] R. Lu, et al., "Using a temporal constraint network for business process execution,"
Proc. The17th Australasian Database Conference, Hobart, Australia, 2006, pp. 157-166 [112] R. Hull, "Artifact-Centric Business Process Models: Brief Survey of Research Results
and Challenges," in On the Move to Meaningful Internet Systems: OTM 2008, ed, 2008, pp. 1152-1163.
[113] A. Nigam and N. S. Caswell, "Business artifacts: An approach to operational specification," IBM Systems Journal vol. 42, pp. 428 - 445 2003.
[114] Pallas Athena : Case Handling with FLOWer: Beyond workflow (2002). Available: http://www.pallas-athena.com/
[115] I. Vanderfeesten, H. A. Reijers, and W. M. P. van der Aalst, "Case Handling Systems as Product Based Workflow Design Support," in Enterprise Information Systems, ed, 2009, pp. 187-198.
[116] A. W. Swan, "The gantt chart as an aid to progress control," Production Engineers, Journal of the Institution of, vol. 21, pp. 402-414, 1942.
[117] G. Di Battista, et al., "Automatic layout of PERT diagrams with X-PERT," Proc. Visual Languages, 1989., IEEE Workshop on, 1989, pp. 171-176.
[118] N. L. Wu and J. A. Wu, Introduction to management science: a contemporary approach: Rand McNally College Pub. Co., 1980.
[119] A. Caetano, et al., "A Role-Based Framework for Business Process Modeling," Proc. The 38th Annual Hawaii International Conference on System Sciences, 2005, pp. 13-19.
[120] P. Balabko, et al., "The Value of Roles in Modeling Business Processes," Proc. CAiSE Workshops, Riga, Latvia, 2004, pp. 207-214.
[121] M. A. Ould, Business Processes: Modelling and Analysis for Re-Engineering and Improvement: John Wiley & Sons, 1995.
[122] T. Reenskaug, Working With Objects: The OOram Software Engineering Method: Manning Prentice Hall 1995.
[123] I. Vanderfeesten, H. Reijers, and W. M. P. van der Aalst, "Product Based Workflow Support: Dynamic Workflow Execution," in Advanced Information Systems Engineering, ed, 2008, pp. 571-574.
[124] B. Singh and G. Rein, " Role Interaction Nets (RINs): A process Description Formalism," MCC. Austin, TX, USA,Technical Report CT-083-92, 1992.
[125] L. Fischer, 2007 BPM & Workflow Handbook: Future Strategies, 2007. [126] L. Yang, H. Enzhao, and C. Xudong, "Architecture of Information System Combining
SOA and BPM," Proc. Information Management, Innovation Management and Industrial Engineering, 2008. ICIII '08. International Conference on, 2008, pp. 42-45.
[127] M. Fiammante, Dynamic SOA and BPM: Best Practices for Business Process Management and SOA Agility: IBM Press/Pearson, 2010.
[128] D. Krafzig, K. Banke, and D. Slama, Enterprise Soa: Service-Oriented Architecture Best Practices: Prentice Hall Professional Technical Reference, 2005.
[129] A. Guceglioglu and O. Demirors, "Using Software Quality Characteristics to Measure Business Process Quality
Business Process Management." vol. 3649, W. van der Aalst, et al., Eds., ed: Springer Berlin / Heidelberg, 2005, pp. 374-379.
[130] I. Vanderfeesten, H. A. Reijers, and W. M. P. v. d. Aalst, "Evaluating workflow process designs using cohesion and coupling metrics," Comput. Ind., vol. 59, pp. 420-437, 2008.
[131] W. Van Der Aalst , et al., Workflow Management: Models, Methods, and Systems 2002. [132] W. van der Aalst, "Don't go with the flow: Web services composition standards
exposed," ed, 2003. [133] IBM WebSphere MQ Workflow Available: http://www-
01.ibm.com/software/integration/wmqwf
308
[134] A. Arkin, S. Askary, and S. Fordin, "Web Service Choreography Interface (WSCI) 1.0. Technical report http://www.w3.org/TR/wsci/," 2002.
[135] ebXML Business Process Specification Schema 6 Version 1.01 2001, http://www.ebxml.org/
[136] W. M. P. van der Aalst, et al., "Life After BPEL?," in Formal Techniques for Computer Systems and Business Processes, ed, 2005, pp. 35-50.
[137] T. Andrews, et al., "Web Services Business Process Execution Language Version 2.0." OASIS 2007.
[138] I. Rychkova and A. Wegmann, "Refinement Propagation. Towards Automated Construction of Visual Specifications," ed, 2007.
[139] Y. Wu and P. Doshi, "Making BPEL Flexible – Adapting in the Context of Coordination Constraints Using WS-BPEL," IEEE International Conference on Services Computing, vol. 1, pp. 423-430, 2008.
[140] O. Ezenwoye and S. M. Sadjadi, "TRAP/BPEL: A Framework for Dynamic Adaptation of Composite Services.," Proc. WEBIST’2007, Barcelona, Spain, 2007.
[141] X. Yonglin and W. Jun, "Context-Driven Business Process Adaptation for Ad Hoc Changes," Proc. e-Business Engineering, 2008. ICEBE '08. IEEE International Conference on, 2008, pp. 53-60.
[142] G. Regev and A. Wegmann, "Why Do We Need Business Process Support? Balancing Specialization and Generalization with BPS Systems," Proc. 4th BPMDS Workshop on Requirements Engineering for Business Process Support, 2003.
[143] M. J. Adams, et al., "Worklets: A Service-Oriented Implementation of Dynamic Flexibility in Workflows," On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, pp. 291-308, 2006.
[144] K. Knoll and S. L. Jarvenpaa, "Information technology alignment or 'fit' in highly turbulent environments: the concept of flexibility," Proc. Computer Personnel Research Conference on Reinventing IS: Managing Information Technology in Changing Organizations, Alexandria, Virginia, United States, 1994, pp. 1-14.
[145] S. W. Sadiq, W. Sadiq, and M. E. Orlowska, "Pockets of Flexibility in Workflow Specification," Proc. The 20th International Conference on Conceptual Modeling: Conceptual Modeling, 2001, pp. 513-526.
[146] M. J. Adams, "Facilitating Dynamic Flexibility and Exception Handling for Workflows," PhD, Faculty of Information Technology, QUT, Brisbane, 2007.
[147] N. Mulyar and M. Pesic, "Declarative and Procedural Approaches for Modelling Clinical Guidelines: Addressing Flexibility Issues," Informal Proceedings of ProHealth '07, Brisbane, Australia., 2007.
[148] P. Soffer, "On the Notion of Flexibility in Business Processes," Proc. Workshop on Business Process Modeling, Design and Support (BPMDS05), 2005, pp. 35-42.
[149] J. Yu, Q. Sheng, and J. Swee, "Model-Driven Development of Adaptive Service-Based Systems with Aspects and Rules," Proc. Web Information System Engineering (WISE), 2010, pp. 548-563.
[150] M. Pesic, "Constraint-Based Workflow Management Systems: Shifting Control to User," Eindhoven University of Technology, Eindhoven, 2008.
[151] S. A. Gurguis and A. Zeid, "Towards autonomic web services: achieving self-healing using web services," SIGSOFT Softw. Eng. Notes, vol. 30, pp. 1-5, 2005.
[152] O. Ezenwoye and S. M. Sadjadi, "Enabling Robustness in Existing BPEL Processes," Proc. 8th International Conference on Enterprise Information Systems (ICEIS), 2006, pp. 95-102.
[153] M. Blow, et al., "BPELJ: BPEL for Java," Joint Whitepaper, 2004, pp. 1-24.
[154] S. M. Sadjadi and P. K. McKinley, "Using Transparent Shaping and Web Services to Support Self-Management of Composite Systems," Proc. The Second International Conference on Autonomic Computing, 2005, pp. 76-87.
[155] S. M. Sadjadi, P. K. McKinley, and B. H. C. Cheng, "Transparent shaping of existing software to support pervasive and autonomic computing," SIGSOFT Softw. Eng. Notes, vol. 30, pp. 1-7, 2005.
[156] F. Casati, et al., "Adaptive and Dynamic Service Composition in eFlow," in Advanced Information Systems Engineering, ed, 2000, pp. 13-31.
[157] G. Canfora, et al., "A framework for QoS-aware binding and re-binding of composite web services," J. Syst. Softw., vol. 81, pp. 1754-1769, 2008.
[158] J. Cardoso, et al., "Quality of service for workflows and web service processes," Journal of Web Semantics, vol. 1, pp. 281-308, 2004.
[159] M. Weske, "Flexible modeling and execution of workflow activities," Proc. The Thirty-First Hawaii International Conference on System Sciences, 1998, pp. 713-722 vol.7.
[160] C. Medeiros, G. Vossen, and M. Weske, "WASA: A workflow-based architecture to support scientific database applications," in Database and Expert Systems Applications, vol. 978, N. Revell and A. Tjoa, Eds., Springer Berlin / Heidelberg, 1995, pp. 574-583.
[161] G. Vossen and M. Weske, "The WASA Approach to Work-flow Management for Scientific Applications," in Workflow Management Systems and Interoperability, NATO ASI Workshop. vol. 164, ed: Computer and Systems Sciences, Berlin: Springer 1997, pp. 145-164.
[162] M. Weske, G. Vossen, and C. B. Medeiros, "Scientific Workflow Management: WASA Architecture and Applications. Technical Report 03/96-I," Universität Münster, 1996.
[163] C. Ellis and C. Maltzahn, "The Chautauqua workflow system," Proc. The Thirtieth Hawaii International Conference on System Sciences, 1997, pp. 427-436 vol.4.
[164] F. Casati, et al., "Workflow evolution," Data and Knowledge Engineering, vol. 24, pp. 211-238, 1998.
[165] F. Casati, et al., "WIDE Workflow model and architecture, Technical Report," University of Milano, Italy, 1996.
[166] C. A. Ellis and G. J. Nutt, "Office Information Systems and Computer Science," ACM Comput. Surv., vol. 12, pp. 27-60, 1980.
[167] M. Reichert and P. Dadam, "ADEPTflex -Supporting Dynamic Changes of Workflows Without Losing Control," Journal of Intelligent Information Systems, vol. 10, pp. 93-129, 1998.
[168] M. Reichert, C. Hensinger, and P. Dadam, "Supporting Adaptive Workflows in Advanced Application Environments.," Proc. EDBT Workshop on Workflow Management Systems, 1998, pp. 100-109.
[169] P. Dadam, et al., "ADEPT2 - Next Generation Process Management Technology," Proc. Fourth Heidelberg Innovation Forum, Heidelberg, Germany, 2007.
[170] Y. Sanghyun, et al., "Rule-based Dynamic Business Process Modification and Adaptation," Proc. Information Networking, 2008. ICOIN 2008. International Conference on, 2008, pp. 1-5.
[171] R. G. Ross, Principles of the business rule approach: Addison-Wesley, 2003.
[172] T. Morgan, Business rules and information systems: aligning IT with business goals: Addison-Wesley, 2002.
[173] D. Hay and K. A. Healy, "GUIDE Business Rules Project, Final Report – revision 1.2," GUIDE International Corporation, Chicago, 1997.
[174] JESS: The Java Expert System Shell. Available: http://www.jessrules.com/
[175] JRules. Available: http://www-01.ibm.com/software/integration/business-rule-management/jrules/
[176] P. Browne, JBoss Drools Business Rules: Packt Publishing, 2009.
[177] L. Chen, M. Li, and J. Cao, "ECA Rule-Based Workflow Modeling and Implementation for Service Composition," IEICE - Trans. Inf. Syst., vol. E89-D, pp. 624-630, 2006.
[178] K. Geminiuc. (2007). A Services-Oriented Approach to Business Rules Development.
[179] F. Rosenberg and S. Dustdar, "Business rules integration in BPEL - a service-oriented approach," Proc. 7th IEEE International Conference on E-Commerce Technology (CEC), 2005, pp. 476-479.
[180] F. Rosenberg and S. Dustdar, "Design and implementation of a service-oriented business rules broker," Proc. E-Commerce Technology Workshops, 2005. Seventh IEEE International Conference on, 2005, pp. 55-63.
[181] T. Graml, R. Bracht, and M. Spies, "Patterns of business rules to enable agile business processes," Proc. IEEE International Conference on Enterprise Distributed Object Computing (EDOC), 2008, pp. 385-402.
[182] W3C, XSL transformations (XSLT) version 2.0. W3C working draft 2003, http://www.w3.org/TR/xslt20/
[183] M. Kay, XPath 2.0 Programmer's Reference (Programmer to Programmer): Wrox, 2004.
[184] M. Milanovic and D. Gasevic, "Towards a Language for Rule-Enhanced Business Process Modeling," Proc. Enterprise Distributed Object Computing Conference, 2009. EDOC '09. IEEE International, 2009, pp. 64-73.
[185] R2ML -- The REWERSE I1 Rule Markup Language. . Available: http://oxygen.informatik.tu-cottbus.de/rewerse-i1/?q=node/6
[186] J. Boyer and H. Mili, Agile Business Rule Development: Process, Architecture, and JRules Examples: Springer, 2011.
[187] C. Courbis and A. Finkelstein, "Towards aspect weaving applications," Proc. The 27th international conference on Software engineering, St. Louis, MO, USA, 2005, pp. 69-77.
[188] K. Sullivan, et al., "Information hiding interfaces for aspect-oriented design," Proc. The 10th European software engineering conference, Lisbon, Portugal, 2005, pp. 166-175.
[189] R. Laddad, AspectJ in Action: Practical Aspect-Oriented Programming: Dreamtech Press.
[190] A. Charfi and M. Mezini, "Hybrid web service composition: business processes meet business rules," Proc. 2nd International Conference on Service Oriented Computing (ICSOC), New York, NY, USA, 2004, pp. 30-38.
[191] C. Courbis and A. Finkelstein, "Weaving Aspects into Web Service Orchestrations," Proc. IEEE International Conference on Web Services, 2005, pp. 219-226.
[192] O. Moser, F. Rosenberg, and S. Dustdar, "Non-intrusive monitoring and service adaptation for WS-BPEL," Proc. 17th International Conference on World Wide Web, Beijing, China, 2008.
[193] M. Braem, et al., "Isolating Process-Level Concerns Using Padus," in Business Process Management, ed, 2006, pp. 113-128.
[194] A. Erradi, P. Maheshwari, and V. Tosic, "Policy-driven middleware for self-adaptation of web services compositions," Proc. Middleware 2006, Melbourne, Australia, 2006, pp. 62-80.
[195] J. Gradecki and N. Lesiecki, Mastering AspectJ: aspect-oriented programming in Java: Wiley, 2003.
[196] I. Jacobson, M. Griss, and P. Jonsson, Software reuse: architecture, process and organization for business success. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co. , 1997.
[197] Active-BPEL / ActiveVOS. (2011). Available: http://www.activevos.com/
[198] A. Erradi, P. Maheshwari, and V. Tosic, "Policy-Driven Middleware for Manageable and Adaptive Web Services Compositions," International Journal of Business Process Integration and Management, vol. 2, pp. 187-202, 2007.
[199] V. Tosic, A. Erradi, and P. Maheshwari, "WS-Policy4MASC - A WS-Policy Extension Used in the MASC Middleware," Proc. Services Computing, 2007. SCC 2007. IEEE International Conference on, 2007, pp. 458-465.
[200] D. Robinson, Aspect-Oriented Programming With the E Verification Language: A Pragmatic Guide for Testbench Developers: Elsevier/Morgan Kaufmann, 2007.
[201] R. Toledo, et al., "Aspectizing Java Access Control," Software Engineering, IEEE Transactions on, vol. 38, pp. 101-117, 2012.
[202] U. Eisenecker, "Generative programming (GP) with C++," in Modular Programming Languages, vol. 1204, H. Mössenböck, Ed., Springer Berlin / Heidelberg, 1997, pp. 351-365.
[203] K. Geebelen, S. Michiels, and W. Joosen, "Dynamic reconfiguration using template based web service composition," Proc. 3rd workshop on Middleware for service oriented computing, Leuven, Belgium, 2008, pp. 49-54.
[204] S. Ruby, D. Thomas, and D. H. Hansson, Agile Web Development with Rails: Pragmatic Bookshelf, 2009.
[205] A. Lazovik and H. Ludwig, "Managing Process Customizability and Customization: Model, Language and Process," Proc. WISE, 2007.
[206] A. Iyengar, V. Jessani, and M. Chilanti, Websphere Business Integration Primer: Process Server, BPEL, SCA, and SOA. vol. 1: IBM Press, 2007.
[207] OSOA, SCA Service Component Architecture: Assembly Model Specification,2007, http://docs.oasis-open.org/opencsa/sca-assembly/sca-assembly-1.1-spec.html
[208] A. Schnieders and F. Puhlmann, "Variability Mechanisms in E-Business Process Families," Proc. International Conference on Business Information Systems, 2006.
[209] D. Ardagna, et al., "PAWS: A Framework for Executing Adaptive Web-Service Processes," IEEE Software, vol. 24, pp. 39-46, 2007.
[210] D. Ardagna, et al., "A Service-Based Framework for Flexible Business Processes," Software, IEEE, vol. 28, pp. 61-67, 2011.
[211] X. Zan, et al., "Towards a Constraint-Based Framework for Dynamic Business Process Adaptation," Proc. Services Computing (SCC), 2011 IEEE International Conference on, 2011, pp. 685-692.
[212] Q. Wu, C. Pu, and A. Sahai, "DAG Synchronization Constraint Language for Business Processes," Proc. 8th IEEE International Conference on E-Commerce Technology and 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, 2006, pp. 10-10.
[213] P. Mangan and S. Sadiq, "A constraint specification approach to building flexible workflows," Journal of Research and Practice in Information Technology, vol. 35, pp. 21-39, 2003.
[214] P. Mangan and S. Sadiq, "On building workflow models for flexible processes," Aust. Comput. Sci. Commun., vol. 24, pp. 103-109, 2002.
[215] I. Rychkova, G. Regev, and A. Wegmann, "Using Declarative Specifications in Business Process Design," International Journal of Computer Science & Applications, vol. 5, pp. 45-68, 2008.
[216] A. Wegmann, "On the Systemic Enterprise Architecture Methodology (SEAM)," Proc. International Conference on Enterprise Information Systems, 2003, pp. 483-490.
[217] D. Jackson, Software Abstractions: Logic, Language, and Analysis: The MIT Press, 2006.
[218] D. Jackson, Software Abstractions: Logic, Language and Analysis: MIT Press, 2006.
[219] M. Pesic, H. Schonenberg, and W. M. P. van der Aalst, "DECLARE: Full Support for Loosely-Structured Processes," Proc. 11th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2007), 2007, pp. 287-287.
[220] B. Mikolajczak and M. Shenoy, "Flexibility through case handling in careflow systems: A case study of cutaneous melanoma," Proc. Health Care Management (WHCM), 2010 IEEE Workshop on, 2010, pp. 1-6.
[221] D. Fahland, et al., "Declarative versus Imperative Process Modeling Languages: The Issue of Understandability " in Enterprise, Business-Process and Information Systems Modeling. vol. 29, ed: Springer Berlin Heidelberg, 2009, pp. 353-366.
[222] G. Kiczales, "Aspect-oriented programming," ACM Comput. Surv., vol. 28, p. 154, 1996.
[223] O. Ezenwoye and S. M. Sadjadi, "RobustBPEL-2: Transparent autonomization in aggregate web services using dynamic proxies," Autonomic Comput. Res. Lab., Florida Int. Univ., Miami, FL, 2006.
[224] J. N. Luftman, Competing in the Information Age: Align in the Sand: Oxford University Press, 2003.
[225] C. I. Barnard, The Functions of the Executive. Cambridge, MA: Harvard University Press, 1938.
[226] R. L. Daft, Organization Theory and Design: South-Western Cengage Learning, 2009.
[227] D. Rollinson, Organisational Behaviour and Analysis: An Integrated Approach: FT Prentice Hall, 2008.
[228] H. R. Maturana and F. J. Varela, Autopoiesis and Cognition: The Realization of the Living: D. Reidel Publishing Company, 1980.
[229] S. Thatte, XLANG: Web Services for Business Process Design, Microsoft, 2001.
[230] W3C, Web Services Choreography Description Language Version 1.0, 2004, http://www.w3.org/TR/2004/WD-ws-cdl-10-20041217/
[231] A. Mukhija and M. Glinz, "CASA – A Contract-based Adaptive Software Architecture Framework," Proc. 3rd IEEE Workshop on Applications and Services in Wireless Networks, 2003, pp. 275-286.
[232] A. Beugnard, et al., "Making Components Contract Aware," Computer, vol. 32, pp. 38-45, 1999.
[233] N. Desai, A. K. Chopra, and M. P. Singh, "Business Process Adaptations via Protocols," Proc. Services Computing, 2006. SCC '06. IEEE International Conference on, 2006, pp. 103-110.
[234] P. Collet, et al., "A Contracting System for Hierarchical Components," Proc. CBSE, 2005, pp. 187-202.
[235] G. Bernardi, et al., "A Theory of Adaptable Contract-Based Service Composition," Proc. Symbolic and Numeric Algorithms for Scientific Computing, International Symposium on, 2008, pp. 327-334.
[236] H. Chang and P. Collet, "Fine-grained contract negotiation for hierarchical software components," Proc. Software Engineering and Advanced Applications, 2005. 31st EUROMICRO Conference on, 2005, pp. 28-35.
[237] M. zur Muehlen and D. T. Ho, "Service Process Innovation: A Case Study of BPMN in Practice," Proc. 41st Annual Hawaii International Conference on System Sciences, 2008.
[238] U. Dayal, M. Hsu, and R. Ladin, "Organizing long-running activities with triggers and transactions," SIGMOD Rec., vol. 19, pp. 204-214, 1990.
[239] R. von Ammon, et al., "Existing and future standards for event-driven business process management," Proc. 3rd ACM International Conference on Distributed Event-Based Systems, Nashville, Tennessee, 2009, pp. 1-5.
[240] J. Atlee and J. Gannon, "State-based model checking of event-driven system requirements," Proc. Conference on Software for Critical Systems, New Orleans, Louisiana, United States, 1991, pp. 24-40.
[241] P. Chakravarty and M. P. Singh, "Incorporating Events into Cross-Organizational Business Processes," IEEE Internet Computing, vol. 12, pp. 46-53, 2008.
[242] S. Dasgupta, S. Bhat, and Y. Lee, "An Abstraction Framework for Service Composition in Event-Driven SOA Systems," Proc. IEEE International Conference on Web Services, 2009, pp. 671- 678
[243] N. Leavitt, "Complex-Event Processing Poised for Growth," Computer, vol. 42, pp. 17-20, 2009.
[244] O. Levina and V. Stantchev, "Realizing Event-Driven SOA," Proc. 4th International Conference on Internet and Web Applications and Services, 2009. ICIW '09., 2009, pp. 37-42.
[245] M. Suntinger, et al., "Event Tunnel: Exploring Event-Driven Business Processes," IEEE Comput. Graph. Appl., vol. 28, pp. 46-55, 2008.
[246] W. M. P. van der Aalst, "Formalization and verification of event-driven process chains," Information and Software Technology, vol. 41, pp. 639-650, 1999.
[247] R. von Ammon, C. Emmersberger, and T. Greiner, "Event-Driven Business Process Management," Proc. 2nd International Workshop on Event-Driven Business Process Management, 2009.
[248] E. G. Walters, The Essential Guide to Computing: Prentice Hall, 2001.
[249] H. Mössenböck, "Modular Programming Languages," Proc. Joint Modular Languages Conference, Linz, Austria, 1997, pp. 19-21.
[250] M. Strohmaier and H. Rollett, "Future research challenges in business agility - time, control and information systems," Proc. E-Commerce Technology Workshops, Seventh IEEE International Conference on, 2005, pp. 109-115.
[251] R. Y. Khalaf, Supporting Business Process Fragmentation While Maintaining Operational Semantics: A BPEL Perspective: dissertation.de, 2008.
[252] W. M. P. van der Aalst, M. Pesic, and H. Schonenberg, "Declarative workflows: Balancing between flexibility and support," Computer Science - Research and Development, vol. 23, pp. 99-113, 2009.
[253] G. Regev and A. Wegmann, "Where do Goals Come from: the Underlying Principles of Goal-Oriented Requirements Engineering," Proc. IEEE International Conference on Requirements Engineering 2005, pp. 353-362.
[254] H. Boucheneb and R. Hadjidj, "CTL* model checking for time Petri nets," Theoretical Computer Science, vol. 353, pp. 208-227, 2006.
[255] E. M. Clarke, O. Grumberg, and D. A. Peled, Model Checking. Cambridge, Massachusetts and London, UK: MIT Press, 1999.
[256] Romeo Petri-Net analyzer tool. (2012). Available: http://romeo.rts-software.org/
[257] H. Boucheneb, G. Gardey, and O. H. Roux, "TCTL Model Checking of Time Petri Nets," J. Log. and Comput., vol. 19, pp. 1509-1540, 2009.
[258] G. Gardey, et al., "Romeo: A Tool for Analyzing Time Petri Nets," in Computer Aided Verification, 2005, pp. 418-423.
[259] G. S. Welling, "Designing Adaptive Environmental-Aware Applications for Mobile Computing," PhD thesis, Rutgers University, New Brunswick, 1999.
[260] M. Salehie and L. Tahvildari, "Self-adaptive software: Landscape and research challenges," ACM Trans. Auton. Adapt. Syst., vol. 4, pp. 1-42, 2009.
[261] M. Kapuruge, A. Colman, and J. King, "ROAD4WS – Extending Apache Axis2 for Adaptive Service Compositions," Proc. IEEE International Conference on Enterprise Distributed Object Computing (EDOC), Helsinki, Finland, 2011, pp. 183-192.
[262] W3C, W3C SOAP specifications, http://www.w3.org/TR/soap/
[263] C. Pautasso and E. Wilde, "RESTful web services: principles, patterns, emerging technologies," Proc. 19th international conference on World Wide Web, Raleigh, North Carolina, USA, 2010, pp. 1359-1360.
[264] M. Allman, "An evaluation of XML-RPC," SIGMETRICS Perform. Eval. Rev., vol. 30, pp. 2-11, 2003.
[265] Apache Software Foundation. Orchestration Director Engine(ODE). Available: http://ode.apache.org/
[266] G. Li, V. Muthusamy, and H.-A. Jacobsen, "A distributed service-oriented architecture for business process execution," ACM Trans. Web, vol. 4, pp. 1-33, 2010.
[267] D. A. Chappell, Enterprise Service Bus: O'Reilly, 2004.
[268] D. C. Luckham, The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems: Addison-Wesley Longman Publishing Co., Inc., 2001.
[269] R. von Ammon, et al., "Integrating Complex Events for Collaborating and Dynamically Changing Business Processes," Event Processing Technical Society, 2009.
[270] G. J. Nalepa and M. A. Mach, "Business rules design method for Business Process Management," Proc. Computer Science and Information Technology, IMCSIT '09, International Multiconference on, 2009, pp. 165-170.
[271] P. Jackson, Introduction to expert systems: Addison-Wesley, 1999.
[272] B. Orriens, J. Yang, and M. Papazoglou, "A Rule Driven Approach for Developing Adaptive Service Oriented Business Collaboration," in Service-Oriented Computing - ICSOC 2005, ed, 2005, pp. 61-72.
[273] D. Richards, "Two decades of ripple down rules research," The Knowledge Engineering Review vol. 24, pp. 159-184, 2009.
[274] Defining Business Rules ~ What Are They Really? (March 2012). Available: http://www.businessrulesgroup.org/first_paper/br01c0.htm
[275] S. Kabicher and S. Rinderle-Ma, "Human-Centered Process Engineering Based on Content Analysis and Process View Aggregation," in Advanced Information Systems Engineering, vol. 6741, H. Mouratidis and C. Rolland, Eds., Springer Berlin / Heidelberg, 2011, pp. 467-481.
[276] A.-W. Scheer, "Architecture of Integrated Information Systems (ARIS)," Proc. JSPE/IFIP TC5/WG5.3 Workshop on the Design of Information Infrastructure Systems for Manufacturing, 1993, pp. 85-99.
[277] G. Keller and T. Teufel, SAP R/3 process-oriented implementation: iterative process prototyping: Addison Wesley Longman, 1998.
[278] P. S. Santos Jr., J. P. A. Almeida, and G. Guizzardi, "An ontology-based semantic foundation for ARIS EPCs," Proc. ACM Symposium on Applied Computing, Sierre, Switzerland, 2010, pp. 124-130.
[279] ProM framework. (2009). Available: http://prom.win.tue.nl/tools/prom/
[280] P. Barborka, et al., "Integration of EPC-related Tools with ProM," Proc. 5th GI Workshop on Event-Driven Process Chains, 2006, pp. 105-120.
[281] B. F. van Dongen, W. M. P. van der Aalst, and H. M. W. Verbeek, "Verification of EPCs: Using Reduction Rules and Petri Nets," in Advanced Information Systems Engineering, 2005, pp. 372-386.
[282] V. T. Nunes, C. M. L. Werner, and F. M. Santoro, "Dynamic process adaptation: A context-aware approach," Proc. Computer Supported Cooperative Work in Design (CSCWD), 2011 15th International Conference on, 2011, pp. 97-104.
[283] M. Dumas, W. van der Aalst, and A. H. ter Hofstede, Process Aware Information Systems: Bridging People and Software Through Process Technology: Wiley-Interscience, 2005.
[284] C. Günther, et al., "Change Mining in Adaptive Process Management Systems," in On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, ed, 2006, pp. 309-326.
[285] R. Lu, et al., "Defining Adaptation Constraints for Business Process Variants," in Business Information Systems, ed, 2009, pp. 145-156.
[286] B. Weber, "Beyond rigidity - Dynamic process lifecycle support: A survey on dynamic changes in process-aware information systems," Computer Science - Research and Development, vol. 23, pp. 47-65, 2009.
[287] L. Chen, M. Reichert, and A. Wombacher, "Mining Process Variants: Goals and Issues," Proc. Services Computing, 2008. SCC '08. IEEE International Conference on, 2008, pp. 573-576.
[288] J. King and A. Colman, "A Multi Faceted Management Interface for Web Services," Australian Software Engineering Conference, IEEE Computer Society, vol. 0, pp. 191-199, 2009.
[289] T. Haerder and A. Reuter, "Principles of transaction-oriented database recovery," ACM Comput. Surv., vol. 15, pp. 287-317, 1983.
[290] K. Lee, et al., "Workflow adaptation as an autonomic computing problem," Proc. 2nd workshop on Workflows in support of large-scale science, Monterey, California, USA, 2007, pp. 29-34.
[291] S. Fritsch, et al., "Scheduling time-bounded dynamic software adaptation," Proc. International workshop on Software engineering for adaptive and self-managing systems, Leipzig, Germany, 2008, pp. 89-96.
[292] P. K. McKinley, et al., "Composing Adaptive Software," Computer, vol. 37, pp. 56-64, July 2004.
[293] G. Gardey, O. H. Roux, and O. F. Roux, "State space computation and analysis of Time Petri Nets," Theory Pract. Log. Program., vol. 6, pp. 301-320, 2006.
[294] D. Lime and O. H. Roux, "Model Checking of Time Petri Nets Using the State Class Timed Automaton," Discrete Event Dynamic Systems, vol. 16, pp. 179-205, 2006.
[295] J. Dehnert and P. Rittgen, "Relaxed Soundness of Business Processes," in Advanced Information Systems Engineering, ed, 2001, pp. 157-170.
[296] D. Giannakopoulou and K. Havelund, "Automata-Based Verification of Temporal Properties on Running Programs," Proc. 16th IEEE international conference on Automated software engineering, 2001.
[297] Apache Axis2 - Next Generation Web Services. Available: http://ws.apache.org/axis2/
[298] S. Perera, et al., "Axis2, Middleware for Next Generation Web Services," Proc. International Conference on Web Services, 2006, pp. 833-840.
[299] D. Jayasinghe, Quickstart Apache Axis2: Packt Publishing, 2008.
[300] R. T. Fielding, "Architectural styles and the design of network-based software architectures," PhD thesis, University of California, Irvine, 2000.
[301] S. Kumaran, et al., "A RESTful Architecture for Service-Oriented Business Process Execution," Proc. IEEE International Conference on e-Business Engineering, 2008.
[302] S. N. Srirama, M. Jarke, and W. Prinz, "Mobile web services mediation framework," Proc. 2nd workshop on Middleware for service oriented computing, held at the ACM/IFIP/USENIX International Middleware Conference, Newport Beach, California, 2007, pp. 6-11.
[303] Drools: Business Logic integration Platform. Available: http://www.jboss.org/drools/
[304] H. Liu and H.-A. Jacobsen, "Modeling uncertainties in publish/subscribe systems," Proc. 20th International Conference on Data Engineering, 2004, pp. 510-521.
[305] I. Burcea, et al., "Disconnected operation in publish/subscribe middleware," Proc. IEEE International Conference on Mobile Data Management, 2004, pp. 39-50.
[306] ANTLR parser generator v3. (2011). Available: http://www.antlr.org/
[307] R. Wiener and L. J. Pinson, Fundamentals of OOP and Data Structures in Java: Cambridge University Press, 2000.
[308] Drools Expert. Available: http://www.jboss.org/drools/drools-expert
[309] Sun Developer Network, Java Specification Requests - Java Rule Engine API - JSR 94, 2004, http://www.jcp.org/en/jsr/detail?id=94
[310] C. Forgy, "Rete: a fast algorithm for the many pattern/many object pattern match problem," pp. 324-341, 1990.
[311] R. J. Schalkoff, Intelligent Systems: Principles, Paradigms, and Pragmatics: Jones and Bartlett Publishers, 2009.
[312] L. Amador, Drools Developer's Cookbook: Packt Publishing, 2012.
[313] XSLT Tutorial - W3Schools. Available: http://www.w3schools.com/xsl/
[314] M. Kay, XSLT 2.0 Programmer's Reference (Programmer to Programmer): Wrox, 2004.
[315] D. Brownell, SAX2: O'Reilly, 2002.
[316] World-Wide-Web-Consortium, XML Path Language (XPath), 1999, http://www.w3.org/TR/xpath/
[317] Sun Developer Network, JSR 222 Java Specification Requests - Java Architecture for XML Binding (JAXB) 2.0, 2006, http://jcp.org/en/jsr/summary?id=222
[318] XML Schema. (2001). Available: http://www.w3.org/XML/Schema
[319] ROADdesigner. Available: http://www.swinburne.edu.au/ict/research/cs3/road/roaddesigner.htm
[320] Eclipse GMF. Available: www.eclipse.org/gmf/
[321] D. Parsons, Dynamic Web Application Development Using XML and Java: Cengage Learning, 2008.
[322] B. McLaughlin and J. Edelson, Java and XML: O'Reilly, 2007.
[323] Apache Rampart - An Implementation of the WS-Security specifications. Available: http://axis.apache.org/axis2/java/rampart/
[324] D. Steinberg, et al., EMF: Eclipse Modeling Framework: Pearson Education, 2008.
[325] M. Weske, "Formal Foundation and Conceptual Design of Dynamic Adaptations in a Workflow Management System," Proc. 34th Annual Hawaii International Conference on System Sciences, 2001, pp. 10-20.
[326] B. Weber, W. Wild, and R. Breu, "CBRFlow: Enabling Adaptive Workflow Management Through Conversational Case-Based Reasoning," in Advances in Case-Based Reasoning. vol. 3155, P. Funk and P. A. González Calero, Eds., ed: Springer Berlin / Heidelberg, 2004, pp. 89-101.
[327] M. Kradolfer and A. Geppert, "Dynamic Workflow Schema Evolution Based on Workflow Type Versioning and Workflow Migration," Proc. Fourth IECIS International Conference on Cooperative Information Systems, 1999.
[328] K. Rajaram and C. Babu, "Template based SOA framework for dynamic and adaptive composition of web services," Proc. Networking and Information Technology (ICNIT), 2010 International Conference on, 2010, pp. 49-53.
[329] D. L. Parnas, "On the criteria to be used in decomposing systems into modules," Commun. ACM, vol. 15, pp. 1053-1058, 1972.
[330] S. Rinderle, et al., "Integrating Process Learning and Process Evolution – A Semantics Based Approach," Proc. 3rd international conference on Business Process Management, 2005, pp. 252-267.
[331] W. van der Aalst, T. Weijters, and L. Maruster, "Workflow Mining: Discovering Process Models from Event Logs," IEEE Trans. on Knowl. and Data Eng., vol. 16, pp. 1128-1142, 2004.