
A Knowledge-Based Analysis and Modelling of Dell’s Supply Chain Strategies

Areti Manataki

Master of Science
Artificial Intelligence
School of Informatics
University of Edinburgh
2007

Abstract

Supply Chain Management is becoming more and more important for success in today’s business world. Dell realized this trend from its very first steps and has become one of the most successful PC companies in the world by putting emphasis on its supply chain and by orchestrating its build-to-order and direct sales strategies.

While most of the literature that covers Dell’s business and supply chain strategies is too theoretical, we suggest an analysis at a lower level using knowledge-based techniques. We have therefore developed a business process model (BPM) for Dell that captures its supply chain strategies and that is strategic, business-goal-oriented and executable. In order to make this BPM executable, we have designed and implemented a workflow engine that simulates BPM execution and calculates the related total time and cost. Using the workflow engine we have then run experiments on improving Dell’s BPM and on comparing it with a traditional PC company, thus providing a useful framework for comparing supply chain strategies.

This work is expected, on the one hand, to provide a good insight into Dell’s successful supply chain and, on the other, to demonstrate whether knowledge-based techniques can provide a good analysis of business and supply chain strategies in general.


Acknowledgements

I would first of all like to thank my supervisor, Dr. Jessica Chen-Burger, for her guidance and help throughout this project, as well as for her support in difficult moments of the project period. I would also like to thank Dimitrios Mavroeidis and Ioanna Manataki for their participation in the evaluation procedure of the developed business process model.


Declaration

I declare that this thesis was composed by myself, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.

(Areti Manataki)


Table of Contents

1 Introduction............................................................................................................. 1

1.1 Motivation.................................................................................................... 1

1.2 Gap, Aim and Objectives ............................................................................. 1

1.3 Thesis Outline .............................................................................................. 2

2 Background Information........................................................................................ 3

2.1 Supply Chain Management .......................................................................... 3

2.1.1 Importance of Supply Chain Management ...................................... 6

2.1.2 Supply Chain Management Approaches.......................................... 7

2.1.3 “Hot topics” in Supply Chain Management..................................... 8

2.2 Dell’s Supply Chain Strategies .................................................................. 10

2.2.1 General Information about Dell ..................................................... 10

2.2.2 Direct Sales................................................................................... 11

2.2.3 Build-to-order and integration with suppliers ................................ 12

2.2.4 Other interesting approaches.......................................................... 14

2.3 Business Process Modelling....................................................................... 15

2.3.1 General Information about BPM.................................................... 15

2.3.2 BPM Methods and Tools ............................................................... 17

2.4 Workflow Management ............................................................................. 18

2.4.1 Definition and General Information............................................... 18

2.4.2 Different Approaches and Trends in Workflow Management....... 19

2.5 Fundamental Business Process Modelling Language (FBPML) ............... 20

2.5.1 Notation in FBPML ....................................................................... 20

2.6 Three-Layered Business Process Modelling Approach ............................. 24

3 Dell’s Business Process Model ............................................................................. 25

3.1 Dell BPM – The MIT Process Handbook version ..................................... 28

3.2 Dell BPM – The sequenced MIT Process Handbook version ................... 32

3.2.1 Weaknesses of Dell’s BPM based on MIT Process Handbook ..... 32

3.2.2 The sequenced MIT Process Handbook version of Dell’s BPM ... 33

3.3 Dell BPM – The enriched version.............................................................. 41


3.3.1 Weaknesses of Dell’s BPM sequenced version ............................. 41

3.3.2 Enriched-MIT Process Handbook version of Dell’s BPM ............ 42

4 Workflow Engine .................................................................................................. 56

4.1 Workflow engine design and assumptions................................................. 57

4.1.1 Aim & Objectives .......................................................................... 57

4.1.2 Design conceptualisation & Requirements .................................... 57

4.1.3 Design decisions & Assumptions .................................................. 61

4.2 Logical representation of executable BPM................................................ 65

4.2.1 Junction representation .................................................................. 66

4.2.2 Process representation.................................................................... 68

4.2.3 World state representation ............................................................. 69

4.2.4 Event representation....................................................................... 69

4.3 Workflow engine creation.......................................................................... 70

4.3.1 Workflow engine algorithm ........................................................... 70

4.3.2 Interpretation of workflow engine interesting code ....................... 76

4.4 Discussion and conclusions ....................................................................... 77

5 Experiments ........................................................................................................... 80

5.1 Dell’s BPM simulation............................................................................... 80

5.1.1 Dell’s BPM specification and representation................................. 81

5.1.2 Dell’s BPM simulation results ....................................................... 88

5.1.3 Discussion of Dell’s BPM simulation results ................................ 89

5.2 Experiments & Results............................................................................... 90

5.2.1 Experiment 1: Improve the actual BPM ........................................ 91

5.2.2 Experiment 2: Compare Dell with a traditional computer company ......... 95

5.3 Discussion and conclusions ..................................................................... 103

6 Evaluation ............................................................................................................ 107

6.1 Evaluation Framework............................................................................. 107

6.2 Evaluation of developed BPM ................................................................. 107

6.2.1 Soundness evaluation................................................................... 108

6.2.2 Realism evaluation....................................................................... 108


6.2.3 Completeness evaluation.............................................................. 109

6.2.4 Evaluation of the level of detail ................................................... 110

6.3 Evaluation of developed workflow engine .............................................. 111

6.3.1 Soundness evaluation................................................................... 111

6.3.2 Completeness evaluation.............................................................. 112

6.3.3 Coverage evaluation..................................................................... 113

6.3.4 Ease of use ................................................................................... 113

7 Conclusions and Future Work........................................................................... 114

7.1 Overview.................................................................................................. 114

7.2 Conclusions.............................................................................................. 114

7.3 Future Work ............................................................................................. 115

Bibliography ........................................................................................................... 117

A Workflow Engine Decisions .............................................................................. 122

B Workflow Engine Code...................................................................................... 129

C Demo For Workflow Engine ............................................................................. 138

D Experiments’ Code............................................................................................. 147


List of Figures

Figure 1: Types of channel relations and flows across a supply chain .......... 4
Figure 2: Analysis of SCM System [53] .......... 5
Figure 3: Cost-Responsiveness Efficient Frontier and Zone of Strategic Fit [10] .......... 8
Figure 4: Distribution channel of Dell vs. a traditional company [31] .......... 11
Figure 5: FBPML notation .......... 20
Figure 6: FBPML joint and split junctions .......... 22
Figure 7: Combinations of FBPML junctions .......... 23
Figure 8: Three-layered BPM approach .......... 24
Figure 9: Sample entry of the MIT Process Handbook for “buy” .......... 27
Figure 10: Sample entry of the MIT Process Handbook for “Identify potential customers in custom channel {Dell}” .......... 27
Figure 11: Decomposition of “Create computers to order” (MIT Process Handbook) .......... 28
Figure 12: Decomposition of “Design product and process” (MIT Process Handbook) .......... 29
Figure 13: Decomposition of “Buy standard item to stock” (MIT Process Handbook) .......... 29
Figure 14: Decomposition of “Sell using customized sales channel {Dell}” (MIT Process Handbook) .......... 31
Figure 15: Decomposition of “manage as a creator” (MIT Process Handbook) .......... 32
Figure 16: Decomposition of “Create computers to order” (Sequenced MIT Process Handbook version) .......... 34
Figure 17: Decomposition of “Design product and process” (Sequenced MIT Process Handbook version) .......... 35
Figure 18: Decomposition of “Develop product and process design” (Sequenced MIT Process Handbook version) .......... 35
Figure 19: Decomposition of “Buy standard item to stock” (Sequenced MIT Process Handbook version) .......... 36
Figure 20: Decomposition of “Manage suppliers” (Sequenced MIT Process Handbook version) .......... 36
Figure 21: Decomposition of “Sell using customized sales channel” (Sequenced MIT Process Handbook version) .......... 37
Figure 22: Decomposition of “Identify potential customers in custom channel” (Sequenced MIT Process Handbook version) .......... 38
Figure 23: Decomposition of “Manage as a creator” (Sequenced MIT Process Handbook version) .......... 38
Figure 24: Decomposition of “Manage resources by type of resource” (Sequenced MIT Process Handbook version) .......... 39
Figure 25: Decomposition of “Manage other external relationships” (Sequenced MIT Process Handbook version) .......... 39
Figure 26: Decomposition of “Manage regulatory relationships” (Sequenced MIT Process Handbook version) .......... 40
Figure 27: Decomposition of “Create computers to order” (Enriched version) .......... 43
Figure 28: Decomposition of “Develop product and process” .......... 44
Figure 29: Decomposition of “Develop product and process design” (Enriched version) .......... 44
Figure 30: Decomposition of “Buy standard item to order” (Enriched version) .......... 45
Figure 31: Decomposition of “Share info with supplier” (Enriched version) .......... 46
Figure 32: Decomposition of “Share real-time info via Value Chain” (Enriched version) .......... 46
Figure 33: Decomposition of “Get inventory from supplier” (Enriched version) .......... 47
Figure 34: Decomposition of “Manage supplier” (Enriched version) .......... 47
Figure 35: Decomposition of “Build to order” (Enriched version) .......... 48
Figure 36: Decomposition of “Sell directly” (Enriched version) .......... 49
Figure 37: Decomposition of “Sell directly to home and small business customers” (Enriched version) .......... 49
Figure 38: Decomposition of “Manage home and small business customers” (Enriched version) .......... 50
Figure 39: Decomposition of “Support home and small business customers” (Enriched version) .......... 50
Figure 40: Decomposition of “Get feedback from home and small business customer” (Enriched version) .......... 51
Figure 41: Decomposition of “Sell directly to large business and public sector customers” (Enriched version) .......... 51
Figure 42: Decomposition of “Identify potential corporate customers” (Enriched version) .......... 52
Figure 43: Decomposition of “Manage large business and public sector customers” (Enriched version) .......... 52
Figure 44: Decomposition of “Support large business and public sector customers” (Enriched version) .......... 53
Figure 45: Decomposition of “Support large business and public sector customers” (Enriched version) .......... 53
Figure 46: Decomposition of “Manage as a creator” (Enriched version) .......... 54
Figure 47: Decomposition of “Manage resources by type of resource” (Enriched version) .......... 54
Figure 48: Decomposition of “Manage other external relationships” (Enriched version) .......... 55
Figure 49: Decomposition of “Manage regulatory relationships” (Enriched version) .......... 55
Figure 50: Relation between workflow engine mission, conceptualisation and requirements, design decisions and assumptions .......... 60
Figure 51: Example BPM with a “process branch” .......... 64
Figure 52: An example BPM for the workflow engine .......... 66
Figure 53: Relation of workflow engine with model, process and entity specification .......... 70
Figure 54: Flowchart of our workflow engine .......... 72
Figure 55: Flow state of the execution of a simple BPM using our workflow engine .......... 73
Figure 56: Graphical representation of workflow engine use .......... 78
Figure 57: Decomposition of “Buy standard item to order” (Enriched version) .......... 81
Figure 58: Decomposition of “Sell directly to large business and public sector customers” (Enriched version) .......... 85
Figure 59: Original (sequenced) and parallelized part of “Buy standard item to order” for experiment 1, version 1 .......... 92
Figure 60: Original (sequenced) and parallelized part of “Buy standard item to order” for experiment 1, version 2 .......... 93
Figure 61: myCompany’s BPM for “Buy standard item to stock” .......... 96
Figure 62: Comparison of simulation results of time between Dell’s “Buy standard item to order” and myCompany’s “Buy standard item to stock” .......... 98
Figure 63: Comparison of simulation results of cost between Dell’s “Buy standard item to order” and myCompany’s “Buy standard item to stock” .......... 98
Figure 64: myCompany’s BPM for “Sell via intermediary to business customers” .......... 99
Figure 65: Comparison of simulation results of time between Dell’s “Sell directly to large business and corporate customers” and myCompany’s “Sell via intermediary to business customers” .......... 102
Figure 66: Comparison of simulation results of cost between Dell’s “Sell directly to large business and corporate customers” and myCompany’s “Sell via intermediary to business customers” .......... 102
Figure 67: Comparison of simulation results of time between Dell’s and myCompany’s processes .......... 104
Figure 68: Comparison of simulation results of cost between Dell’s and myCompany’s processes .......... 105
Figure 69: Level of detail of Dell’s BPM .......... 111
Figure 70: Example BPM to illustrate that backward chaining is inappropriate for start time estimation .......... 122
Figure 71: Execution times of processes of Figure 70 .......... 122
Figure 72: Example BPMs to illustrate sophisticated treatment of process waiting time .......... 123
Figure 73: Example BPM to illustrate the need for prior knowledge of events’ occurrence .......... 125
Figure 74: Example BPM with a process branch .......... 126
Figure 75: Example BPMs to illustrate the need for conditions iii) and iv) of execution completion .......... 127
Figure 76: Example BPMs for transforming a BPM containing a loop .......... 128
Figure 77: Example BPM for simulation .......... 138
Figure 78: Graphical representation of workflow engine use framework .......... 139
Figure 79: Screenshot of myProcess.pl .......... 140
Figure 80: Screenshot of myJunctions.pl .......... 141
Figure 81: Screenshot of myWorld.pl .......... 143
Figure 82: Screenshot of workflowEngine.pl .......... 144
Figure 83: Screenshot of Sicstus Prolog environment in Windows .......... 145
Figure 84: Screenshot of run command of workflow engine in Sicstus Prolog .......... 145
Figure 85: Screenshot of BPM simulation output .......... 146


List of Tables

Table 1: Dell’s appropriate coupling of supply chain capabilities with processes and people .......... 15
Table 2: Key principles for Dell’s business model according to Pearlson et al. [41] .......... 15
Table 3: Common characteristics of Business Process definitions .......... 16
Table 4: Contribution of Business Process Modelling, according to Luo et al. [33] .......... 17
Table 5: Model specification of junction-followed-by-junction in our workflow engine .......... 67
Table 6: Process specification of “Buy standard item to order” .......... 82
Table 7: Process specification of “Sell directly to large business and public sector customers” .......... 86
Table 8: Simulation results of Dell’s “Buy standard item to order” .......... 88
Table 9: Simulation results of Dell’s “Sell directly to large business and public sector customers” .......... 89
Table 10: Simulation results of myCompany’s “Buy standard item to stock” .......... 97
Table 11: Simulation results of myCompany’s “Sell via intermediary to business customers” .......... 101
Table 12: Results of experiment 2 for Dell and myCompany .......... 104
Table 13: Completeness Evaluation Checklist for Dell’s BPM .......... 109
Table 14: Completeness Evaluation Checklist for the developed workflow engine .......... 112


Chapter 1

Introduction

1.1 Motivation

The role of Supply Chain Management (SCM) is becoming more and more important in today’s business world. From the purely operational approach to SCM of the 1960s, we have moved to a more integrated and strategic one. Hence, supply chain management is today considered a source of competence and innovation. In the modern business world, companies compete not only through their product range and customer relations, but also through their supply chains.

In this respect, Dell has been held up as the “golden example” of Supply Chain Management. Dell has managed to become one of the most successful PC companies in the world by emphasizing its supply chain (SC) and by aligning its strategies with its design. The innovative ideas of its founder, Michael Dell, and their successful implementation have turned Dell into the most quoted example in the supply chain research community.

Therefore, there is great interest in investigating Dell’s SC strategies, as doing so is expected to highlight more general and innovative issues of SCM.

1.2 Gap, Aim and Objectives

Even though several research efforts have examined Dell’s supply chain strategies, most of the adopted approaches take a strategic, theoretical and abstract view of the subject. On the other hand, the business world is “starving” for examples and for practical, realistic advice on strategies and operations. So, there seems to be a gap between academia and the business world concerning the treatment of SCM.

Our aim is to help fill this gap by providing an analysis at a lower level, that is, by using knowledge-based techniques to analyze and model Dell’s business and SC strategies. After examining these strategies, we will develop a business process model (BPM) for Dell that is strategic, business-goal-oriented and executable. To make the BPM executable we will create a workflow engine for BPM simulation and for calculation of the total execution time and cost.
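To give a flavour of what such an executable representation might look like, the following is a minimal sketch in Prolog (the language in which the workflow engine is later implemented); the predicate names, process names, durations and costs are purely hypothetical and are not the actual specification developed in Chapter 4.

% Hypothetical process specification: process(Name, DurationInHours, CostInPounds).
process(take_customer_order,  1, 10).
process(assemble_computer,    8, 50).
process(ship_to_customer,    48, 30).

% Simulate a purely sequential plan by accumulating duration and cost.
simulate([], 0, 0).
simulate([P|Ps], TotalTime, TotalCost) :-
    process(P, T, C),
    simulate(Ps, RestTime, RestCost),
    TotalTime is T + RestTime,
    TotalCost is C + RestCost.

% Example query:
% ?- simulate([take_customer_order, assemble_computer, ship_to_customer], Time, Cost).
% Time = 57, Cost = 90.

The actual engine of Chapter 4 additionally handles junctions, world states and events, rather than a fixed sequence of processes.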

So, the primary objective of our work is to gain insight into Dell’s supply chain strategies. The secondary objectives are: i) the development of a BPM for Dell that illustrates its SC strategies, ii) the creation of a workflow engine for BPM simulation that is business-context-sensitive, and iii) the simulation of the developed BPM using the workflow engine for further analysis of Dell’s strategies.

1.3 Thesis Outline

The thesis is divided into 7 chapters, starting with this introductory one. The remaining chapters are organised as follows:

Chapter 2 gives an overview of literature related to our work, covering Supply Chain Management, Dell’s supply chain strategies, Business Process Modelling and Workflow Management.

Chapter 3 describes the developed Business Process Model for Dell and explains the relevant decisions.

Chapter 4 covers the development of the workflow engine and illustrates its mission and objectives, as well as the design decisions and assumptions we have made.

Chapter 5 presents the experiments we have conducted on Dell’s BPM using our workflow engine and illustrates the relevant conclusions.

Chapter 6 covers the evaluation of our work.

Chapter 7 summarizes the work done, presents interesting conclusions and suggests future extensions of this work.


Chapter 2

Background Information

In this chapter we will review literature that is relevant to our work and that the reader will find helpful to bear in mind throughout the report. Since our work combines a business-oriented subject with a computer science methodology, it is meaningful to review literature from both fields. So, we will first present some general background information about supply chain management, and then we will review literature that deals with Dell’s supply chain strategies. In the second half of this chapter we will give an introduction to business process modelling and workflow management, and then we will explain the topics that our work builds on, namely FBPML and the three-layered business process modelling approach.

2.1 Supply Chain Management

A supply chain consists of all parties involved, directly or indirectly, in fulfilling a

customer request [10]. In other words, a supply chain (SC) includes all organizations

that collaborate in order to produce and deliver a finished product to the final

customer, as well as the customer himself. An example of a simple, direct SC would

be the one for a bakery in Edinburgh, which contains one supplier, a distributor of

the materials, the bakery and a customer.

Supply chains can differ in size, complexity of relations between the members and

distribution of physical presence. In the following figure two different types of

channel relations can be seen: direct, where the SC consists of one supplier and one

customer of an organisation, and extended, where apart from the above, a supplier’s

supplier, a customer’s customer, etc. are included. In general, supply chains are


dynamic, and involve the flow of information, products and funds between different

stages [30], as shown in the following figure.

Figure 1: Types of channel relations and flows across a supply chain

Supply chain management has the objective of having the right products in the right quantities, at the right time, at minimal cost [13], a situation that would guarantee optimal service levels for the customer and optimal performance for the organizations both as a whole and separately. So, SCM involves the management of flows between and among members of the supply chain in order to maximize total supply chain profitability [10], hence to maximize the total value generated throughout the SC.
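As an illustrative formulation in the spirit of [10] (the notation here is ours and is not taken from the thesis): if V denotes the value that the final product has for the customer, reflected in total revenue, and C_i the cost incurred at stage i of the chain, then the quantity to be maximized is the chain-wide surplus

\[ \text{Surplus} \;=\; V - \sum_{i} C_i , \]

so a decision taken at any single stage matters through its effect on this overall difference, rather than through that stage’s local profit alone.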

Even though the term Supply Chain Management is popular in both academia and the business world, its meaning seems to be ambiguous, as Mentzer et al. [38] suggest. Some authors view SCM as a management philosophy (with a systems approach, strategic orientation and customer focus as key features), others use the term to refer to the set of activities that implement such a philosophy (with integrated behaviour, mutual sharing of information, risks and rewards, and cooperation and integration of processes being the most important ones), while a third approach is in terms of a set of management processes (the definition given above by Chopra et al. [10] adopts this approach). In our work we will adopt the last definition of SCM, while recognizing the existence and importance of the others: an organization needs first to decide on its supply chain strategy and then translate it into actions and processes that fulfil it.

Apart from this, there seems to be confusion between the terms Supply Chain Management and Logistics. In fact, the two terms are very closely related, and they are sometimes used interchangeably. However, logistics involves “the management of order processing, inventory, transportation and warehousing”, and the challenge within a firm is to “coordinate functional competency into an integrated operation focused on servicing customers”, while in the broader supply chain context “operational synchronization is essential with customers and suppliers in order to link internal and external operations as one integrated process” [4]. This is made clear by the following figure by Zografos [53]: logistics involves the operations of a single enterprise, while supply chain management involves the interoperations between the different supply chain members.

Figure 2: Analysis of SCM System [53]

Bearing in mind the different dimensions of SCM, there are three types of SCM decisions: strategic, tactical and operational [39]. Strategic SCM decisions involve the design and configuration of the supply chain, capacity planning and facility location; tactical decisions include supplier selection and evaluation, bidding and contracts; operational decisions include inventory management, production planning and scheduling, and replenishment policy. The research concerning SCM decisions is wide, and it includes the five illustrative decision models of Narasimhan et al. [39], namely buyer-supplier behaviour, sourcing, integrated operations, marketing and logistics models.

2.1.1 Importance of Supply Chain Management

As we have already mentioned in Chapter 1, supply chain management is becoming

more and more important in today’s business world. As Harrison suggests [21],

“enterprises are currently competing through supply chains”, while Gattorna [18]

claims that “supply chains are the business”. But what is it that makes supply chains

so important? To answer this question one has to consider the modern business

environment and Porter’s value chain model [42].

What characterizes today’s business environment is globalization and a rapid pace of change, leading to a high degree of uncertainty and a need for flexibility. The market is more demanding than in the past, and companies compete more on the basis of time and quality. Since logistics-related decisions have a great impact on time, logistics has become important for a company’s competitive advantage. This is also supported by Porter’s value chain analysis [42], which suggests that the primary value chain activities of a firm are inbound and outbound logistics, marketing and sales, service, and operations; if a firm succeeds in implementing these activities in an effective and efficient way, it can gain a considerable competitive advantage.

Additionally, globalization has led to closer relationships among suppliers and customers, thus leading to a tightening of supply chains. Systems thinking is gradually being adopted by companies, as they recognize the need for supply chain cooperation and the importance of the success of the whole supply chain rather than only their own. After all, this agrees with Porter’s value system [42], which suggests that a firm’s value chain is part of a larger value system (including suppliers’ and customers’ value chains); as such, a firm’s success depends not only on its own value chain but also on the success of the value system it belongs to.


Finally, we should take into account the emergence of the Internet and e-commerce. The Internet and new technologies, such as ERP, have facilitated the sharing of information between firms, thus highlighting how much a company can benefit from cooperation with other members of its supply chain. As Lawton et al. [29] suggest, “Internet has extended the benefits of ERP from the value chain of an individual firm to the entire value system of firms and their suppliers and customers”. Relevant literature [11] deals with the other side of the same coin, i.e. how different companies can use the strengths of the Internet in order to improve their supply chain performance.

2.1.2 Supply Chain Management Approaches

Given the increasing importance of SCM for business success, there is a growing interest in SCM in both academia and the business world. Different approaches are adopted to analyse the phenomenon and the relevant problems, and to seek the best choice of supply chain strategy.

One can tackle this problem through modelling – models of successful supply chains

are expected to give insight into SC theory and extend it. Beamon [2] provides us

with a focused review of multi-stage supply chain modelling, differentiating four

types of SC models: deterministic analytical, stochastic analytical, economic models

and simulation models. In the same article an analysis is given on the different SC

performance measures being used, divided into qualitative and quantitative

categories.

It is also common practice to examine and analyse important issues of supply chain

management. For example, it has been argued [10] that “competitiveness and supply

chain strategies must have the same goal”, in other words a company should achieve

a strategic fit by aligning its SC strategies with the customer priorities. Towards this,

three steps should be followed: understand the customer and supply chain

uncertainty, understand the supply chain capabilities and achieve the strategic fit.

As far as the second step is concerned, one should bear in mind the so-called “cost-responsiveness efficient frontier”, which can be seen in the figure below.

Figure 3: Cost-Responsiveness Efficient Frontier and Zone of Strategic Fit [10]

Many researchers are also concerned with the strategic dimension of SCM. Cohen

et al. [12] propose five disciplines for top performance: view your supply chain as a

strategic asset, develop an end-to-end process architecture, design your organization

for performance, build the right collaborative model and use metrics to drive

business success.

2.1.3 “Hot topics” in Supply Chain Management

A term that has become very popular in recent years is that of the “extended enterprise”, which means that each member of the supply chain drops the “single company”

thinking and adopts a systems thinking focusing on the performance of the whole

supply chain, aiming at the satisfaction of the customer [21]. Such a situation

involves the close cooperation of the members of the SC, their coordination and

integration, facilitated by information sharing.

All these issues are often addressed in SCM literature, illustrating the importance of

the extended enterprise (sometimes also called virtual enterprise). For instance, Lee

[30] investigates the significance of integration in modern supply chains and how

value can be created, suggesting that “supply chain integration is neither an easy nor

a simple task – but the payoff can be handsome”. Similarly, Gattorna [18] takes us

from firms’ alignment and alignment in the supply chain to dynamic alignment,

introducing a four-level framework to achieve this: marketplace-, strategy-, culture-

and leadership-related. Thomas et al. [50] argue that “firms are moving from

decoupled decision making process toward more coordinated and integrated design


and control of their components in order to provide goods and services to the

customer at low cost and high service levels” and present three categories of

operational coordination (buy-vendor, production-distribution and inventory-

distribution), as well as strategic planning models that support SC coordination.

A related “hot topic” is the so-called bullwhip effect in supply chains. The bullwhip effect is the amplification of demand order variability as we move up the supply chain [31], and its consequences are excessive inventories,

poor forecasts and poor customer service. Even though different suggestions have

been made on how to deal with this problem, there are still many companies

suffering from bullwhip effect symptoms.
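To quantify this amplification (a standard result from the bullwhip-effect literature, quoted here for illustration and not derived in this thesis): for a single stage that forecasts demand with a p-period moving average and faces a replenishment lead time of L periods, the variance of the orders it places upstream satisfies

\[ \frac{\operatorname{Var}(O)}{\operatorname{Var}(D)} \;\ge\; 1 + \frac{2L}{p} + \frac{2L^{2}}{p^{2}} . \]

For example, with L = 2 and p = 4 the supplier already sees orders with at least 2.5 times the variance of end-customer demand, and the effect compounds at every further stage up the chain.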

Important research is also done in the field of different SCM trends and strategies.

Mass production techniques, pioneered by Ford in the 1920s [29], have been

common practice in the twentieth century and are still seen as the most popular rule

for doing business. A challenge to this model has been the just-in-time (JIT) scheme

introduced by Toyota in the 1960s, which facilitated rapid product innovation, flexible production and cost savings through lower levels of inventory [29]. The term

JIT refers broadly to “a philosophy where the entire supply channel is synchronized

to respond to the requirements of operations or customers” [16], and it involves

manufacturing (with small lot sizes and short lead times) and purchasing (with

frequent deliveries of small lot sizes as the central point). Build-to-order (sometimes also referred to as make-to-order) is another popular operations paradigm, which leads to a

flexible and responsive supply chain [19]. What is also common in today’s supply

chains is the vertical disintegration of production, meaning that companies tend to

focus on their core competence and outsource their logistics and secondary

operations (3PL) in order to save money. Postponement is another common practice,

where there is a delay in decision making and especially in manufacturing of a

product, resulting in better predictions about the end product demand over time [44].


2.2 Dell’s Supply Chain Strategies

2.2.1 General Information about Dell

Dell was founded by Michael Dell in 1984, while he was still a student at the University of Texas at Austin. From its very first steps, the direct sales model was adopted: at the beginning, computers were sold over the phone and were built according to the customers’ specifications [28]. After a short break of using the retail

channel from 1990 to 1994, Dell returned to its direct model and grew rapidly in the

mid-1990s, thus becoming in 1999 the number one PC seller in the United States and number two worldwide [28]. Dell’s success was phenomenal: from a student’s personal company selling no more than 100 computers in its first years of existence, it became a large company with more than 35,000 employees and over 25,000 million dollars in sales in 2000 [26], competing with “giants” such as IBM and HP. No wonder that Dell was the most admired company in the US and the third most admired company worldwide in 2005 [58].

Dell’s success continued in the following years; however, the company could not entirely avoid the general crisis of the PC industry in the new millennium: its growth rate fell, resulting in a fall in its stock price. Nevertheless, Dell has managed to remain a successful company, as its growth rate “continues to outpace the industry as a whole” [28]. Apart from this, Dell has decided to enter new markets and thus expand its product portfolio: servers, workstations, printers and PDAs, as well as flat-screen TVs and digital cameras, are the new challenge for the company. In this light, Dell changed its name in 2003 from “Dell Computer Corporation” to “Dell Inc.”, in order to “reflect the evolution of the company from a computer manufacturer to a company that provides a wide array of technology-related services” [45].

Michael Dell’s strategic choices and his effective way of realizing them have played a significant role in Dell’s success story. The key element of the company’s successful business model is its supply chain management; hence, many theorists of Supply Chain Management have tried to investigate Dell’s SC strategies, and several companies have attempted to “copy” Dell’s business model, without success, however. This fact shows the complexity of Dell’s SC strategies and its unique way of putting them into practice. The core elements of Dell’s business model are its direct sales model, usually referred to as the “direct model”, and the build-to-order strategy.

2.2.2 Direct Sales

The direct model refers to the fact that Dell does not use the retail channel, but sells

its PCs directly to customers through its website, Dell.com, as Figure 4 shows. This

way the intermediary steps that may add time and cost are eliminated, and Dell is

directly linked to its customers.

Figure 4: Distribution channel of Dell vs. a traditional company [31]

In fact, Dell sells directly to all its customers, “from home-PC users to the world’s

largest corporations” [54]. This way it creates a direct relationship with each

individual customer, which turns out to be a great source of competitive advantage.

As Michael Dell has stated, this direct relationship “creates valuable information”

about the customer, thus Dell knows who the end users are, what they have bought

from Dell and what their preferences are, a fact that allows Dell to offer add-on

products and services, and stay, in general, closer to the customer [27]. As Lawton et

al. [29] suggest, this “provides Dell with a wealth of marketing and product

development information”.

Dell distinguishes three rough customer segments: large organizations (large

companies or government institutions), small and medium businesses, and personal

consumers; the mix of customers served is wide (no customer represents more than

1-2% of Dell’s revenues) and there is a focus on large customers (70% of Dell’s sales

corresponds to them) [34]. It is also worth mentioning that segmentation is getting


finer and finer in order to better approach the customers. This fact, in combination

with the direct model, leads to the ability to better forecast demand [34].

Especially in the case of large customers, the above-mentioned direct relationship is

upgraded to virtual integration. With the help of information technology and

traditional face-to-face human contact, customers work with Dell as partners; this

means that “Dell is not going to be just their PC vendor anymore, but their IT

department for PCs”, as Michael Dell claims [34]. There are two main facilities that

bring Dell and its customers closer: Premier Pages and Platinum Councils. Premier

Pages, now called Premier.Dell.com, are customised IT procurement and support

sites for big clients, which let them decide and manage their purchases from Dell,

thus leaving salespeople with a more consultative role. Premier.Dell.com represents a customised sales channel, and as Dell has realised how beneficial this is, it has increased the number of Premier Pages from 1000 in 1998 to 50,000 in 2000 [36].

Platinum Councils are regional meetings of Dell’s largest customers, where

executives, salespeople and technicians discuss their experience with Dell and their

needs and expectations from technology. Additionally, Dell’s Customer Experience

Initiative, Dell Forums [55], the Direct2Dell blog [57] and the IdeaStorm [56]

illustrate the importance that Dell places on its customer relationships.

2.2.3 Build-to-order and integration with suppliers

The new business model that Dell has pioneered within the computer industry has

broken the “we-have-to-develop-everything” existing view of the world and has

managed to highlight component assembly as a respectable core activity of a computer

company, as Michael Dell states [34]. The build-to-order strategy takes this

achievement one step further.

Build-to-order SC as a strategy is defined as “a value chain that manufactures

quality products or services based on requirements of an individual customer or a

group of customers at competitive prices, within a short span of time, by leveraging

the core competencies of partnering firms or suppliers and information technologies,

such as the Internet and WWW, to integrate such a value chain” according to

Gunasekaran et al. [19]. Thus, in the case of Dell, a computer is built only after a


customer has placed an order; then lean manufacturing and just-in-time production

take place. This means that once an order is placed, configuration details are sent to

the manufacturing floor and the assembly begins; once the computer is built and the

requested software is downloaded, it is shipped by a 3PL to the customer.

The choice of a build-to-order and JIT manufacturing procedure has several advantages for Dell. First, inventory levels are very low, leading to low inventory costs and a faster response to demand changes – for instance, when a new microprocessor comes onto the market, Dell can immediately order it from its suppliers, as there is no excess inventory to get rid of first. Also, it is common for customers to pay for an order before Dell pays its suppliers for the product’s components, thus letting Dell operate on a negative cash conversion cycle [27]. Moreover, in this way customized products are offered, and instead of guessing, Dell knows exactly what its customers want before producing it.
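To see what a negative cash conversion cycle means in numbers (the formula below is the standard definition; the example figures are hypothetical and are not Dell’s published numbers):

\[ \text{CCC} \;=\; \text{DIO} + \text{DSO} - \text{DPO} , \]

where DIO is days of inventory outstanding, DSO days of sales outstanding and DPO days of payables outstanding. With, say, DIO = 5, DSO = 30 and DPO = 45, the cycle is 5 + 30 - 45 = -10 days: cash comes in from customers about ten days before the suppliers are paid.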

What is special in the case of Dell is its relationship to its suppliers, which also

facilitates its build-to-order model. Dell fully adopts the approach of the extended

enterprise by viewing its suppliers as an integral part of doing business and a key

factor for its success. “The supplier effectively becomes our partner”, as Michael

Dell states [15].

Dell selects suppliers that have “expertise, experience and the ability to deliver

value” [51], and their performance is regularly evaluated against pre-agreed

measures. In fact, every quarter Dell meets with its suppliers to provide direct

feedback on performance and future expectations [17]. The performance is evaluated

through a scorecard that compares each supplier with its competitors based on cost,

quality, reliability and continuity of supply. As a reward, Dell’s well-performing

suppliers are provided with training and support in order to improve their processes.

In its effort to minimize inventories, Dell demands that its suppliers deliver goods at “high speed” – so instead of orders such as “deliver 5000 to this warehouse every two weeks”, the form of orders is more like “tomorrow morning we need 6,795 to be delivered at door A3 (of the warehouse) by 7 am” [51]. A new notion that Dell has introduced is that of inventory velocity, which focuses on minimizing inventory and maximizing speed. It is worth mentioning that Dell holds an average of less than 6 days of inventory, while the corresponding average for its competitors is 6 weeks [36]. (This fact will later be factored into our model design for simulation, in Chapter 5.) In order to cope with this pace, the main suppliers are required to maintain inventory near or in Dell’s plants; they can either produce nearby or keep inventories in revolvers or supplier logistics centres (small warehouses close to Dell’s assembly plants, shared by the suppliers, who pay the corresponding rent) [24].
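A rough back-of-the-envelope translation of these figures (the 6-day and 6-week averages are from [36]; the conversion into turns is ours): days of inventory correspond to annual inventory turns via

\[ \text{turns} \;\approx\; \frac{365}{\text{days of inventory}} , \]

so under 6 days of inventory means roughly 365/6 ≈ 61 turns per year, against about 365/42 ≈ 9 turns for a competitor holding 6 weeks (42 days) of inventory.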

Of course, all the practices described above require close collaboration between Dell

and its suppliers – mutual trust and sophisticated data exchange are key factors to

achieve it. This would not have been possible without the use of the Internet and IT: the

most important facility towards information sharing is the website

ValueChain.Dell.com which operates as an extranet between Dell and its suppliers.

Through ValueChain.Dell.com Dell’s suppliers can get informed about the level of

inventory in the supply chain, supply and demand data, component quality metrics

and new part transitions [24]. This way, Dell shares demand and production forecasts

with its suppliers, so they can themselves decide on production levels, avoiding the

bullwhip effect.

2.2.4 Other interesting approaches

Apart from the direct model and build-to-order supply chain strategies, analysts have

dealt with other issues that are said to contribute to Dell’s success. As Fugate et al.

[17] indicate, Dell’s secret recipe concerning its supply chain is “the appropriate

coupling of process and people elements” (See Table 1, which is taken from [17]).

This is obvious from Michael Dell’s statements that “our R&D focuses on process

and quality improvements in manufacturing” and that “one of our biggest challenges

is finding managers who can share and respond to rapid shifts” [34]. The above also

agrees with Cutler’s suggestion that the key ingredient of supply chains is people, as

they bring the SC to life [18].

Table 1: Dell’s appropriate coupling of supply chain capabilities with processes and people

SCM Capabilities        | Processes                              | People
Demand Management       | Direct Model / Build-to-order          | Maniacal about execution / Bias for action
Internal Collaboration  | Information Technology                 | Culture of information sharing
Leverage Partners       | Linked partner planning and execution  | Value of personal/business relationships
Business Fundamentals   | Balance sheet and P&L                  | Rewarded for decreasing costs

According to Kraemer et al. [27], there are three central points in Dell’s value web

model: “Dell’s powerful role in coordination and control of the value network, its

close physical integration with its suppliers and business partners, and the

importance of information technology, the Internet and other electronic

communications”. Chopra et al. [11] have evaluated Dell by viewing it as an example

of an e-business that has used the Internet in alignment with its supply chain strategies.

Pearlson et al. [41] view Dell as a zero-time organisation and identify four key

principles apart from build-to-order and direct model, which can be seen in the

following table and which we will not analyse any further.

Key principles for Dell’s business model:
- Build-to-order
- Direct sales
- Exchange inventory for information
- Velocity, value and volume
- Constant change
- Criticality of coordination

Table 2: Key principles for Dell’s business model according to Pearlson et al. [41]

2.3 Business Process Modelling

2.3.1 General Information about BPM

There is a wide range of definitions for a business process. According to Davenport

[14], “a process is a structured, measured set of activities designed to produce a

specified output for a particular customer or market. Implying a strong emphasis on

how work is done, it is a specific ordering of work activities across time and place,

with a beginning, an end, and clearly identified inputs and outputs”. This is the

definition that we will adopt for a business process, emphasizing the existence of

some inputs and outputs, and the creation of value for the customer. However, as


Lindsay et al. [32] suggest, business processes are not adequately defined, a fact that

leads to confusion in the academic and especially in the business sector. For

example, there seems to be some confusion about whether a process description

refers to the end product or not, and whether the analysis of a business process is

appropriate for decision-making modelling.

Even though there seems to be controversy concerning the definition of business

processes, there are some common characteristics of BP definitions in the literature, as

Kavakli and Loucopoulos [25] suggest (see Table 3). Moreover, some important

issues about business processes are decomposition, specialization, the existence of

alternative processes and temporal relations between actors, objects and some

process [8].

Common characteristics of BP definitions:
- well identified products and customers
- goals
- several activities involved
- collaboration between organisational actors

Table 3: Common characteristics of Business Process definitions

Business Process Modelling (BPM) captured the attention of the business world in the mid-1990s and has become increasingly popular since then. As Aguilar-Savén

suggests [46], business process models are mainly used either to learn, make

decisions about the process or develop business process software. Kalpic et al. [23]

emphasise the importance of process modelling “as a tool that allows the capturing,

externalisation, formalization and structuring of knowledge about enterprise

processes”, thus enabling knowledge management. In other words, even though

business processes are nothing new to enterprises, their modelling makes their

existence explicit and provides a common ground for discussion.

On the other hand, it has been argued that process management has failed to fulfil its

promises. In fact, Benner et al. [3] have shown that “pressures towards process

management stunt a firm’s dynamic capabilities” and that “process management

activities are beneficial for organizations in stable contexts, but not in dynamic

innovation and change”. However, the contribution of business process modelling

and management is still widely recognised for the reasons mentioned in Table 4. It should also be noted that BPM is usually the first step towards Business Process Reengineering

16

(BPR), a very popular management approach aiming at the improvement of the

performance of business processes with respect to cost, quality, service and speed.

Contribution of BPM:
Common process representation
Common understanding of process
Analysis of process behaviour and performance
Basis for process improvement and management
Process guidance and execution support/automation
Process control

Table 4: Contribution of Business Process Modelling, according to Luo et al. [33]

2.3.2 BPM Methods and Tools

There are several BPM methods and tools used in academia and the business world.

Popular methods include the Workflow Reference Model, PIF, IDEF3, UML and Petri-Nets. As far as tools are concerned, they fall into two categories, according to Chen-Burger and Robertson [8]: the first deals with the capturing and report generation of specific modelling methods (e.g. RBPL, Paradigm Plus and BP WIN), while the second also provides simulation facilities (e.g. BPSimulator, Simprocess and ProSim/ProCap).

It is sometimes not clear which modelling method is appropriate for a project. Luo et

al. [33] suggest a framework for selecting BPM methods based on BPM objectives.

In this framework we start with the BPM objectives (communication, analysis or

control) and continue with the required perspectives of modelling methods (object,

activity or role) and their required characteristics (formality, scalability, enactability

and ease of use). The latter two are matched to the different modelling methods, thus

leading to the selection of the most appropriate one.
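To give a flavour of how such an objective-driven selection framework could be operationalised, the following Prolog sketch (Prolog being the language later used for our workflow engine) encodes a toy mapping from objectives to required perspectives, required characteristics and candidate methods. All of the particular mappings shown are illustrative assumptions for the example, not Luo et al.'s actual tables.

    % Toy encoding of an objective-driven method selection framework.
    % The specific mappings below are illustrative assumptions only.
    requires_perspective(analysis, activity).
    requires_characteristic(analysis, formality).
    requires_characteristic(analysis, enactability).

    % offers(Method, Perspective, Characteristics).
    offers(idef3, activity, [formality, scalability]).
    offers(petri_nets, activity, [formality, enactability]).

    % A method is a candidate if it supports the required perspective and
    % provides every required characteristic of the given objective.
    candidate_method(Objective, Method) :-
        requires_perspective(Objective, Perspective),
        offers(Method, Perspective, Provided),
        forall(requires_characteristic(Objective, C), member(C, Provided)).

    % Example query: ?- candidate_method(analysis, M).  gives M = petri_nets.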

Business process modelling is a complex procedure, as it is both a knowledge-intensive and a social activity: in modern enterprises, different stakeholders have different views on the business operations, and the analyst needs to take these into account [1]. Apart from

the above mentioned BPM methods, it is worth mentioning Checkland’s soft systems

methodology [5], which is a human activity-based formalism. In addition, there have

been efforts towards a strategy-driven BPM approach, such as the article by Nurcan

et al. [40]. In this work, a goal-driven approach is adopted in order to “establish a

close relationship between the whys and the whats” and a relevant map


representation system is provided. Soffer et al. [49] also deal with the integration of

goals into process modelling by distinguishing goals from soft-goals or business

measures. Kavakli and Loucopoulos [25] suggest a different way of relating goals to

business process modelling: Under the larger framework of Enterprise Knowledge

Development, an enterprise goal submodel is created and this is linked to the

enterprise process submodel. The connection between the two is as follows:

Goals related to a business process are presented in a hierarchical way, with the top

business goal being realized by the process, and the leaf node goals being realized by

some role (related to some actor of the process) of the business process.
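As a small illustration of this linking idea, the following Prolog sketch records a goal hierarchy whose top goal is realised by a process and whose leaf goals are realised by roles of that process. All names here are hypothetical examples introduced for illustration, not taken from Kavakli and Loucopoulos.

    % Hypothetical goal hierarchy linked to a process and its roles.
    subgoal(satisfy_customer_order, assemble_correct_product).
    subgoal(satisfy_customer_order, deliver_on_time).

    realised_by_process(satisfy_customer_order, create_computers_to_order).
    realised_by_role(assemble_correct_product, assembly_operator).
    realised_by_role(deliver_on_time, logistics_coordinator).

    % A (ground) goal is a leaf goal if it has no subgoals; every leaf goal
    % should be realised by some role of the process realising the top goal.
    leaf_goal(G) :- \+ subgoal(G, _).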

2.4 Workflow Management

2.4.1 Definition and General Information

Workflow is concerned with the “automation of procedures where documents,

information or tasks are passed between participants according to a defined set of

rules to achieve, or contribute to, an overall business goal” [52]. Workflow

technology is thus the technology that facilitates this automation. It is obvious that

workflow management is closely related to BPM. According to Mentzas et al. [37],

Workflow Management involves process modelling, process reengineering, and

workflow optimization and automation. A Workflow Management System (WfMS) is

a system that “completely defines, manages and executes workflows through the

execution of software whose order of execution is driven by a computer

representation of the workflow logic”.

Workflow Management Systems have been around since the early nineties and have become very popular in both academia and the business world, as they support the analysis and optimisation of business operations. It is widely believed that the application of WfMS improves organizational performance; in fact, Reijers and van der Aalst [43] have found that they significantly decrease the lead, service and wait times of business process execution and increase the utilization of the human resources involved. Recognizing their increasing importance and the need for

standardisation, the Workflow Management Coalition was founded in 1993. Its


goal is to facilitate the use of workflow technologies across vendor products and to

develop standard architectures for workflow specification to allow the

interoperability of various WfMSs [37].

2.4.2 Different Approaches and Trends in Workflow Management

According to Mentzas et al. [37], there are three basic categories of workflow

techniques: a) communication-based, b) activity-based, and c) hybrid techniques.

The first type is more human-oriented, and it assumes that the objective of the

workflow is to improve customer satisfaction. Activity-based techniques focus on

modelling the work instead of modelling the commitments among humans, and

hence model the tasks involved in a process and their dependencies. A combination

of the two is what we call hybrid techniques.

Current research topics in workflow management include object-oriented WfMS,

flexibility in workflow modelling and transactional WfMS. Furthermore, the

application of Artificial Intelligence is a new trend in workflow management and it is

believed that “an intelligent WfMS with self-learning capability will be able to

capture the information needed to construct or complete process definitions

automatically during enacting” [47]. The use of AI search techniques can also be

found in the work of Jaeger et al. [22], where a framework for automatic

improvement of workflows is suggested. Another hotspot in workflow management

research is web-based WfMS; an example can be found in Han and Park [20], where

similarities between workflow and web services are addressed.


2.5 Fundamental Business Process Modelling Language (FBPML)

Fundamental Business Process Modelling Language (FBPML) is a visual modelling

language which is a merger of two recognised process modelling languages, PSL and

IDEF3. The combination of these two languages guarantees rich visual modelling

methods on the one hand and formal semantics (e.g. description of business

processes in logical sentences) on the other hand. This language is designed to

support both software and workflow system development, and it is characterised as

standard, accessible, collaborative, precise, executable and formal [7].

2.5.1 Notation in FBPML

Figure 5 depicts the notation of FBPML, as it is shown using KBST-EM (Knowledge

Based Support Tool for Enterprise Models) [6].

Figure 5: FBPML notation

The notation of FBPML consists of main nodes, junctions, links and annotations. We

will now briefly describe each of these:


Main nodes:

Activity: denotes the type of process that may be decomposed or specialised

into subprocesses

Primitive Activity: denotes a leaf node activity that may not be further

decomposed or specialised

Role: describes the “role” that an enabler plays in the context of the described

activities

Time Point: indicates a particular point in time during the enactment of a

process model

Links:

Precedence Link: places a temporal constraint on process execution (e.g. in

Figure 5 activity b may not start execution before the execution of activity a

is finished)

Synchronisation Bar: places a temporal constraint between two time points

(e.g. in Figure 5, the begin time of activity d should be synchronised with the

end time of activity c)

Junctions:

Start and Finish junctions: indicate the logical starting and finishing points of

a process

And and Or junctions: these can be fan-in (many-to-one relationship) or fan-

out (one-to-many relationship), as Figure 6 shows, and they can be broken

down into:

o And Joint: indicates that all of the preceding activities that have been

triggered must finish execution before the following activity can be

executed. So, if from activities A, B and C of Figure 6-a, A and B are

triggered, D can start execution once both A and B have finished

execution.

o Or Joint: indicates that only one of the preceding activities is required

to be triggered and finished before the following activity can be

executed. So, if from activities A, B and C of Figure 6-b, A completes

execution first (e.g. at timepoint T), D can start execution at T.


o And Split: indicates that all of the following activities (here, B, C and D of Figure 6-c) must be completed after the preceding activity (here, A) is finished

o Or Split: indicates that at least one of the following activities (B, C or D of Figure 6-d) must be completed after the preceding activity (here, A) is finished

Figure 6: FBPML joint and split junctions

To make the above clear, we will provide an explanation of the combined use of branching junctions, as these are shown in Chen-Burger et al. [7], and based on our experience with FBPML. The four different junction combinations can be seen in the following figure, where each process is assigned a symbol of type √ or X, which denotes whether the process has been triggered (i.e. √ if it has been triggered and X if not), together with the time at which its execution finishes (e.g. 3).


Figure 7: Combinations of FBPML junctions

In Figure 7-a an AND-AND junction can be seen, which means that all

processes B, C, D must finish execution (because of the and-split junction),

and E starts execution when all have finished execution (because of the and-

joint junction). So, for the example provided where all B, C and D are

triggered, and the maximum completion time is 7, E can start execution at 7.

An OR-OR junction can be seen in 7-b, and it denotes that at least one of the

processes B, C, D must finish execution (because of the or-split junction), and

E starts execution at the minimum completion time of one of them (because

of the or-joint junction). So, for the example provided where only B and D

are triggered, and only B finishes execution at time 3, E can start execution at

3.

Figure 7-c gives an example of AND-OR junction, which means that all B, C

and D must finish execution (because of the and-split junction), but E starts

execution at their minimum completion time (because of the or-joint

junction). So, for the example provided where the minimum completion time

is 3, E can start execution at 3.

An OR-AND junction can be seen in 7-d, and it denotes that at least one of

the processes B, C, D must finish execution (because of the or-split junction),

and E starts execution when all triggered processes among B, C and D have

finished execution (because of the and-joint junction). So, for the example

provided where only B and C are triggered, and the maximum completion

time is 5, E can start execution at 5.
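The timing behaviour just described can be captured compactly in Prolog, the language used for our workflow engine in Chapter 4. The sketch below is only illustrative: predecessor processes are represented as proc(Name, Triggered, FinishTime) terms, a representation introduced here for the example rather than one defined by FBPML, and max_list/2 and min_list/2 are taken from SWI-Prolog's standard list library.

    % and_joint_start(+Predecessors, -Start): the following activity starts
    % at the maximum finish time of the triggered predecessors.
    and_joint_start(Preds, Start) :-
        findall(T, member(proc(_, yes, T), Preds), Times),
        Times \= [],
        max_list(Times, Start).

    % or_joint_start(+Predecessors, -Start): the following activity starts
    % at the minimum finish time among the triggered predecessors.
    or_joint_start(Preds, Start) :-
        findall(T, member(proc(_, yes, T), Preds), Times),
        Times \= [],
        min_list(Times, Start).

    % Example matching Figure 7-a (B, C and D triggered, finishing at 3, 1 and 7):
    % ?- and_joint_start([proc(b,yes,3), proc(c,yes,1), proc(d,yes,7)], S).
    % S = 7.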


Annotations:

Idea Note: records textual information that is relevant to, but beyond the

scope of, a process model

Navigation Note: records the relationships between diagrams in a model

2.6 Three-Layered Business Process Modelling Approach

The three-layered business modelling approach has been developed in the Artificial

Intelligence Applications Institute of the University of Edinburgh, and it supports the

development of workflow systems from business process models and provides the

means to describe higher level business processes, objectives and policies [9]. The

three layers of the approach, which can be seen in Figure 8, are described as follows:

The Business Layer describes business requirements of an organisation,

processes that are to be carried out by the organisation and relevant needed

information. The related documentation is higher-level descriptions that may

be formal or informal.

The Logical Layer expresses a logical description of business processes, so it

is a semi-formal business process model that describes business operations in

ordered activities.

The Implementation Layer gives a detailed, step-by-step algorithmic

procedure for software modules that implement processes described in the

logical layer.

Figure 8: Three-layered BPM approach (Business Layer, Logical Layer, Implementation Layer; Business, Logical and System Requirements)


Chapter 3

Dell’s Business Process Model

In this section we will move from the business layer of the Three-Layered Business Process Modelling Approach, that is, the textual description of Dell’s SC strategies, to the logical layer; hence, we will illustrate the business process model (BPM) that we

have developed for Dell and how we came up with this model. First we have to make

clear that the BPM created involves Dell and does not show directly the business

processes along the whole supply chain. However, one can easily identify some basic

activities that involve cooperation with suppliers and customers, thus supply chain

management activities. Our BPM shows all activities, basic and supporting ones,

that take place when Dell creates a computer for some customer, hence the whole

process in which Dell “does business” in order to create and sell a computer. Note

that we focus only on the creation of computers and not other Dell products such as

PDAs or digital cameras.

We have based our business process model on the MIT Process Handbook [60],

which has provided us with a generic BPM for Dell, called: “Dell – Create computers

to order” [59]. The MIT Process Handbook was a challenging research project of the

Massachusetts Institute of Technology in the 90s that took about 10 years to be

completed. The project’s basic aim was to “develop a comprehensive framework for

organizing large amounts of useful knowledge about business”. With respect to this

approach there are three primary kinds of entries in the Handbook: (1) generic

models of typical business activities (e.g., buying, making, and selling) that occur in

many different businesses, (2) specific case examples of interesting things particular

companies have done, and (3) frameworks for classifying all this knowledge [35]. As

stated by the project members [35], the result of this work is an on-line “process

handbook” which can be used to help people: (1) redesign existing business

processes, (2) invent new processes (especially those that take advantage of


information technology), and (3) organize and share knowledge about organizational

practices. In our case, it is the last contribution of the Handbook that has been of use

to us.

As we have already mentioned, the MIT Process Handbook provides us with generic

business processes, as well as with specific processes of companies such as Dell. In

the figures below we can see a sample entry of a generic process, “buy”, as shown in

the Handbook, and a process specific for Dell, “Identify potential customers in

custom channel {Dell}”, respectively. As one can notice, every process entry in the

handbook includes the following information about the process:

name (or the title of the process)

description: this can be short or long, depending on the importance of

relevant information; it explains what the process involves as well as how

some processes are being executed; especially in the case example processes,

supplementary interesting information is included such as historical data,

sources and links to other web pages.

parts: the subprocesses are given as a list, without any defined sequence. These parts may include subparts, some of which may in turn

include further subparts. The full decomposition can be extracted by

navigating through the process handbook.

properties: date of last modification of the process entry

related processes: these include specializations and bundles (processes that

show how a process can be done or what the object of the process is,

respectively, hence all other ways that the same processes can be done), uses

(all other processes that use the described process) and generalizations (all

other processes that are like the described process)
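For later machine processing, an entry of this kind could be recorded as a small set of Prolog facts. The sketch below is merely one possible encoding that we use for illustration; the predicate names, the example date and the particular related-process links shown are our own assumptions, not the Handbook's format.

    % One possible fact-based encoding of a Handbook-style process entry.
    entry_name(p41, 'Identify potential customers in custom channel {Dell}').
    entry_description(p41, 'Identify corporate customers and individual customers within them.').
    entry_parts(p41, ['Identify potential corporate customers',
                      'Identify potential individual customers']).
    entry_modified(p41, date(2007, 1, 1)).   % "properties": last modification (example value)
    entry_related(p41, generalization, 'Identify potential customers').
    entry_related(p41, uses, 'Sell using customized sales channel {Dell}').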


Figure 9: Sample entry of the MIT Process Handbook for “buy”

Figure 10: Sample entry of the MIT Process Handbook for “Identify potential customers in custom channel {Dell}”


3.1 Dell BPM – The MIT Process Handbook version

We will now briefly describe the BPM that the MIT Process Handbook has

developed for Dell. The full version can be found in [59].

[Figure content: 0 Create computers to order, with parts 1 Design product and process; 2 Buy standard item to stock; 3 Configure to order using internet; 4 Sell using customized sales channel; 5 Manage as a creator]

Figure 11: Decomposition of “Create computers to order” (MIT Process Handbook)

The above figure shows the decomposition of the basic Dell process, “Create

computers to order”. Note that no sequence is implied by the Handbook, but only the

parts of the parent process are given. Hence, in this case, the subprocesses of create

computers to order are: design product and process, buy standard item to stock,

configure using internet, sell using customized sales channel, and manage as a

creator, but no sequence is specified. So, according to the MIT Process Handbook, in

order to create a computer to order, Dell needs to have already designed the

corresponding product (as well as the manufacturing process) and bought the needed

components from its suppliers; the order is configured through Dell’s web site and

the computer is sold to the customer (here, a corporate customer is implied) via a

customized web site called Premier Page. Throughout the whole process, Dell

manages as a creator (this involves general business tasks, such as strategy, and

managing resources and relationships). We should make clear that the numbering of

the processes above does not imply any sequence between the processes, but it has

been introduced for matters of ease (especially for decomposition); the same holds

for the numbering of all the processes of the MIT Process Handbook version that

follow.


Further decomposition of each subprocess can be seen below:

[Figure content: 1.1 Identify needs or requirements; 1.2 Identify product capabilities; 1.3 Develop product and process design (1.3.1 Develop the characteristics of a product/service; 1.3.2 Develop the process of producing a product/service)]

Figure 12: Decomposition of “Design product and process” (MIT Process Handbook)

The process “design product and process” consists of three parts, as shown above, and results in the identification of the product design and the corresponding manufacturing process. What is interesting here is that there is integrated product and

process design, meaning that the process and the product are designed in parallel. As

far as the parts are concerned, we should mention the following: The identification of

needs or requirements can be done from the view of either the consumer or the

producer, and it involves the specification of the usability parameters of a resource.

The product capabilities are identified such that the product will be usable by the

consumer. Also, the process “develop product and process design” is further

decomposed into 2 subparts, as shown in the figure above.

[Figure content: 2.1 Identify potential sources; 2.2 Identify own needs; 2.3 Place order; 2.4 Receive; 2.5 Select supplier; 2.6 Pay; 2.7 Manage suppliers (2.7.1 Evaluate suppliers; 2.7.2 Manage supplier policies; 2.7.3 Manage supplier relationships)]

Figure 13: Decomposition of “Buy standard item to stock” (MIT Process Handbook)


According to the MIT Process Handbook, buying a standard item to stock is always done “in advance of a particular need or custom requirement of a particular

instance” [60]. So, in the case of Dell, and according to the MIT Process Handbook

(note that in section 3.3 we will not agree with the Handbook for this process), the

company buys standard items (usually of low cost) and keeps them as inventory in

order to use them in the computer assembly procedure some time later. The process

“buy standard item to stock” consists of 7 parts, which can be seen above. (Note that,

as mentioned above, the numbering does not imply any ordering between the

processes, but it is set according to the order the processes are listed in the

Handbook.) So, in order to buy standard items to stock, Dell needs to identify a need

for some item, identify the potential sources for this item, choose a supplier, place an

order, receive it and pay for it; in the meantime, Dell also manages the supplier for

this item, meaning that it manages the supplier policies and their relationships and

evaluates them. Dell configures its customers’ orders using the internet, as all

relevant information is obtained by its web site, Dell.com, where customers place

their orders. The MIT Process Handbook does not suggest any further decomposition

for this process.

The MIT Process Handbook treats the next process, “sell using customized sales channel”, as Dell-specific and of great importance, while all the other processes we have already mentioned are generic processes that apply to the

Dell case. Dell uses the Internet as its basic sales channel, and it has created

customized web sites for its big clients, the Premier Pages (we have covered these in

section 2.2).

In Figure 14 we can see the parts of this process. So, in order to sell computers using

the Premier Pages, Dell needs to identify potential customers (here implying corporate

customers and individual customers within these organizations, such as employee

groups) and their needs, inform them about the different possible PC configurations

and prices, obtain an order from a customer and the corresponding payment and

deliver the ordered product; in the meantime Dell manages its customer

relationships. As we have mentioned in section 2.2, the use of Premier Pages, and


thus online customized sales and support, differentiates Dell from other PC

companies and provides a competitive advantage.

[Figure content: 4.1 Identify potential customers in custom channel (4.1.1 Identify potential corporate customers; 4.1.2 Identify potential individual customers); 4.2 Identify potential customers’ needs; 4.3 Inform potential customers; 4.4 Obtain order; 4.5 Deliver product or service; 4.6 Receive payment; 4.7 Manage customer relationships]

Figure 14: Decomposition of “Sell using customized sales channel {Dell}” (MIT Process Handbook)

As Dell creates computers to order, it needs to perform some general business

activities in order to “survive” and succeed. The related process has the name

“manage as a creator” and has four parts, which can be seen in the following figure. In

other words, Dell needs to develop its strategy, manage its resources and its external

relationships, as well as manage learning and change within the company, so as to

guarantee its sustainability. Some of these four parts are further decomposed, and in

one case we reach two levels of decomposition. A few words about some of the parts

of “manage as a creator”: Regulatory relationships include relationships with governments for taxation

or duties; the corresponding process (here numbered as 5.4.1) is further decomposed

into two processes, manage tax and duty compliance and manage legal compliance,

which involves regulations other than tax and duty. Examples of societal

relationships include charitable work, donation, etc.


[Figure content: 5.1 Develop strategy; 5.2 Manage resources by type of resource (5.2.1 Manage human resources; 5.2.2 Manage physical resources; 5.2.3 Manage financial resources; 5.2.4 Manage information resources); 5.3 Manage learning and change; 5.4 Manage other external relationships (5.4.1 Manage regulatory relationships, with 5.4.1.1 Manage tax and duty compliance and 5.4.1.2 Manage legal compliance; 5.4.2 Manage competitor relationships; 5.4.3 Manage societal relationships; 5.4.4 Manage environmental relationships; 5.4.5 Manage stakeholder relationships)]

Figure 15: Decomposition of “manage as a creator” (MIT Process Handbook)

3.2 Dell BPM – The sequenced MIT Process Handbook version

3.2.1 Weaknesses of Dell’s BPM based on MIT Process Handbook

As we have already mentioned, and as one can see from the figures of section 3.1,

the MIT Process Handbook provides us with a business process model for Dell

which has no sequence; thus, we can only see the parts of each process, i.e. its

decomposition. This means that we do not know whether two parts of the same

process (siblings) are executed one after another or in parallel/concurrently. This is


important information, as it takes into account information dependency between the

processes, and has a great impact on the time and cost of the execution of the

processes. After all, two of the most widespread business process modelling

methods, UML Activity Diagram and IDEF3 Process Model, use precedence links.

Hence, if we wish to have an insight into Dell’s supply chain strategies using a BPM,

we get only incomplete information from the BPM provided by the MIT Process

Handbook. So, our next step is to use the model we have presented in the above

section in order to create a new version that incorporates sequence. The decision

on the sequence between processes will be made based on relevant literature

concerning Dell, as well as on known general business practices. However, it is

beyond the scope of this section to provide detailed justification for the sequence

between every business process pair.

3.2.2 The sequenced MIT Process Handbook version of Dell’s BPM

We have decided to use the Fundamental Business Process Modelling Language

(FBPML), since it is designed to support both software and workflow

system development, as we have already mentioned in section 2.5. Therefore, it is

expected to be helpful for the development of our workflow engine in our project.

Before presenting our sequenced MIT Process Handbook version of Dell’s BPM, we

should mention that in some cases the numbering of the processes of the earlier

version has changed. This happens because in the previous version we introduced numbers for the processes (these were not part of the Handbook’s Dell case) for ease of reference, and hence they do not imply any sequence. However, in this section, where

process sequence is introduced, numbering does actually have some meaning

concerning sequence. Since in our previous version the numbers of the processes

were defined based on the order of the processes in the parts’ list, it is natural that this order may not match the sequence of the processes in the BPM, resulting in a change of numbering.

In Figure 16 we can see the decomposition of the basic Dell process, “Create

computers to order” after taking ordering into account. In order to decide on the


sequence between the five processes, we have considered the case of creating one

computer to order, that is, designing one product (one computer in our case) and process, buying the standard items needed for the assembly of this computer, configuring one order for the specific computer, selling it and managing as a creator (of this product). In order to

create a computer to order, the first step is to design the corresponding product and

process (process 1). Once the product and the manufacturing process are designed,

Dell can start the procedure for its physical creation and sale (processes 2, 3 and 4),

but also it has to start managing as a creator in order to succeed (process 5). Hence,

after process 1 we have an AND-junction that indicates that process 5 takes place

concurrently with processes 2, 3 and 4. After Dell buys the needed items to stock

(process 2), it configures an order for this product (process 3) and after the

completion of this procedure, Dell can sell it using its customized sales channel

(process 4). As already explained, Dell has to manage as a creator in the meantime

(process 5).

Figure 16: Decomposition of “Create computers to order” (Sequenced MIT Process

Handbook version)
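To give a flavour of how such a sequenced decomposition could be written down for later execution, the following Prolog facts encode the ordering just described, with an and-split/and-joint pair making process 5 concurrent with processes 2 to 4. The predicate names and junction identifiers are illustrative assumptions for this sketch, not part of FBPML or of the workflow engine described in Chapter 4.

    % Illustrative encoding of the sequenced top-level decomposition.
    part_of(create_computers_to_order,
            [design_product_and_process, buy_standard_item_to_stock,
             configure_to_order_using_internet, sell_using_customized_channel,
             manage_as_a_creator]).

    % Precedence links; and_split_1/and_joint_1 model the parallel branch.
    precedence_link(design_product_and_process, and_split_1).
    precedence_link(and_split_1, buy_standard_item_to_stock).
    precedence_link(and_split_1, manage_as_a_creator).
    precedence_link(buy_standard_item_to_stock, configure_to_order_using_internet).
    precedence_link(configure_to_order_using_internet, sell_using_customized_channel).
    precedence_link(sell_using_customized_channel, and_joint_1).
    precedence_link(manage_as_a_creator, and_joint_1).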

The sequence of the parts of process 1, “Design product and process”, is quite trivial:

In order to design a product and its manufacturing process, Dell first has to identify

the needs or requirements that this product will fulfil. Then, and based on those

needs, the product capabilities are defined, and finally the product and process design

is developed by taking into account the previous two steps. Hence, the three

processes, 1.1, 1.2 and 1.3, are sequential.


Figure 17: Decomposition of “Design product and process” (Sequenced MIT Process

Handbook version)

As we have already mentioned in the previous section, what is interesting about

process 1 is that the product and process design is integrated, meaning that the

process and the product are designed in parallel. This is made obvious in the

sequenced version of the decomposition of “Develop product and process design”

(Figure 18), where the two processes, “develop the characteristics of a

product/service” (process 1.3.1) and “develop the process of producing a

product/service” (process 1.3.2) are parallel.

Figure 18: Decomposition of “Develop product and process design” (Sequenced MIT

Process Handbook version)

In Figure 19 we have the decomposition of process 2, “Buy standard item to stock”,

where the decision on the sequence is also quite trivial. The whole procedure begins

when Dell realizes it has a need for a standard item (process 2.1); the next step is to

look for potential sources (process 2.2), thus suppliers, and select one of them

(process 2.3). Then activities of two types take place at the same time: activities that

have to do with the specific order for the standard item (processes 2.4, 2.5 and 2.6),

and activities of managing the suppliers (process 2.7). Since these take place in

parallel, we have introduced two AND-junctions, one fan-out and one fan-in. As far

as processes 2.4, 2.5 and 2.6 are concerned, these are sequential for the following


reason: Once a supplier is selected for the needed item, an order is placed, then the

requested items are received and finally Dell pays for its order to the supplier (pay

could actually precede receive, but we have assumed that Dell first receives and then

pays for its orders, at least for the current version of our BPM).

Figure 19: Decomposition of “Buy standard item to stock” (Sequenced MIT Process

Handbook version)

Managing suppliers involves evaluating them (process 2.7.1), managing supplier

policies (process 2.7.2) and managing supplier relationships (process 2.7.3). All three

processes are executed at the same time, a fact that explains the AND-junctions of

the figure below.

Figure 20: Decomposition of “Manage suppliers” (Sequenced MIT Process Handbook

version)

Now we will explain the sequenced version of the decomposition of process 4, “Sell

using customized sales channel”, which can be seen in Figure 21. At first glance it resembles, to some extent, the sequenced version of process 2, “Buy standard item

to stock”. Like in process 2, most of the parts of process 4 are sequential, except for


4.7, which involves managing (just like 2.7) and takes place at the same time with

some other parts. In a few words, the sequenced version of the decomposition of

process 4 has as follows: In order to sell using its customized sales channel, Dell

needs first to identify potential customers (process 4.1); once these are identified two

processes start execution: manage relationships with these customers (process 4.7)

and identify potential customers’ needs (process 4.2). Once potential customers’

needs are identified, they are informed about the product range and prices (process

4.3) and then orders are obtained from the customers (process 4.4). The next step is

to receive the payment from the customers and only after this is completed will Dell

deliver the ordered product.

Figure 21: Decomposition of “Sell using customized sales channel” (Sequenced MIT

Process Handbook version)

The first subprocess of “sell using customized sales channel”, “Identify potential

customers in custom channel” (process 4.1), is further decomposed, as the following

figure shows. Its two parts are sequential, as Dell first identifies

potential corporate customers (process 4.1.1), such as Boeing, and then identifies

potential individual customers within the corporate customers (process 4.1.2),

usually the employees of the customer organization (e.g. managers, purchasing

agents and end users in Boeing).


Figure 22: Decomposition of “Identify potential customers in custom channel”

(Sequenced MIT Process Handbook version)

The sequenced version of the decomposition of “Manage as a creator” can be seen in

Figure 23 and involves four subprocesses. As the figure shows, in order to manage as

a creator, Dell needs first to develop its strategy (process 5.1) – without a strategy the

organization cannot decide how to deal with managing resources and other business

issues. So, after the strategy is decided, Dell can concurrently manage resources by

type (process 5.2), manage learning and change (process 5.3) and manage other external relationships (process 5.4).

Figure 23: Decomposition of “Manage as a creator” (Sequenced MIT Process Handbook version)

The process “Manage resources by type” is further decomposed into four processes

which are all parallel, as there is no dependency between them. The relevant

decomposition can be seen in the following graph.


Figure 24: Decomposition of “Manage resources by type of resource” (Sequenced MIT

Process Handbook version)

The decomposition of process 5.4, “Manage other external relationships”, is similar,

as all its subprocesses are parallel.

Figure 25: Decomposition of “Manage other external relationships” (Sequenced MIT Process Handbook version)


Similarly, the two subprocesses of “Manage regulatory relationships” (process 5.4.1)

are parallel, as there is no dependency between them.

Figure 26: Decomposition of “Manage regulatory relationships” (Sequenced MIT Process Handbook version)


3.3 Dell BPM – The enriched version

3.3.1 Weaknesses of Dell’s BPM sequenced version

Even though the second version of Dell’s business process model, which

incorporates sequence, is more useful than the original one, there are still some weaknesses that make it inappropriate for extracting Dell’s supply chain

strategies. These will be discussed in the following paragraphs.

First, it is still too generic for such a specific focus area as supply chain management.

As it is completely based on the MIT Process Handbook, many of the parts of “Dell

– Create computers to order” are not Dell-specific but generic, thus they are used in

many other case examples. For instance, the process “buy standard item to stock”

(process 2) is generic and does not reflect how Dell cooperates with its suppliers in

order to buy standard items; this is even more obvious with its part “receive”

(process 2.5), as it does not show any of the relevant interesting findings from our

literature review (e.g. the case where Dell does not receive the items at its plant, but

it gets them from the supplier’s plant). This fact shows another weakness of the

current BPM: it is sometimes too high-level, thus leaving out interesting information

for Dell’s supply chain strategies.

Third, considering the literature about Dell’s supply chain strategies, there seem to

be some mistakes in the MIT Process Handbook’s Dell case. The most important one is

the fact that Dell does not, in general, buy to stock but it buys to order. This is a key

issue for Dell’s cooperation with suppliers and organization of assembly preparation,

and a very important point in Dell’s SC strategies. However, according to the MIT

Process Handbook (process 2), Dell buys standard items to stock, which seems to be

an important mistake.

There also seems to be an important gap in the MIT Process Handbook business process model, namely that the manufacturing/assembly process of computers is not thoroughly treated, as it is not further decomposed. However, this is an important process for Dell, and the way Dell goes about it demonstrates Dell’s ability towards speed and the combination and alignment of supply chain strategies with manufacturing; therefore, it would be preferable to present computer assembly in a more detailed way.


Another gap results from the focus on corporate customers that the MIT Process

Handbook has adopted; however, we believe that customer segmentation is an

important aspect and should be incorporated in the final BPM.

Last, because of the high-level approach of the MIT Process Handbook, the resulting

BPM is, in some cases, too simplistic. For example, it seems to consider only the

case where no errors occur (no exception handling) and it does not take into account

several decisions that may have to be made (e.g. what happens when, during the design of some product, it turns out that this product is not profitable for Dell?).

3.3.2 Enriched-MIT Process Handbook version of Dell’s BPM

All the above weaknesses of the current BPM have led us to the development of a

new business process model, which we will call “Enriched-MIT Process

Handbook BPM”, or “enriched BPM” in short. Even though we have described the

main weak points of the Dell BPM according to the MIT Process Handbook, we still

recognize the quality and value of this project; hence, our final version of Dell’s

business process model will still be based, for the most part, on the MIT Process

Handbook. Our aim with this new version of Dell’s BPM is to have a complete BPM

that shows at a high level the supply chain strategies of Dell. Towards this goal, we

will enhance the current BPM by:

introducing and decomposing the process “assemble to order”

specializing some generic processes on the Dell case

further decomposing some given processes

introducing new processes that reflect Dell’s supply chain strategies

substituting “buy standard item to stock” with the alternative process “buy

standard item to order”

dealing with exception handling

clarifying some titles of given processes

We will now move on to the presentation and explanation of Dell’s enriched business

process model, which is the final version for our project.


In Figure 27 we can see the decomposition of “Create computers to order”. If we

compare it with the corresponding figure of the sequenced-MIT Process Handbook

BPM (Figure 16), we will see that they differ. Even though the previous “Create

computers to order” decomposition may seem more logical and understandable, the

new version had to be changed because of the different decomposition of each

subprocess. This means that the processes 2, 3 and 4 are interleaved (e.g. Dell

receives an order from a customer (process 4), suppliers provide Dell with needed

inventory (process 2) and computers are assembled according to customer’s order

(process 3), then extra needed items, such as a monitor, may be received from a

supplier (process 2), and finally the order is delivered to the customer (process 4)),

and thus they are executed in parallel.

Figure 27: Decomposition of “Create computers to order” (Enriched version)

The decomposition of “Design product and process” has been enriched and it now

contains a process for feasibility and profitability checking (process 1.4): If the

product to be designed seems unprofitable or not feasible to manufacture, then its

design should be abandoned (this explains the arrow to “finish” in the figure below).

We also distinguish two alternative cases for needs or requirements identification:

Customers’ requirements may involve a new product (process 1.2) or an already

existing product (process 1.1).


Figure 28: Decomposition of “Design product and process” (Enriched version)

The decomposition of process 1.5 is the same as in the previous version of Dell’s BPM,

as the following figure shows.

Figure 29: Decomposition of “Develop product and process design” (Enriched version)

The process “Buy standard item to order” has replaced our previous “Buy standard

item to stock” and it is very important for Dell’s supply chain strategies. As Figure

30 shows, the procedure begins with the identification of Dell’s needs on some item

(process 2.1), then potential suppliers are identified (process 2.2), from which one is

selected (process 2.3), and then contracts and the replenishment environment are


negotiated (process 2.4). This is an important step, as Dell needs to make sure that its

cooperation with the supplier will be as desired, and that the supplier will agree to Dell’s high expectations. After the completion of this step, two processes start

execution: supplier management (process 2.8) and information sharing (process 2.5).

Process 2.5 has been introduced, as information sharing between Dell and its

suppliers is the cornerstone of their successful cooperation. Through this information sharing, suppliers provide inventory to Dell (process 2.6) according to Dell’s demand forecasts and inventory levels. Finally, the suppliers are paid

(process 2.7).

Figure 30: Decomposition of “Buy standard item to order” (Enriched version)

Below the decomposition of “Share info with supplier” is shown. Its three

subprocesses are executed concurrently and involve: sharing real-time information on inventory levels and orders via Value Chain (process 2.5.1), sharing demand forecasts (process 2.5.2) and sharing general business information (process 2.5.3). The latter has to do with information of general business or product interest, such as new trends in the market (e.g. Sony could inform Dell about high-selling monitors, or Dell could let Intel know about a big customer’s requirements for processors).


Figure 31: Decomposition of “Share info with supplier” (Enriched version)

As we have already mentioned, real-time info exchange involves inventory levels

and end customer orders. This can also be seen in the following figure that shows us

the decomposition of process 2.5.1, “Share real-time info via Value Chain”. Its two

subprocesses are executed in parallel.

Figure 32: Decomposition of “Share real-time info via Value Chain” (Enriched version)

The decomposition of process 2.6, “Get inventory from supplier”, which can be seen

in Figure 33, is different from our previous version of Dell’s BPM. Here we distinguish two alternatives: inventory may be received at Dell’s plant (process

2.6.1) (this is the case where Dell’s supplier delivers inventory without waiting for a

specific order from Dell) or Dell may place an order (process 2.6.2) and get the


inventory from the supplier’s plant (process 2.6.3). The latter covers the case of

standard items that are not required for plant assembly, such as monitors.

Figure 33: Decomposition of “Get inventory from supplier” (Enriched version)

“Manage supplier” is similar to our previous version of Dell’s BPM, the only difference being that process 2.8.2, “Provide feedback and support”, has been introduced

after “Evaluate supplier”. This is an important process of Dell’s cooperation with its

suppliers, as it helps them reach Dell’s high standards and improve their

performance. So, Dell gives its suppliers detailed feedback on their performance

according to a scorecard, and they are supported in overcoming difficulties and further improving themselves.

Figure 34: Decomposition of “Manage supplier” (Enriched version)


Now we will show how process 3, “Build to order”, is decomposed. First, the components that are required for plant assembly are identified and obtained (process 3.1), then the hardware is assembled in Dell’s plant (process 3.2) and standard software is loaded (process 3.3). Then either customer-specific software is loaded (e.g. some helpdesk software specific to a customer such as British Airways) or we move on to product testing (process 3.5). In the first case, the product is also tested after customer-specific

software loading. If the tested product is in good condition, we move on to packaging

(process 3.6); otherwise, we return to process 3.1 and start product assembly once

again. If the product that is being built contains items that do not need to be

assembled in Dell’s plant, such as monitors, Dell can get them from the supplier’s

plant (process 3.7), in order to match them later with all the other order components

(process 3.8). The whole procedure completes after the different order components are matched.

Figure 35: Decomposition of “Build to order” (Enriched version)
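The branching just described (proceeding to packaging on a successful test, or returning to reassembly on failure) can be sketched as a simple routing rule in Prolog. The predicate and the world-state representation below are hypothetical and serve only to make the control flow concrete; they are not the encoding used by our workflow engine.

    % Hypothetical routing rule for the test outcome in "Build to order".
    next_process(test_product, package_product, World) :-
        member(test_result(pass), World).
    next_process(test_product, identify_and_get_components, World) :-
        member(test_result(fail), World).

    % Example: ?- next_process(test_product, Next, [test_result(fail)]).
    % Next = identify_and_get_components.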

Another important process for Dell’s supply chain strategies is “Sell directly”

(process 4). In this version we distinguish two customer segments, and thus two

alternatives: selling directly to “small customers” (process 4.1), that is, home and small business customers, and selling directly to “big customers” (process 4.2), meaning large business and public sector customers.

Figure 36: Decomposition of “Sell directly” (Enriched version)

Figure 37 shows the decomposition of process 4.1, “Sell directly to home and small

business customers”. The whole procedure begins with the identification of customer

segments (e.g. high-tech small businesses, students, etc.) (process 4.1.1) and their needs

(process 4.1.2). Based on these needs, Dell identifies appropriate and valid computer

configurations (process 4.1.3) and informs the customers about them (process 4.1.4).

After this step is completed, two processes start execution: managing home and small

business customer relationships (process 4.1.5) and the ordering procedure, which

includes processes 4.1.6, 4.1.7 and 4.1.8 sequentially.

Figure 37: Decomposition of “Sell directly to home and small business customers”

(Enriched version)


Process 4.1.5 is further decomposed in two parallel subprocesses, “Support home and

small business customers” (process 4.1.5.1) and “Get feedback from home and small

business customers” (process 4.1.5.2), as Figure 38 shows.

Figure 38: Decomposition of “Manage home and small business customers” (Enriched

version)

These are further decomposed, as the following two graphs show. Home and small

business customers are offered technical support (process 4.1.5.1.1) and customer

service (process 4.1.5.1.2) via the Internet and phone. Dell also gets feedback from them

about their customer experience (process 4.1.5.2.1) and general feedback about its

products and business performance (process 4.1.5.2.2) through forums and blogs,

such as Direct2Dell and IdeaStorm.

Figure 39: Decomposition of “Support home and small business customers” (Enriched

version)


Figure 40: Decomposition of “Get feedback from home and small business customer”

(Enriched version)

Now we will present how process 4.2, “Sell directly to large business and public

sector customers”, is decomposed. As the figure below shows, the decomposition is

very similar to the one of 4.1, “Sell directly to home and small business customers”,

so there is no need to explain it in detail.

Figure 41: Decomposition of “Sell directly to large business and public sector customers” (Enriched version)

Figure 42 shows how process 4.2.1 is decomposed. So, in order to identify potential

corporate customers, Dell needs first to identify key personnel of the potential

corporate customer (process 4.2.1.1), such as the IT manager, and then identify

employee groups within the customer (process 4.2.1.2), such as helpdesk personnel,

managers and end users.


Figure 42: Decomposition of “Identify potential corporate customers” (Enriched version)

Management of large business and public sector customers involves offering support

(process 4.2.8.1) and getting feedback from them (process 4.2.8.2), which take place

in parallel. These two processes are further decomposed, as the following figures

show. The support that is offered to “big clients” is divided into technical support (process 4.2.8.1.1), customer service (process 4.2.8.1.2) and general business support (process 4.2.8.1.3), and these are offered via Premier Pages, phone, Account

Team and Platinum Councils. As in the case of “small customers”, the feedback that

Dell gets from its “big clients” involves customer experience (process 4.2.8.2.1) and

Dell’s products and business practices (process 4.2.8.2.2).

Figure 43: Decomposition of “Manage large business and public sector customers”

(Enriched version)


Figure 44: Decomposition of “Support large business and public sector customers”

(Enriched version)

Figure 45: Decomposition of “Get feedback from large business and public sector customers” (Enriched version)

The following figures show the decomposition of process 5, “Manage as a creator”.

Since it is exactly the same as in the sequenced-MIT Process Handbook BPM

version, we will not explain our decisions on the decomposition.


Figure 46: Decomposition of “Manage as a creator” (Enriched version)

Figure 47: Decomposition of “Manage resources by type of resource” (Enriched

version)


Figure 48: Decomposition of “Manage other external relationships” (Enriched version)

Figure 49: Decomposition of “Manage regulatory relationships” (Enriched version)


Chapter 4

Workflow Engine

In this section we will describe the workflow engine that has been developed in

Prolog in order to create an executable version of Dell’s BPM. It is business context

sensitive, meaning that it calculates the total time of a business process’s execution,

as well as the total cost involved. Our workflow engine does not do any validation or

verification, as we assume that the BPM provided is correct; instead, it focuses on

measuring business goals expressed in terms of time and cost, as this information is

important in order to argue about supply chain strategies.

The idea behind the workflow engine implementation can be summarized in the

following three sentences: The workflow engine is initialized by the model and

process specification, and after the event and entity database is loaded, the BPM

execution begins. The BPM runs in a forward chaining manner and keeps an explicit

time record; so at each timepoint actions may be executed, junctions may be reached

and processed, and processes may start execution. When there are no more processes

to be executed, the BPM finishes execution and provides us with information about

the total cost and time.
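The following self-contained Prolog sketch illustrates this forward-chaining idea on a toy model. It is not the thesis implementation: the example processes, durations, costs and precedence facts are invented for illustration, and the sketch relies on SWI-Prolog's standard list predicates (max_list/2, sum_list/2, subtract/3). It steps an explicit clock forward, starts every process whose predecessors have finished, and finally reports the total time and cost.

    % Toy model: durations, costs and precedence links (illustrative values).
    duration(design, 2).  cost(design, 10).
    duration(buy,    3).  cost(buy,     5).
    duration(build,  1).  cost(build,   4).
    precedes(design, buy).
    precedes(buy, build).

    % simulate(-TotalTime, -TotalCost): run all processes to completion.
    simulate(TotalTime, TotalCost) :-
        findall(P, duration(P, _), Pending),
        step(0, Pending, [], Finished),
        findall(T, member(_-T, Finished), Ts),
        max_list(Ts, TotalTime),
        findall(C, (member(P-_, Finished), cost(P, C)), Cs),
        sum_list(Cs, TotalCost).

    % step(+Clock, +Pending, +Finished, -Result): at each timepoint, start
    % every pending process whose predecessors have all finished; otherwise
    % advance the clock by one.
    step(_, [], Finished, Finished).
    step(Clock, Pending, Finished, Result) :-
        Pending \= [],
        findall(P, (member(P, Pending), ready(P, Clock, Finished)), Startable),
        (   Startable = []
        ->  Next is Clock + 1,
            step(Next, Pending, Finished, Result)
        ;   finish_all(Startable, Clock, Finished, Finished1),
            subtract(Pending, Startable, Pending1),
            step(Clock, Pending1, Finished1, Result)
        ).

    % A process is ready when every predecessor has finished by the current time.
    ready(P, Clock, Finished) :-
        forall(precedes(Q, P), (member(Q-T, Finished), T =< Clock)).

    finish_all([], _, Finished, Finished).
    finish_all([P|Ps], Clock, Finished, Result) :-
        duration(P, D),
        End is Clock + D,
        finish_all(Ps, Clock, [P-End|Finished], Result).

    % Example query: ?- simulate(Time, Cost).  gives Time = 6, Cost = 19.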

In order to make the above description clearer, we will first describe the design

decisions and the assumptions we have made, as well as some interesting issues in

workflow engine creation. We will continue with the discussion about the logical

representation of the business process model for execution, such as model and

process specification, and the entity and event database representation. Then we will

explain the workflow engine algorithm and some important relevant points. Finally

we will draw interesting conclusions concerning the workflow engine creation and

use, and we will show examples of its actual use.


4.1 Workflow engine design and assumptions

4.1.1 Aim & Objectives

The aim of the workflow engine is to simulate a business process model in order to

check strategic decisions, in our case Dell’s supply chain strategies. The main

objective is to provide an executable version of a business process model, which will

give us an insight into the actual behaviour of the BPM, thus offering a more

complete and realistic view of the object of discourse. After all, the workflow engine

is the “medium” that takes us from the logical layer to the implementation layer of

the Three-Layered Business Process Modelling Approach [9], which we have

adopted for our work and which has been explained in the second chapter. Another

important objective of the workflow engine creation is to support the analysis and

reasoning about business strategies; through explicitly measuring time and cost that

is related to business process execution, the workflow engine will adopt a business

context sensitive approach and facilitate comparison between different strategies, and

hence different business process model conceptualisations.

4.1.2 Design conceptualisation & Requirements

As we have already mentioned, the main use of our workflow engine will be to

simulate a business process model in order to reason about the related business

strategies. Checking business strategies makes sense only when checking the

“normal case” of business process execution, as simulating an exceptional or wrong

business process model would contribute little to arguing about strategic

decisions. So, “checking the average case” means mainly two things: First, that

everything is expected to “go right” in the business process execution, thus events

take place at the right/usual moment, the initial state is correct and guarantees the

process execution, actions execute at the pre-specified timepoint, etc. Second, this

means that the duration and cost assigned to each process are their average, and thus expected, values.


Since we are only interested in the simulation of the usual and correct business

process model, there is no actual need for validation or verification. After all, why

check the correctness of a BPM if we already know it is correct? So, our workflow

engine does not provide validation or verification, as there is no need for this, under

the assumption that the provided BPM is correct. The logical path that led us to this

assumption can be seen in Figure 50, coloured in violet.

Additionally, reasoning about business strategies has another impact on the use and

design of the workflow engine: In order to analyze and compare different strategies

through BPM execution, one should “reduce” the business operations (and the

corresponding time and cost) to the “single case”. For example, if we want to

compare the computer assembling procedure of two companies, such as Dell and

IBM, then it makes more sense to compare the time and cost related with assembling

one computer. This design requirement has two implications: First, that the modeller

should already know the cost and time of each “single case” business process and

second, that there are no other needed variables for the workflow engine apart from

the time and cost of each process. Hence, other variables like number of suppliers, or

proportion of big and small customers, are beyond the scope of our workflow engine.

The largest part of the workflow engine design conceptualisation involves general

and standard workflow engine issues. Since the workflow engine will be used to

make a BPM executable, it will have to conform to some general workflow engine

requirements. This means that it will have to be able to execute processes, keep

track of the workflow state (e.g. know which processes have been executed so far),

understand the current world state (e.g. know which entities and data hold at each

timepoint) and update it according to the actions executed, and give some feedback

to the user about the business process execution results. To make this clearer, some

general requirements for our workflow engine are the following:

i. Understand the business process model, hence understand and “execute” the

different junctions of the model.

ii. Understand the definition of business processes and execute them according to

their special conditions (trigger conditions, preconditions, etc.) and the

current world state.


iii. Understand and update the current world state according to the actions and

processes executed.

iv. Keep track of the workflow state, thus “remember” which junctions have

been reached and which processes have been executed.

v. Inform the user about the business process execution status.

The first requirement means that junction definition has to be formally specified, so

that it is understandable by the workflow engine. Since we have used FBPML for

Dell’s business process model, our workflow engine will also be based on FBPML

for junction definition and execution. So, it should understand what the “start” and

“finish” junctions signify, and distinguish between “and-split”, “or-split”, “and-joint”

and “or-joint” junctions, and execute them according to their definition. (This topic is

covered thoroughly in 4.2.1.)

Similarly, the second requirement means that processes have to be formally defined.

This definition should include data important for their execution, such as trigger

conditions, preconditions and actions they invoke.

In order to make the executable version of a BPM realistic, we should incorporate the

description of the world in our workflow engine. Since the world changes according

to the workflow state we are in (e.g. what processes and actions have been executed),

our workflow engine should be able to update the world state accordingly.

The fourth requirement is an important “control mechanism” of a workflow engine,

as it guarantees that we correctly move from one process to another instead of getting

stuck in some workflow state or re-executing processes. Also, keeping track of which

processes have been executed is necessary for total cost measuring, and it is

interesting information to give to the user as feedback.

The last requirement is actually imposed from the user-side, as the users of a

workflow engine need to know in real time what is happening during business process

simulation. So, after starting BPM simulation, it would be useful to provide

information such as current timepoint and workflow state; it is also essential to

inform the user when the BPM execution is completed and the total cost involved.


[Figure 50 content:
MISSION: Executable (implementation layer); realistic, real-time behaviour; business context: measure time and cost, support analysis about different strategies.
DESIGN CONCEPTUALISATION - REQUIREMENTS:
 SPECIFIC: calculate the total time and cost for a BPM execution; model only the normal case; model only the single case.
 GENERAL: model and execute junctions; execute processes; represent and update the world state; control the workflow state; give feedback to the user.
ASSUMPTIONS: the BPM provided is correct; junctions connect only processes between them, hence no junction is connected with another junction except for the case where a start or finish junction is involved; each process can execute only once; there is prior knowledge about events' occurrence; the minimum process duration is 1 and the minimum cost is 0.
DESIGN DECISIONS: forward vs. backward chaining; simplistic vs. sophisticated treatment of waiting time; explicit vs. implicit time measurement; implicit vs. explicit treatment of junctions; which junction cases are covered; deterministic vs. non-deterministic occurrence of events; treat a process branch as a block vs. as a sequence of processes; dynamically update vs. keep track of the world state for each timepoint.]

Figure 50: Relation between workflow engine mission, conceptualisation and requirements,

design decisions and assumptions


All the above-mentioned requirements can be seen in Figure 50, which presents how

the purpose and mission of the workflow engine affects its design conceptualisation

and requirements, as well as some assumptions that are related to them.

4.1.3 Design decisions & Assumptions

Now that we have made clear what our workflow engine is expected to do, we will

discuss how to deal with some design issues and why relevant decisions have been

made. In addition, assumptions that are based on design decisions will be explained.

The framework that takes us from design decisions to the corresponding assumptions

is presented in Figure 50.

Forward chaining vs. backward chaining algorithm

One of the first design decisions of the workflow engine algorithm is whether to

adopt a forward chaining or a backward chaining approach. The backward chaining

approach, even though not very popular for workflow engine implementation, may

seem convenient for the chosen programming language, Prolog, because of its

recursive “nature”. So, the reasoning for a simple BPM composed of two processes

would be the following: The BPM execution is completed if the finish junction is

reached, which holds if the last process is executed, which requires the previous

process to have been successfully executed, and so on. However, our analysis and

experimentation with a backward chaining workflow engine algorithm has shown

that such a choice makes the estimation of process execution starting time quite

complicated, and even incorrect in some cases (see Appendix A). On the other hand,

a forward chaining algorithm is a more “natural” and correct approach, as it can help

us track the state of the BPM execution in each timepoint. Therefore, a forward

chaining algorithm is chosen for the creation of our workflow engine. The idea of a

forward chaining algorithm is the following: The execution of a BPM is completed

if, starting from the start-junction and by successfully executing the following

processes, we reach the finish-junction.

Simplistic vs. sophisticated treatment of process waiting time

In daily business life it is quite common that processes are triggered later than

expected or in not easily predicted timepoints, especially when the trigger condition


has to do with external factors. When such processes are executed in parallel with

other processes, then it may become difficult to calculate the starting time of some

process following them (see Appendix A for a thorough analysis). So, here arises the

question of how we want to deal with such waiting time – in a simplistic or a

sophisticated way? Since one of the basic requirements of our workflow engine is to

measure time, we have decided to adopt a more sophisticated and flexible

approach. This means that we will estimate process starting time by taking the

corresponding waiting time, if any, into account, and not by neglecting it, as the

simplistic approach would suggest.

Explicit time measurement and real time BPM execution vs. estimation of

start and finish time

Another crucial design decision is how to treat time in BPM execution: implicitly, by

estimating each process’s start and finish time, or explicitly, by representing the

world and workflow state in each timepoint? Even though the second option may be

more costly in the case of processes with long duration, it actually turns out that it

guarantees a more precise and correct process start time estimation, especially in case

we want to model the waiting time for some process (see Appendix A). Since we

have decided to treat waiting time in a more sophisticated way, we are obliged to

measure time explicitly throughout BPM execution.

Junction cases covered

As we have already mentioned, junction definition and differentiation in our

workflow engine will be based on FBPML. So, our workflow engine should be able

to deal with all six cases of Figure 10. In these cases a junction connects only

processes between them; in fact, this relation may be either one-to-many or many-to-

one, and our workflow engine is expected to model and execute both types. We have

decided not to cover the case where a junction is connected with another junction,

(e.g. an and-joint junction being followed by an or-split junction), as that would

make the workflow engine algorithm quite complicated. However, we have

recognized the need for modelling a start-junction followed by an and-split or an or-

split junction, and accordingly a finish-junction preceded by an and-joint or or-joint

junction, as they appear in many of Dell’s BPMs. These cases will be dealt with by

“inventing” a new junction which is actually a combination of the two, thus a

“start/and”, a “start/or”, a “finish/and” and a “finish/or” junction (this topic is

covered thoroughly in section 4.2.1). So, the corresponding assumption is that

junctions connect only processes between them, except for the case where a start or a

finish junction is connected to some other junction.

Process instantiation

Most workflow engines require a process to be instantiated in order to be eligible for

execution, and we will adopt this approach as well. We will regard a process to be

instantiated when it is reached through the workflow state, thus when the junction

preceding it has been reached and processed. Then, this process instance may be

checked for the special conditions (trigger conditions and preconditions) that specify

whether it can start execution. Note that for simplicity we will assume that each

process can be instantiated and executed only once, thus our workflow engine will

not provide any loop-handling.

Prior knowledge of events’ occurrence

Like most workflow engines do, our workflow engine will relate trigger conditions

of processes with event occurrences. Even though in real business life it may not be

known when events may happen, in our workflow engine, for simplicity, there will

be complete prior knowledge about which events will take place and when, as

opposed to when such info is provided in real-time. This is an assumption that lets us

have some control over waiting time of processes (see Appendix A for a relevant

example). For simplicity, we will have prior information of all events that will occur

throughout the BPM execution, either internal or external, and even if some of them

are a “product” of some process execution. This means that we will not include event

occurrence as a post-condition (action) of processes, as such information will already

be known from the event-occurrence list.
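As a minimal illustration of this assumption, the sketch below shows how prior event knowledge might be encoded and queried in Prolog, using the event_occ/3 format that is introduced in section 4.2.4; the example entries and the trigger_satisfied/2 helper are hypothetical and only convey the idea.

% All events are known in advance, together with the timepoint at which
% they occur (hypothetical example entries).
event_occ(e1, lowXinventory, 4).
event_occ(e2, arriveInventoryX, 6).

% Hypothetical helper: a trigger condition of the form exist(event_occ(Name))
% is satisfied at time T if the corresponding event is scheduled at or before T;
% the atom true denotes "no trigger condition".
trigger_satisfied(true, _T).
trigger_satisfied(exist(event_occ(Name)), T) :-
    event_occ(_Id, Name, EventT),
    EventT =< T.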

Treat process branches as a block vs. as a sequence of processes

It is quite common in business operations that a post-process of a fan-out junction is

followed by another process, before a fan-in junction is met, thus creating some kind


of a “process branch”. Such an example BPM can be seen in the following figure,

where the process branch includes processes p1, p2 and p3.

Figure 51: Example BPM with a “process branch”

In such cases arises the question of how to treat the process branch: as part of a block

of processes, where they are either all executed or none, or as a sequence of

independent processes, where the execution of one process does not directly depend

on the execution of the other? The “block” approach would mean for the above

example that the or-joint junction is reached only if all processes p1, p2, p3 and p4

are executed, while the process sequence approach would mean that the or-joint

junction is reached when one of p3 or p4 finishes execution. In order to simplify the

description of the process model for our workflow engine, as well as the algorithm

itself, we will adopt the process sequence approach. However, one should keep in

mind that the processes of the same branch are not completely independent (see

Appendix A for further analysis), e.g. the trigger condition of the last process, here

p3, is related to the execution result of the preceding processes, here p1, p2.
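To make the chosen approach concrete, the following hypothetical junction specification describes a BPM like the one in Figure 51 under the process-sequence approach, using the junction notation introduced later in section 4.2.1; the surrounding processes p0 and p5 and the and-split fan-out are assumptions, since the exact junctions of Figure 51 are not reproduced here.

% Hypothetical specification: p1 -> p2 -> p3 forms the process branch,
% p4 runs on the other branch, and the or-joint fires as soon as p3 or p4
% completes (process-sequence approach).
junction(and_split, [p0], [p1, p4]).
junction(link, [p1], [p2]).
junction(link, [p2], [p3]).
junction(or_joint, [p3, p4], [p5]).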

Dynamically update world state vs. track world state for each timepoint

As we have already mentioned, a process can start execution at some timepoint T if it

is already instantiated (at some timepoint≤T), triggered (at some timepoint≤T) and its

preconditions hold at T. But then arises the question of how to check for

preconditions, hence how to treat the representation and update of the world state:

dynamically update it or keep track of the world state for each timepoint? The choice

of dynamic update means in concrete that the predicates that represent the world state

are dynamic, and hence they can be inserted or deleted from our database without

keeping track of the world state history. Such a choice seems to agree with the

explicit time measurement that we have decided. On the other hand, one could argue


that keeping track of the world state history (e.g. keeping a database where the world

state is given for each timepoint throughout BPM execution) would give us better

control over BPM simulation. However, such a decision could be costly, especially if

the duration of the BPM execution is long. Therefore, we will dynamically update

the world state instead.
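A minimal sketch of what a dynamic update could look like in Prolog, assuming the entity_occ/3 and data/3 predicates of section 4.2.3; the update_data/3 helper is hypothetical and only illustrates the idea that the old record is retracted and the new one asserted, with no history kept.

:- dynamic entity_occ/3.
:- dynamic data/3.

% Hypothetical update step: replace the stored attributes of a data item
% without keeping any record of the previous world state.
update_data(SubjectId, Subject, NewAttributes) :-
    retractall(data(SubjectId, Subject, _OldAttributes)),
    assertz(data(SubjectId, Subject, NewAttributes)).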

Other design issues

Now that the most important design decisions have been made, we will discuss some

other design issues, of relatively minor importance. First, the world state for an

organization will mainly be described by two factors: physical entities that exist in

the world (e.g. a specific customer, or supplier or product) and data that the

organization keeps in its database about the world (e.g. needs on inventory,

information about companies that are regarded as potential suppliers, etc). The

difference between the two is that entities are actual objects of the world, while data

is only information about the objects world. Consequently, actions of processes

change the world state by creating new or deleting already existing entities and data.

In the same way, preconditions of processes have to do with the existence or not of

some entity or data in a certain world state. Last, we define as minimum process

duration 1 and as minimum process cost 0.

4.2 Logical representation of executable BPM

Now that the workflow engine design and assumptions have been decided and

explained, we can move on to the logical representation of the executable business

process model. This formal specification is necessary for the workflow engine to

“understand” the BPM and proceed to its execution. So, it can be seen as a

convention to which the potential user should conform whenever our workflow

engine is to be used for a BPM simulation. The specification of the logical

representation includes the following: junctions, processes, world state (data and

entities) and events.


4.2.1 Junction representation

When one decides to use the workflow engine in order to run a BPM, he first has to

identify the process model by specifying how the various processes are connected with each other, thus describing the junctions of the BPM. The standard predicate for describing a junction is the following:

junction(JunctionType, PreProcesses, PostProcesses)

So, a junction is specified by its type (which can be “start”, “finish”, “link”, “and-

split”, “and-joint”, “or-split” or “or-joint”), the list of the processes preceding it and

the list of the processes following it, where the processes are specified by their ID

(this is covered in the next section). The process model below includes some

example junctions to be modelled, and its specification follows.

Figure 52: An example BPM for the workflow engine

junction(start, [], [p1]).
junction(link, [p1], [p2]).
junction(or_split, [p2], [p3,p4]).
junction(or_joint, [p3,p4], [p5]).
junction(and_split, [p5], [p6,p7]).
junction(and_joint, [p6,p7], [p8]).
junction(finish, [p8], []).

As we have already mentioned in 4.1.3, a junction is considered to be a one-to-many

or many-to-one relation between processes. However, our workflow engine also

supports the case of two junctions connected with each other, thus when a start- or a

finish-junction is involved. In this case the model specification is the following:


junction(start_and, [], [p1,p2]).

junction(start_or, [], [p1,p2]).

junction(finish_and, [p1,p2], []).

junction(finish_or, [p1,p2], []).

Table 5: Model specification of junction-followed-by-junction in our workflow engine

To make things even clearer, let us repeat that the “finish_and” junction above,

behaves as a finish and as an and-joint junction at the same time, thus only when

both p1 and p2 finish execution is the junction hit, indicating the end of the BPM

execution.


4.2.2 Process representation

In order to use the workflow engine to run a BPM, all involved processes must first

be described, so that their characteristics (e.g. preconditions) are known. The process

specification should be included in a separate file where each process is described by

the following predicate:

process(Pid, PName, Trigger, Precond, Action, Duration, Cost)

The decided form of process specification is actually a simplified version of the

executable process predicate suggested by Chen-Burger et al [4], and it includes the

most important and needed information about a process. Pid defines the ID of a

process and it is unique within a BPM, while PName is the name of a process.

Trigger is a list of all the trigger conditions of the process. As we have already

mentioned in chapter 2, a trigger condition is matched with the occurrence of an

external or internal event, which invokes the process. If there are no trigger

conditions for a process, then this is specified by setting Trigger to [true].

Similarly, Precond is a list of all the process’s preconditions, which involve the

existence or not of some entity or data in the entity database. The different types of

preconditions can be seen in the following examples, including the case of no

preconditions, where the value of Precond is the empty list.

process(p1, createProcessor, [true], [], [create_entity(processor, [clockSpeed_3MB, cache_4MB])], 2, 100).

process(s4, urgentlySendComputerInBag, [exist(event_occ(needCompBag)), exist(event_occ(urgentNeedComputer))], [exist(entity_occ(computer)), exist(entity_occ(computerBag)), exist(data(addressToShip))], [create_data(computerInBagSent,[time_today])], 8, 80).

process(pr2, discussWithBoss, [true], [], [], 1, 0).

The Action variable specifies the list of actions that are fired by a process – as we

have already mentioned in 4.1.3, the different types of actions that are supported by

our workflow engine are create entity, create data, delete entity and delete data.

Duration is the (average expected) duration of the process and Cost is the (average

expected) cost of the process. We should note once again that each process has

duration of at least 1, while the minimum cost allowed is 0.


4.2.3 World state representation

When executing a BPM we somehow need to know what our current world looks

like at each timepoint. For instance, in the case of Dell, who are our suppliers, or

what information is there in our database about our customer X? Hence, we need to

know what entities exist in some state and what data there is in the organization’s

database in some timepoint. Since the current state may change (e.g. new entities or

data may be created or deleted), and bearing in mind that we have decided to update

the world state dynamically, the predicates describing entities and data are defined as

dynamic.

:- dynamic entity_occ/3.
:- dynamic data/3.

An entity occurrence is described by its name and ID, as well as its attributes. The

EntityAttribute variable is a list of attribute names and values of the entity (in the form “attrname_attrvalue”). Below, the standard entity specification and an example are provided.

entity_occ(EntityName, EntityId, EntityAttribute)
entity_occ(supplierForProcessor, supp_proc1, [reput_good, cost_expens]).

The existence of data in the organization’s database is specified in a similar way.

Thus, data is described by its SubjectID, its Subject and a list of attributes (in the

form “attrname_attrvalue”, like in the entity_occ case).

data(SubjectID, Subject, Attributes)
data(potSup1, potentialSupplier, [reput_good, cost_expens]).

4.2.4 Event representation

As we have thoroughly explained in section 4.1.3, our workflow engine should be

loaded with a file containing information about events, either internal or external.

This information is necessary in order to invoke a process via its trigger conditions.

As can be seen below, an event occurrence is specified by its ID, its name and the timepoint at which it takes place.

event_occ(EventId, EventName, T)
event_occ(e1, needForProcessor, 3).

4.3 Workflow engine creation

Now that we have presented the workflow engine design and assumptions, and the

logical representation of processes, junctions, events and the world state, we can

move on to the explanation of the main points of the workflow engine creation.

4.3.1 Workflow engine algorithm


Figure 53: Relation of workflow engine with model, process and entity specification

As the figure above shows, the workflow engine uses both the static modeller’s

conceptualization, thus junction and process specification, as well as the dynamic

world description, meaning workflow state and world state, in order to check through

all the possible processes for execution. In fact, the algorithm takes each process of

the BPM and by checking the current state (what processes have already been

executed, what processes are pending, which junctions have been reached, as well as


what entities and data currently exist in the world) and bearing in mind the process’s

preconditions and trigger conditions (specified by the modeller), it decides which

processes can be executed at the current timepoint.

The main workflow engine algorithm is to be seen in Figure 54, and it can be

summarized as follows: In every step there are three main tasks carried out by the

algorithm: execute actions, execute junctions and execute processes. The execution

of actions modifies the current state of the world (data and entity_occ). Junctions are

executed if their type is satisfied (e.g. an or-junction is executed if there is at least

one preceding process that has been triggered and executed), and these executions

create the so-called model instances, which are actually instances of the post-

processes. Then, processes are executed if they already have a model instance

created, if they have already been triggered and if their preconditions hold. When a

process is fired for execution, its completion time is calculated and stored in an

agenda (CompleteProcessAgenda) and its actions are scheduled for its completion

time (ActionsAgenda). At the end of each step (where time is updated), it is checked

whether we have reached the end of the execution of the BPM. If yes, then we stop

and calculate the total cost involved. Otherwise, we move to the next step.


Figure 54: Flowchart of our workflow engine

[Figure 54 content: the workflow engine takes the process model specification (process and junction specification) and the world description (data, entity, event) as input; it loops over "execute action(s)", "execute junction(s)" and "execute process(es)", updating the time (T=T+1) and the Model Instances, Actions Agenda and Complete Process Agenda repositories, and ends when the finish junction is hit, ProcessPendingForCompletion=[ ], ModelInstanceToBeTriggered=[ ] and AndPostProcessPending=[ ].]

To make the above clear, we will show the flow of execution for the following simple BPM,

where:

process(p1, p1, [true], [], [create_entity(car,[colour_red])], 1, 100).
process(p2, p2, [exist(event_occ(needForCar))], [exist(entity_occ(car))], [create_data(carMatch,[quality_good])], 1, 50).

and event_occ(e1, needForCar, 2) and the initial entity database is empty.

[The example BPM: start junction → p1 → p2 → finish junction]

T=0: ActionsAgenda=[]; Execute actions: -; SofarCompletedProcesses=[]; Execute junctions: start; ModelInstances=[p1]; Execute processes: p1; CompleteProcessAgenda=[p1,1]; ActionsAgenda=[create car, 1]

T=1: ActionsAgenda=[create car, 1]; Execute actions: create car; SofarCompletedProcesses=[p1]; Execute junctions: link; ModelInstances=[p2]; Execute processes: -; CompleteProcessAgenda=[]; ActionsAgenda=[]

T=2: ActionsAgenda=[]; Execute actions: -; SofarCompletedProcesses=[p1]; Execute junctions: -; ModelInstances=[p2]; Execute processes: p2; CompleteProcessAgenda=[p2,3]; ActionsAgenda=[create carMatch, 3]

T=3: ActionsAgenda=[create carMatch, 3]; Execute actions: create carMatch; SofarCompletedProcesses=[p1,p2]; Execute junctions: finish; ModelInstances=[]; Execute processes: -; CompleteProcessAgenda=[]; ActionsAgenda=[]

END

Figure 55: Flow state of the execution of a simple BPM using our workflow engine

In the above figure we show what happens (what is executed) and what holds in our database (all the different agendas and lists that we keep in memory for our convenience) at each timepoint. The reader is

expected to be able to follow the flow state, so we will not explain it any further. We

should mention, however, that this is an abstract and quite simplified view of what

takes place in our workflow engine when it is loaded with such a BPM.

We will now present and explain some important parts of the workflow engine code.

The predicate execute_step is the one that controls the flow of the BPM execution. A

step corresponds to one time point, in which three things may happen: actions can

execute, junctions may be reached and executed or processes may start or finish

execution. The execute_step goal presented here is a simplified version of the actual

one, which can be found in Appendix B, and it reflects the flowchart of Figure 54.

execute_step(PreviousActAgenda, JunctionsPending, PreviousJunctionsExecuted,
             PreviousModelInstance, ProcessPending, PreviousProcessExecuted,
             PreviousCompleteProcessAgenda, T) :-
    execute_actions_agenda(PreviousActAgenda, T),
    findall(P,
            ( member([P,CompletionTime], PreviousCompleteProcessAgenda),
              CompletionTime =< T ),
            CompletedProcessTillNow),
    execute_junction_pending(JunctionsPending, PreviousJunctionsExecuted,
                             NowJunctionsExecuted, PreviousModelInstance,
                             NowModelInstance, CompletedProcessTillNow, T),
    execute_process_pending(ProcessPending, PreviousProcessExecuted,
                            NowProcessExecuted, PreviousActAgenda, NowActAgenda,
                            PreviousCompleteProcessAgenda, NowCompleteProcessAgenda,
                            NowModelInstance, T),
    update_time(T, NewT),
    difference(JunctionsPending, NowJunctionsExecuted, NewJunctionsPending),
    difference(ProcessPending, NowProcessExecuted, NewProcessPending),
    execute_step(NowActAgenda, NewJunctionsPending, NowJunctionsExecuted,
                 NowModelInstance, NewProcessPending, NowProcessExecuted,
                 NowCompleteProcessAgenda, NewT).

As the flowchart of Figure 54 illustrates, the BPM finishes execution at the timepoint

when all of the following hold:

i. the finish junction is hit

ii. there is no process on execution (no process that has already started execution

is now waiting to complete execution at some point later)

iii. we are not waiting for some event that will trigger a process of which we

already have a model instance

iv. all post-processes of reached and-split junctions have been successfully

triggered and executed


The first two conditions are trivial to understand, as we cannot say that a BPM has

successfully finished execution if we have not reached the finish junction or if a

process is still on execution (e.g. if a process that started execution at timepoint 3

will finish at timepoint 7, and we are currently at timepoint 5, we cannot say that the

BPM has completed execution). Condition iii) deals with the case where an event is

expected to occur at some later timepoint and this event will trigger a process of

which we have a model instance, and hence could start execution. The importance of

this case has to do with our sophisticated way of treating waiting time, and it is

further explained in Appendix A. Condition iv) has to do with the special case of

and-split junction specification, and how this affects the BPM execution completion

(for further analysis see Appendix A).

Bearing these conditions in mind, the base case of execute_step is easy to

understand:

execute_step(_ActionsAgenda, _JunctionsPending, PreviousJunctionsExecuted,
             PreviousModelInstance, _ProcessPending, PreviousProcessExecuted,
             PreviousCompleteProcessAgenda, T) :-
    ( member(junction(finish, _LastProcess, []), PreviousJunctionsExecuted)
    ; member(junction(finish_and, _LastProcess, []), PreviousJunctionsExecuted)
    ; member(junction(finish_or, _LastProcess, []), PreviousJunctionsExecuted)
    ),
    findall(P,
            ( member([P,CompletionTime], PreviousCompleteProcessAgenda),
              CompletionTime >= T ),
            []),
    findall(NotYetTriggeredProcess,
            ( member(NotYetTriggeredProcess, PreviousModelInstance),
              gets_triggered(NotYetTriggeredProcess, TriggerT),
              TriggerT >= T ),
            []),
    findall(AndPostProcess,
            ( member(AndPostProcess, PreviousModelInstance),
              find_AllAndPostPr(X),
              member(AndPostProcess, X),
              \+ member(AndPostProcess, PreviousProcessExecuted) ),
            []),
    findall(Cost,
            ( process(Pid, _PName, _Trigger, _Precond, _Action, _Duration, Cost),
              member([Pid,_CompletionTime], PreviousCompleteProcessAgenda) ),
            CompletedCosts),
    sum_list(CompletedCosts, TotalCost),
    write('Base case hit!'), nl,
    write('The BPM has finished execution at time '),
    reduce_one(T, NewT), write(NewT), nl, nl,
    write('Results:'), nl,
    write('The junctions executed are: '), write(PreviousJunctionsExecuted), nl,
    write('The processes executed are: '), write(PreviousProcessExecuted), nl,
    write(' with finish times: '), write(PreviousCompleteProcessAgenda), nl,
    write(' and with total cost: '), write(TotalCost).


Even though our workflow engine is designed to be used only for correct BPMs, we

have decided to provide a more flexible option and model the case of an unsuccessful

BPM execution. So, in case the BPM we check does not seem to finish execution,

either because of an untriggered process or because of an unsatisfied process

precondition, then the workflow engine would keep updating time and never reach an

end. In order to avoid this situation, and at the same time provide some feedback to

the user, we have decided to stop BPM execution after some big time point (currently

arbitrarily set to 100, but easy to change), and inform the user that the workflow

engine has been “running” for too long, a situation that probably signifies some

model or entity error.

4.3.2 Interpretation of interesting workflow engine code

We will now comment on some parts of the code that may be of interest to the

reader, such as action definition, junctions handling, checking for process execution,

etc.

Actions

action_result(create_entity(EntityName, EntityAttribute)) :-
    asserta(entity_occ(EntityName, _EntityId, EntityAttribute)).

The clause above defines the create_entity action, which asserts an entity_occ clause

in our world state database. The other actions (delete_entity, create_data,

delete_data) are defined in a similar way.
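For completeness, a possible rendering of two of the remaining actions, mirroring the create_entity clause above; these clauses are a sketch and not necessarily identical to the ones used in the engine.

% Possible counterpart of create_entity: remove a matching entity occurrence.
action_result(delete_entity(EntityName, EntityAttribute)) :-
    retract(entity_occ(EntityName, _EntityId, EntityAttribute)).

% Possible definition of create_data over the data/3 predicate.
action_result(create_data(Subject, Attributes)) :-
    asserta(data(_SubjectId, Subject, Attributes)).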

Junctions

junc_type_satisfied(junction(start,_Pre,_Post), _CompletedProcessTillNow, _T).

junc_type_satisfied(junction(and_joint, Pre, _Post), CompletedProcessTillNow, T) :-
    find_all_PreTriggered(Pre, PreTriggered),
    all_triggered_completed(PreTriggered, CompletedProcessTillNow, T).

Junctions can be reached and processed only if their type is satisfied. Above we can

see two examples of junction satisfaction, an easy one (start) and a more complicated

one (and_joint). The latter is satisfied only if all its triggered pre-processes have

completed execution.
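By analogy, a possible clause for the or-joint case could look as follows; this sketch is not taken from the engine code, but it conveys the intended semantics that one completed pre-process suffices.

% Hypothetical or-joint clause: the junction is satisfied as soon as at least
% one of its pre-processes has completed execution.
junc_type_satisfied(junction(or_joint, Pre, _Post), CompletedProcessTillNow, _T) :-
    member(P, Pre),
    member(P, CompletedProcessTillNow).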


Process execution

The execute_process predicate is of the form

execute_process(Process,ActionsAgenda,CompletionAgenda,NowModelInstance,T)

and it is defined as follows:

execute_process(Process, [Actions,F], [Process, F], NowModelInstance, T) :-
    process(Process, _PName, _Trigger, Precond, Actions, Duration, _Cost),
    member(Process, NowModelInstance),
    findall(Proc,
            ( gets_triggered(Proc, TriggerT), TriggerT =< T ),
            SofarTriggered),
    member(Process, SofarTriggered),
    precondition_holds(Precond),
    F is T + Duration.

This definition agrees with Figure 44 which indicates that a process may be executed

only if it has a model instance, if it is triggered and its preconditions hold.
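For illustration, a hypothetical sketch of how precondition_holds/1 might be defined over the world state representation of section 4.2.3; the actual definition in the engine may differ, and negative preconditions would need an extra clause using negation as failure.

% Every condition in the list must hold against the current world state.
precondition_holds([]).
precondition_holds([exist(entity_occ(Name)) | Rest]) :-
    entity_occ(Name, _Id, _Attributes),
    precondition_holds(Rest).
precondition_holds([exist(data(Subject)) | Rest]) :-
    data(_Id, Subject, _Attributes),
    precondition_holds(Rest).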

4.4 Discussion and conclusions

The developed workflow engine is designed for BPM simulation, and its business

context sensitive approach (expressed in terms of time and cost) lets us check and

compare strategic decisions. Its implementation in Prolog, which adopts FBPML,

satisfies all the design requirements addressed in section 4.1.2 and conforms to its mission, as described in section 4.1.1.

We will now illustrate how the workflow engine can be used and what output it is

expected to give (See Appendix C for a relevant demo). By using the example BPM

of Figure 55, we will show the steps that the user has to follow when simulating a

BPM and the relevant output of the system. First, the processes of the BPM have to

be defined and stored in a file (let’s call it “myProcess”), the junctions of the BPM

have to be defined and stored in a file (let’s call it “myJunctions”), the initial world

state has to be described in terms of entity_occ and data and stored in a file (let’s call

it “myWorld”), and the event occurrence list has to be specified and stored (let’s

store it in “myWorld”).

[Figure 56 content: the process specification, junction specification, initial world description and event occurrences list are fed into the workflow engine, which produces the results: total time, total cost and the real-time workflow state.]

Figure 56: Graphical representation of workflow engine use

So, for our example BPM the relevant files would look like this:

myProcess:
process(p1, p1, [true], [], [create_entity(car,[colour_red])], 1, 100).
process(p2, p2, [exist(event_occ(needForCar))], [exist(entity_occ(car))], [create_data(carMatch,[quality_good])], 1, 50).

myJunctions:
junction(start, [], [p1]).
junction(link, [p1], [p2]).
junction(finish, [p2], []).

myWorld:
event_occ(e1, needForCar, 2).

After these files are loaded, if we type run_bpm. then the BPM starts execution and

we get the following output:

--------------------------------------
Time=0
The completed processes till now are []
Junction junction(start,[],[p1]) hit
Model instances of processes [p1] created
The SofarTriggered processes are [p1]
Process p1 starts now execution till timepoint 1
 and actions [[create_entity(car,[colour_red])],1] are added to the ActionsAgenda
--------------------------------------
Time=1
The following actions are executed: [create_entity(car,[colour_red])]
Action create_entity(car,[colour_red]) executed
The completed processes till now are [p1]
Junction junction(link,[p1],[p2]) hit
Model instances of processes [p2] created
The SofarTriggered processes are [p1]
The SofarTriggered processes are [p1]
--------------------------------------
Time=2
The following actions are executed: []
The completed processes till now are [p1]
The SofarTriggered processes are [p1,p2]
Process p2 starts now execution till timepoint 3
 and actions [[create_data(carMatch,[quality_good])],3] are added to the ActionsAgenda
--------------------------------------
Time=3
The following actions are executed: [create_data(carMatch,[quality_good])]
Action create_data(carMatch,[quality_good]) executed
The completed processes till now are [p2,p1]
Junction junction(finish,[p2],[]) hit
Model instances of processes [] created
--------------------------------------
Base case hit!
The BPM has finished execution at time 3
Results:
The junctions executed are: [junction(finish,[p2],[]),junction(link,[p1],[p2]),junction(start,[],[p1])]
The processes executed are: [p2,p1]
 with finish times: [[p2,3],[p1,1]]
 and with total cost: 150
yes

The above example shows that the workflow engine behaves the way it should and

gives meaningful feedback to the user. Let us now give some advice to users who wish to use our workflow engine for a BPM that involves a loop: Since

our workflow engine does not support loop handling, the BPM can be transformed

appropriately, so that it can be dealt with by our workflow engine. The BPM

transformation involves the creation of new copies of the processes of the loop. A

relevant example is given in Appendix A.
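As a small hypothetical sketch of this transformation, suppose a loop repeats a process p_loop twice before the model continues; the unrolled junction specification would then contain one copy of the process per expected iteration (together with a corresponding copy of its process/7 fact). The process names here are invented for illustration.

junction(start, [], [p1]).
junction(link, [p1], [p_loop_1]).
junction(link, [p_loop_1], [p_loop_2]).
junction(link, [p_loop_2], [p2]).
junction(finish, [p2], []).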

To sum up, we believe that the developed workflow engine serves its mission and

objectives, as addressed in 4.1.1, but also provides extra flexibility by detecting incorrect BPMs and by dealing with loops to some extent.


Chapter 5

Experiments

Now that we have developed the workflow engine, we can use it to create an

executable version of Dell’s business process model, and hence move to the

implementation layer of the Three-Layered Business Process Modelling Approach.

The goal of this version is to experiment with time and cost and reason about Dell’s

supply chain strategies. Under this scope, there is no point in executing every single

process model (out of the 20 in total!) of Dell’s BPM, but only the ones that illustrate

Dell’s supply chain strategies. So, we have decided to model and execute two

process models (of Dell’s enriched BPM version): “Buy standard item to order”

(process 2) and “Sell directly to large business and public sector customers” (process

4.2), as these best reflect Dell’s main supply chain strategies, thus direct

sales and build-to-order. After the simulation of these two processes, experiments

will take place. Our experiments are expected to answer two questions: First,

whether we can improve Dell’s actual BPM by making some processes parallel, and

second, whether Dell’s BPM is actually better (in terms of time and/or cost) than the

BPM of a traditional computer company.

5.1 Dell’s BPM simulation

We have decided to simulate two of Dell’s processes, “Buy standard item to order”

(process 2) and “Sell directly to large business and public sector customers” (process

4.2). These processes have been chosen because of their close relation with Dell’s

basic supply chain strategies, thus direct sales and build-to-order. The simulation

consists of two steps: The first one is the specification and representation of the

involved processes and junctions, of the initial world state and the events’ list. The

second is the actual execution with the help of the workflow engine, and the related

results.


5.1.1 Dell’s BPM specification and representation

In this section we will present and explain the specification and representation of the

two processes to be executed, thus process 2 and 4.2. For each process we will first

provide the process specification and explain some relevant assumptions, and then

we will describe the initial world state and the events’ list. Note that we will focus on

the process specification, as this is actually the cornerstone of the execution

procedure; hence, a full analysis and representation will be provided here for the

process specification, while the other topics will be covered partly here and partly in

Appendix D.

Process 2: Buy standard item to order

Let us first provide once again the BPM for “Buy standard item to order”, as it was

presented in Figure 30.

Figure 57: Decomposition of “Buy standard item to order” (Enriched version)

The relevant process specification is presented in Table 6, and it is a more readable

version of the actual code, which can be found in Appendix D. We should point out

here that the relevant data (trigger conditions, preconditions, actions, duration and

cost) were not found in the literature, but are actually assumptions based on our

understanding of Dell’s operations and the business world in general. We have

decided to measure duration and cost in an abstract way rather than an absolute one,

thus duration values correspond to time units and cost is measured in money units. Also note that the create_event predicates in the Action column are process post-conditions that involve event invocation; since our workflow engine does not support such actions, the relevant

information will be supplied to the workflow engine via the event occurrence list,

where every such event is scheduled for the finish time of the process that invokes it.


Process Specification of “Buy standard item to order” BPM
(Duration in time units, Cost in money units)

p2_1 identifyOwnNeeds
  Trigger: -
  Precondition: exist_entity(needsOnXinventory)
  Action: create_data(needsOnXinventory), create_data(currentXInventoryLevel), create_event(needForNewXinventory)
  Duration: 1; Cost: 200

p2_2 identifyPotentialSuppliers
  Trigger: needForNewXinventory
  Precondition: exist_data(needsOnXinventory), exist_data(relevantXsuppliers), not_exist_entity(supplierX)
  Action: create_data(potentialXsuppliers)
  Duration: 21; Cost: 1000

p2_3 selectSupplier
  Trigger: needForNewXinventory
  Precondition: exist_data(potentialXsuppliers), not_exist_entity(supplierX)
  Action: create_entity(supplierX)
  Duration: 21; Cost: 1000

p2_4 negotiate
  Trigger: needForNewXinventory
  Precondition: exist_entity(supplierX), exist_data(needsOnXInventory), not_exist_entity(contractXsupplier)
  Action: create_entity(contractXsupplier), create_event(integrateXsupplier), create_entity(valueChainForX)
  Duration: 8; Cost: 750

p2_5 shareInfo
  Trigger: integrateXsupplier
  Precondition: exist_entity(valueChainForX), exist_data(currentXInventoryLevel), exist_data(demandForecast), exist_data(generalBusinessInfo)
  Action: create_data(sharedCurrentXInventoryLevel), create_data(sharedDemandForecast), create_data(sharedGeneralBusinessInfo)
  Duration: 1; Cost: 100

p2_6 getInventory
  Trigger: lowXinventory, integrateXsupplier
  Precondition: exist_entity(contractXsupplier), exist_data(sharedCurrentXInventoryLevel), exist_data(sharedDemandForecast)
  Action: create_entity(inventoryX), create_event(arriveInventoryX), delete_data(currentXInventoryLevel), delete_data(sharedCurrentXInventoryLevel), create_data(updatedXInventoryLevel)
  Duration: 1; Cost: 1000

p2_7 paySupplier
  Trigger: arriveInventoryX
  Precondition: exist_entity(money)
  Action: create_data(supplierXpaid)
  Duration: 1; Cost: 40

p2_8 manageSupplier
  Trigger: integrateXsupplier
  Precondition: -
  Action: -
  Duration: 20; Cost: 1000

Table 6: Process specification of “Buy standard item to order”


Process p2 shows the whole procedure for buying a standard item to order, when

there is no supplier for this item yet, thus when the item is new. The specification of

process p2_1 is an excellent example to illustrate the difference between entity and

data in world description, that we have explained in 4.1.3. As the table above shows,

Dell is always in position to identify its own needs on a new item (the trigger

condition is true), a fact that reflects the high importance that Dell places on quickly

adapting to new situations. The process starts execution once there is an actual need

on some specific new item (let’s call it X), which has not been needed in the past.

This need is represented by an entity in Dell’s world representation. Once such a

need is identified, Dell keeps a record of this need in its database (hence the relevant

create_data action). So, even though there may be a need (as an entity) for some

item, it is only when it is identified by the company (and tracked as a fact, thus data)

that it is recognized and can fire the processes following p2_1 (hence create_event).

Process p2_1 is assumed to be short in duration (only 1 time unit) and not costly (100

money units).
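For concreteness, a possible rendering of p2_1 in the process/7 format of section 4.2.2, based on Table 6; the attribute lists are illustrative, and the exact clause used in the experiments is the one listed in Appendix D.

process(p2_1, identifyOwnNeeds,
        [true],                                   % always ready to identify needs
        [exist(entity_occ(needsOnXinventory))],   % an actual need exists in the world
        [create_data(needsOnXinventory, [item_X]),
         create_data(currentXInventoryLevel, [level_low])],
        1, 200).
% The needForNewXinventory event is not fired as an action; it is supplied
% through the event occurrence list instead (event e1 at timepoint 1 below).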

Once the internal event needForNewXinventory takes place, processes p2_2, p2_3

and p2_4 are triggered. The identification of potential suppliers can start only if there

is no supplier already for this item and given that Dell has a list of suppliers which

are relevant with item X. Once the potential suppliers are identified, Dell stores

information about them in its database.

The supplier selection process (p2_3) requires the existence of such a list, so that

Dell can choose one supplier among all potential ones. Since Dell has high

expectations form its suppliers, the searching and selection procedure usually lasts

longer and costs more than in a traditional computer company, thus 21 time units and

1000 money units for each process.

If a supplier is found, the negotiation process can start execution, which is expected

to complete in about 8 time units with a signed contract from both sides, and with the

invocation of integration with the supplier. The internal event of integration fires the

collaboration between Dell and its suppliers, and thus triggers all the following

processes.


The info-sharing procedure begins (given that ValueChain is customized for the

supplier, and there is information to share) and since it is electronic, its duration and

cost are low. Dell keeps a record of the shared information in its database, and thus

the create_data for shared information are the relevant actions.

As we have seen in chapter 2, this information sharing allows Dell to place orders, if

any, late and demand fast delivery from its suppliers (usually the following day). In

fact, it is quite rare that orders are placed; instead the suppliers decide themselves

whether and how much inventory Dell would need at a certain point. So, if there is

an internal event that signifies a need on inventory X, and the supplier has

knowledge about this, then delivery starts execution; since most suppliers are obliged

to maintain inventory close to Dell’s plants, delivery lasts only one day (here: time

unit). Once inventory arrives to the assembly plant, the supplier’s payment is

triggered.

Process p2_8 is quite complex, and it involves supplier evaluation and policies and

relationships management. We assume that it lasts for about 21 time units and costs

around 1000 money units – both values are much higher than in a traditional

computer company because of Dell’s decision to integrate and collaborate closely

with its suppliers.

Now that we have described the specification of process 2, we can move on to the

initial world description and the events’ list. In the initial world state Dell has money

in its bank account and a new need for item X, while its database contains

information about suppliers relevant for X, demand forecast for some product and

some general business information:
entity_occ(needsOnXinventory, ent1, [item_X]).
entity_occ(money, ent2, [euros_3000]).
data(d1, relevantXsuppliers, [item_X, [sup1_good, sup2_ok]]).
data(d2, demandForecastX, [item_X, time_oneWeek, level_2000]).
data(d3, generalBusinessInfo, [increasingImportance_edi]).

The event occurrence list is defined according to the expected finish times of the

related processes, and hence we have:

event_occ(e1, needForNewXInventory, 1).
event_occ(e2, integrateWithXsupplier, 51).
event_occ(e3, lowXInventory, 52).
event_occ(e4, arriveInventoryX, 53).
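Note that, assuming sequential execution of p2_1 to p2_4, the integrateWithXsupplier event at timepoint 51 matches the expected finish time of the negotiation process: 1 + 21 + 21 + 8 = 51 time units (durations taken from Table 6); e3 and e4 then follow the expected finish times of p2_5 and p2_6.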

Process 4.2: Sell directly to large business and public sector customers

The figure below is a reproduction of Figure 41, which represents the BPM for “Sell

directly to large business and public sector customers”.

Figure 58: Decomposition of “Sell directly to large business and public sector

customers” (Enriched version)

The relevant process specification is presented in the following Table 7, and, like Table 6, it is a more readable version of the actual code, which can be found in

Appendix D. Again, trigger conditions, preconditions, actions, duration and cost

were not found in literature, but are assumptions we have made.

Process Specification of “Sell directly to large business and public sector customers” BPM
(Duration in time units, Cost in money units)

p4_2_1 identifyPotentialCorporateCustomers
  Trigger: -
  Precondition: exist_entity(potentialCorporateCustomer)
  Action: create_data(potentialCorporateCustomer), create_entity(corporateCustomer), create_data(customerAddress), create_event(newCorporateCustomer)
  Duration: 25; Cost: 750

p4_2_2 identifyCorporateCustomersNeeds
  Trigger: newCorporateCustomer
  Precondition: exist_entity(corporateCustomer)
  Action: create_data(corporateCustomerNeeds)
  Duration: 8; Cost: 100

p4_2_3 identifyCorrespondingConfigurations
  Trigger: newCorporateCustomer
  Precondition: exist_entity(corporateCustomer), exist_data(corporateCustomerNeeds), exist_data(productSpecifications)
  Action: create_data(customerConfigurations)
  Duration: 3; Cost: 100

p4_2_4 informCorporateCustomersViaPremierPages
  Trigger: newCorporateCustomer
  Precondition: exist_entity(corporateCustomer), exist_data(customerConfigurations)
  Action: create_entity(customerPremierPage), create_event(integrateCustomerViaPremierPage)
  Duration: 10; Cost: 350

p4_2_5 obtainOrder
  Trigger: integrateCustomerViaPremierPage, customerNeedOnProduct
  Precondition: exist_entity(customerPremierPage)
  Action: create_data(customerOrder), create_event(customerOrder)
  Duration: 7; Cost: 40

p4_2_6 receivePayment
  Trigger: customerOrder, customerPayment
  Precondition: exist_entity(customerPremierPage)
  Action: create_data(customerOrderPaid), create_event(paidCustomerOrder)
  Duration: 1; Cost: 40

p4_2_7 deliverOrder
  Trigger: paidCustomerOrder, assembledCustomerOrder
  Precondition: exist_entity(orderedItems), exist_data(customerAddress)
  Action: create_data(customerOrderDelivered)
  Duration: 5; Cost: 200

p4_2_8 manageBigCustomerRelationships
  Trigger: newCorporateCustomer
  Precondition: -
  Action: -
  Duration: 2; Cost: 300

Table 7: Process specification of “Sell directly to large business and public sector

customers”

Process p4_2_1 involves the identification of potential corporate customers and

individual customers (employees) within the organization (hence the

create_data(potentialCorporateCustomer) action), and it actually signifies whether a

potential customer is interested in buying from Dell, and thus whether he will

become a customer (this is represented by create_entity(corporateCustomer)). There

is no trigger condition, as Dell is constantly looking for customers, thus is always

ready to approach a potential customer. The process has long duration and high costs,

as persuading a potential corporate customer is not an easy job. Once a new customer

is identified, processes p4_2_2, p4_2_3, p4_2_4 and p4_2_8 are triggered.


So, once a customer is identified, Dell discusses and identifies the organization’s

needs on products (process p4_2_2, which takes about 8 time units and costs around

100 money units), and then specifies configurations that correspond to these needs.

This requires explicit description of the customer’s needs as well as explicit

description of each product’s specifications.

Customers are informed about the purchasing options they have through the

company’s customized Premier Page, which is a result of process p4_2_4, and takes

about 10 time units to be finished. Once a customized Premier Page is created, and

given that the customer is “ready” to order (thus the trigger condition

customerNeedOnProduct), the employees of the organization can choose the

products that suit them, while the purchasing team of the company monitors the

whole procedure, thus leading to duration of 10 time units for the ordering process. It

is worth mentioning here that the electronic form of ordering saves Dell and the

corporate customer time and money, a situation that holds for the payment process as

well. Hence, when a customer order is placed and the customer pays electronically, the whole monitoring procedure is accelerated, and thus has a duration of 1 time unit.

The order delivery process is triggered at the moment when a customer has paid and

the ordered products have been assembled, and after its execution Dell updates its

database about the delivery success. Process p4_2_8 involves supporting and getting

feedback from the customer, and it is triggered once a new customer is found.

Now we can proceed to the initial world description and the events’ list. In the initial

world state Dell has some potential corporate customers, and its database contains

information about its product specifications. We should note here that we will

include the existence of items to be ordered in the initial world state (even though

this is not actually the case) because these entities are a product of process 3, which

actually runs in parallel with 4.2, and our workflow engine cannot force the creation

of the items during the execution of 4.2. So the initial world state is described by the

following:


entity_occ(potentialCorporateCustomer, ent1, [potCC_custA]).
entity_occ(orderedItems, ent2, [prodID_sk32, orderID_thre34, amount_5000]).
data(d1, productSpecifications, [prodID_sk32, performance_medium, media_good]).

The event occurrence list is defined according to the expected finish times of the

related processes, and hence we have:
event_occ(e1, newCorporateCustomer, 25).
event_occ(e2, integrateCustomerViaPremierPage, 46).
event_occ(e3, customerNeedOnProduct, 46).
event_occ(e4, customerOrder, 53).
event_occ(e5, customerPayment, 53).
event_occ(e6, paidCustomerOrder, 54).
event_occ(e7, orderAssembled, 55).

5.1.2 Dell’s BPM simulation results

Now that the two processes have been specified and logically represented, we can

load the specification to the workflow engine to simulate their actual execution. In

this section we will present only the final results, thus the total time and cost related

to each BPM. The detailed output of the system can be found in Appendix D.

Process 2: Buy standard item to order

From the final output below we can see that the total execution time is 71 time units

and the involved cost is 5090 money units.

The BPM has finished execution at time 71

Results:
The junctions executed are: [junction(finish_and,[p2_7,p2_8],[]), junction(link,[p2_6],[p2_7]), junction(link,[p2_5],[p2_6]), junction(and_split,[p2_4],[p2_5,p2_8]), junction(link,[p2_3],[p2_4]), junction(link,[p2_2],[p2_3]), junction(link,[p2_1],[p2_2]), junction(start,[],[p2_1])]
The processes executed are: [p2_7,p2_6,p2_8,p2_5,p2_4,p2_3,p2_2,p2_1]
 with finish times: [[p2_7,54],[p2_6,53],[p2_8,71],[p2_5,52],[p2_4,51],[p2_3,43],[p2_2,22],[p2_1,1]]
 and with total cost: 5090

Table 8: Simulation results of Dell’s “Buy standard item to order”


Process 4.2: Sell directly to large business and public sector customers

According to the results of the simulation of process 4.2, selling computers

directly to a new big customer takes 60 time units and costs 1880 money units.

The BPM has finished execution at time 60

Results:
The junctions executed are: [junction(finish_and,[p4_2_7,p4_2_8],[]), junction(link,[p4_2_6],[p4_2_7]), junction(and_split,[p4_2_5],[p4_2_6]), junction(link,[p4_2_4],[p4_2_5]), junction(link,[p4_2_3],[p4_2_4]), junction(link,[p4_2_2],[p4_2_3]), junction(and_split,[p4_2_1],[p4_2_2,p4_2_8]), junction(start,[],[p4_2_1])]
The processes executed are: [p4_2_7,p4_2_6,p4_2_5,p4_2_4,p4_2_3,p4_2_8,p4_2_2,p4_2_1]
 with finish times: [[p4_2_7,60],[p4_2_6,54],[p4_2_5,53],[p4_2_4,46],[p4_2_3,36],[p4_2_8,27],[p4_2_2,33],[p4_2_1,25]]
 and with total cost: 1880

Table 9: Simulation results of Dell’s “Sell directly to large business and public sector

customers”

5.1.3 Discussion of Dell’s BPM simulation results

When someone tries to interpret the above results, it is important to keep in mind that

these involve buying a standard item from a new supplier and selling directly to a

new big customer. This information explains the relatively long duration and high

costs related to the execution of the two processes. In fact, Dell differs from

traditional computer companies in the fact that it aims to virtually integrate with its

supply chain partners, thus suppliers and customers. This has as a consequence that

building the relation with the SC partner takes longer and costs more than usual; however, Dell profits from this situation in the long run, as the daily collaboration with the existing suppliers (for buying items to order) or customers (for order receipt

to model separately the case of an existing supplier or customer, and we can illustrate

it with the following two examples:

Let’s suppose that Dell has been collaborating with Intel for processors supplying for

some years now. This means that their relationship is established, thus processes 2.1,

2.2, 2.3 and 2.4 have already finished execution (contracts have already been signed),


while process 2.8 is constantly “running”. Their daily collaboration for processors’

supply consists then of info-sharing, inventory receipt and payment (represented by

processes 2.5, 2.6 and 2.7), and their execution takes only 3 time units and costs

1140 money units (the corresponding time and cost of the three processes).
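This figure can be checked with a small hypothetical helper that sums the durations and costs of a list of processes, assuming process/7 facts for p2_5, p2_6 and p2_7 with the durations and costs of Table 6 are loaded; daily_collaboration/3 is not part of the workflow engine and only serves as a quick sanity check.

daily_collaboration(Processes, TotalDuration, TotalCost) :-
    findall(D, ( member(P, Processes),
                 process(P, _N1, _T1, _Pre1, _A1, D, _C1) ), Durations),
    findall(C, ( member(P, Processes),
                 process(P, _N2, _T2, _Pre2, _A2, _D2, C) ), Costs),
    sum_list(Durations, TotalDuration),
    sum_list(Costs, TotalCost).

% ?- daily_collaboration([p2_5, p2_6, p2_7], T, C).
% T = 3, C = 1140   (1+1+1 time units; 100+1000+40 money units)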

Similarly, we can suppose that Boeing has been a customer of Dell for some years now (thus an existing customer), and hence processes 4.2.1, 4.2.2 and 4.2.3 have already been

executed, while 4.2.8 is constantly “running”. Dell keeps informing Boeing about

products that may be of interest (hence process 4.2.4, which now has shorter duration

of 1 time unit and lower cost of 50 money units, as the customized Premier Page

already exists and only needs to be updated), and Boeing may order from time to

time, thus processes 4.2.5, 4.2.6 and 4.2.7 may execute several times. So, the

purchasing collaboration between Dell and Boeing would actually take only 15 time

units and cost 330 money units.

Another point worth mentioning is the way we treat the “single case” requirement

mentioned in chapter 3. In process 2, the single case corresponds to buying from one

supplier items of one type in a fixed amount (let’s say 100 items). Similarly, in

process 4.2, the single case corresponds to selling directly to one big customer items

of one order in a fixed amount (let’s say 1000 products on average).

5.2 Experiments & Results

After the realistic versions of process 2 and process 4.2 have been simulated, we can proceed to the experiments. The experiments are an important part of this work, as they are expected to shed light on Dell's supply chain strategies and operations. We should, however, keep in mind that the realistic versions of the processes are to a large extent based on assumptions about time and cost; hence, one should reason with respect to the magnitude of these business goals rather than the actual values (e.g. a duration difference of one time unit is not important, in contrast to a difference of seven time units).


The experiments we wish to conduct are designed to answer two questions for each

BPM:

- 1st experiment: Can the actual BPM be improved (e.g. by making two sequenced processes parallel)?
- 2nd experiment: How does Dell's BPM differ from the corresponding BPM of a traditional computer company?

In the following sections we will try to answer the two questions for each business

process model with the use of our workflow engine, and then we will discuss the

relevant results.

5.2.1 Experiment 1: Improve the actual BPM

We are interested to see whether Dell has organized its supply chain operations in the best way, and hence whether there are any recommendations for their improvement. In other words, we will try to change each BPM by parallelizing two or more sequenced processes. If no processes can be parallelized, then no further improvement of Dell's actual BPMs is possible within this framework.

We will not check every possible combination of processes within a BPM, but only those that it makes sense to parallelize. In order to check the alternative BPM conceptualizations, only the junctions of the model will be changed (sequenced processes turned into parallel ones), while the process descriptions and the initial world state will not be altered. Hence, checking will actually involve testing whether the trigger conditions and preconditions are satisfied for the new BPM conceptualization under the previous process and world circumstances. Also, the checking procedure will mainly be manual, but it can sometimes be supported by the workflow engine. (Remember that the workflow engine is not designed to provide validation and verification, but its flexibility allows us to use it for some basic checking.)

Process 2: Buy standard item to order

If we have another look at Figure 57, we will see that there are already some parallel processes in this BPM. So, this BPM could, theoretically, be improved in two ways: either make some of the first processes (2.1, 2.2, 2.3 and 2.4) parallel, or push the last sequenced process, 2.4, into the and-split-and-joint block.

The first does not seem very plausible, as such a transformation would not make sense semantically (potential suppliers cannot be identified if needs have not first been identified, a supplier cannot be selected if no potential supplier has been identified first, and contracts can be negotiated only if a supplier has already been found). However, we will check the case of parallelizing processes 2.2 and 2.3, so that we are completely certain about this claim. So, the difference from the BPM of Figure 57 will concern the part shown in the figure below:

Figure 59: Original (sequenced) and parallelized part of “Buy standard item to order” for

experiment 1, version 1

In this experiment we will transform the junctions' specification appropriately and keep the same process and initial world state descriptions as before. If we have a look at the process descriptions of 2.2 and 2.3 in Table 11, we will see that process 2.3 has as a precondition the existence of data about potential suppliers, which is actually a post-condition of process 2.2. This means that 2.2 has to have completed execution before 2.3 starts execution. Hence, the two processes have to be sequenced, and even if we create a model where they are set as parallel, they will actually execute as sequenced. This is verified by our workflow engine as well, as altering the junctions' specification and executing the BPM gave us exactly the same results as before (total time 71 and total cost 5090, with p2_3 starting execution at timepoint 22, after the completion of p2_2). So, making processes 2.2 and 2.3 parallel does not improve the BPM execution.
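To make the change concrete: only the junction specification read by the workflow engine is altered, while the process definitions and the initial world state stay exactly as before. A minimal sketch of the altered fragment, written with the junction(Type, InputProcesses, OutputProcesses) terms that appear in the simulation output (the junction names used here, in particular the mid-model and-joint, are assumed for illustration and are not copied from the engine's actual definition files), might look as follows:

% original, sequenced fragment (assumed form):
junction(link, [p2_1], [p2_2]).
junction(link, [p2_2], [p2_3]).
junction(link, [p2_3], [p2_4]).

% parallelized fragment for experiment 1, version 1 (assumed form):
junction(and_split, [p2_1], [p2_2, p2_3]).
junction(and_joint, [p2_2, p2_3], [p2_4]).

Even with this respecification, the engine delays p2_3 until an action of p2_2 satisfies its precondition, which is why the simulation results remain identical.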


Another way to treat the same experiment would be to relax the BPM (e.g. by relaxing the preconditions of 2.3), allowing 2.2 and 2.3 to execute in parallel and in an interleaved way. This means that potential suppliers would be continuously identified, and for every identified potential supplier the selection procedure would execute (in parallel with the identification of the next potential supplier). Such an alternative would be expected to have a shorter duration than the original sequenced version, but still longer than the maximum duration of the two processes. Also, note that such a BPM transformation raises questions of a managerial-business nature, namely whether a company would approve of and be comfortable with such a choice.

The second experiment of the same type involves pushing process 2.4 into the and-

split-and-joint block, thus changing the part shown in the following figure in the way

presented.

Figure 60: Original (sequenced) and parallelized part of “Buy standard item to order” for

experiment 1, version 2

Once again, this experiment does not seem semantically very plausible, as processes 2.5 and 2.8 require that a contract with the supplier be signed, and thus process 2.4 has to have finished execution before 2.5 and 2.8 begin. This is understandable, as the collaboration between Dell and its supplier has to be official before information sharing and relationship management take place. This is also made clear logically, as process 2.4 triggers processes 2.5 and 2.8. So, as in the previous experiment, the BPM will behave like the sequenced one. This is verified by the simulation with our workflow engine, which gives us exactly the same results as before (total time 71 and total cost 5090, with p2_5 and p2_8 starting execution at timepoint 51, after the completion of p2_4). So, making process 2.4 parallel with 2.5, 2.6, 2.7 and 2.8 does not improve the BPM execution.

Additionally, we could consider the case of relaxing the BPM, as before. However, relaxing the BPM in this case would create business risks, as sharing information with a (potential) supplier with whom no contract has been signed could cause security problems for Dell. So, such an alternative is not recommended.

From the above experiments it seems that Dell has organized its operations

regarding buying items to order in the best way.

Process 4.2: Sell directly to large business and public sector customers

We will now check whether process 4.2 can be improved by turning sequenced processes into parallel ones. If we have a look at Figure 58, we will see that there is a big and-split-and-joint block, which involves processes 4.2.2-4.2.7 in one branch and 4.2.8 in the other. So, this BPM could, theoretically, be improved again in two ways: either push process 4.2.1 into the and-split-and-joint block or make some of the processes of the longer branch parallel.

The first approach is once again not semantically plausible, because processes 4.2.2 and 4.2.8 can only start execution once a customer has been created. This is also shown logically, as according to the process specification, 4.2.1 triggers both of them. We will not analyse this further, as it is the same case as for process 2 (pushing 2.4 into the and-split-and-joint block). So, making process 4.2.1 parallel with 4.2.2-4.2.7 and 4.2.8 does not improve the BPM execution.

The second approach would be to make some of the processes of the longer branch parallel. We will try the case of making 4.2.2 and 4.2.3 parallel. As before, 4.2.3 requires 4.2.2 to have finished execution (its precondition, the existence of data about the customer's needs, is actually a post-condition of 4.2.2). So, the BPM execution is not improved.


However, there is a related recommendation that could actually be helpful: the two processes could be combined, in the same sense that process 1, "Design Product and Process", combines the processes "Design Product" and "Design Process". So, we could create a new process, namely "Identify corporate customer's need and corresponding configurations", which would mean that Dell talks with the customer to identify his needs while at the same time orienting the discussion of those needs around Dell's products. So, the customer's needs would be identified at the same time as the appropriate configurations. If this can be achieved and interests the business world (and hence both sides, seller and buyer), then time and cost advantages could follow, as the parallel version of the two processes would reduce mistakes to a great extent. However, this is something that should be tested in practice, and we cannot argue for this option any further here.

5.2.2 Experiment 2: Compare Dell with a traditional computer company

We are interested to see in which way Dell differs from a traditional computer

company, as far as buying inventory and selling computers to customers is

concerned, and what difference there is in the corresponding duration and cost. So,

for this experiment we will assume that a traditional company, named

“myCompany”, is representative of the average traditional computer company (e.g.

IBM, Compaq, HP, etc.) and we will create and simulate the BPMs of myCompany

that correspond to process 2 and process 4.2. Note here that several assumptions will be made, based on our knowledge of the computer industry and on the references in the Dell-related literature to its traditional competitors.

Dell’s “Buy standard item to order” vs. myCompany’s “Buy standard

item to stock”

We will now compare the process of buying standard items at Dell and at a traditional company in terms of time and cost. Most computer companies do not buy items to order; instead they buy items to keep as inventory. This tactic is very costly on the one hand (it has been claimed that keeping and managing inventory raises costs by 50%), but it provides flexibility to face demand instability on the other hand. Unfortunately, our BPMs, which are based on the MIT Process Handbook, do not reflect whether inventory is kept or not, so we will basically focus on the very same buying process and the related time and cost.

The “Buy standard item to stock” BPM of myCompany is based on the suggestion of

the MIT Process Handbook for Dell’s corresponding process (see Figure 19 in

section 3.2), and we believe that it reflects the relevant procedure in a traditional

computer company. Its reproduction can be seen in the following figure.

Figure 61: myCompany’s BPM for “Buy standard item to stock”

One could argue that it does not differ much from Dell's corresponding BPM, as the junctions' specification is quite similar. However, in reality there are some important differences that concern aspects other than the junctions' specification. This means that the details of the processes' specification differ between the two companies, especially duration and cost. We will not define each process of Figure 61 for myCompany here (the detailed description can be found in Appendix D), but we will explain what duration and cost are assigned to each one.

As we have already mentioned, Dell differs from traditional computer companies in that it has very high expectations of its suppliers (fast delivery, close collaboration, high quality standards, etc.), and hence it is difficult to find a new supplier that will meet these high expectations. On the contrary, traditional PC companies do not have such high standards, and hence do not spend as much time and money on finding a new supplier as Dell does. So, the identification of potential suppliers for myCompany is assumed to last 15 time units and cost 700 money units, while the supplier selection (which, for myCompany, incorporates the contract negotiation procedure as well) is assumed to last around 13 time units (5 time units for the selection and 8 for the contracts) and cost 1000 money units. On the other hand, the collaboration between a traditional company and its (existing) supplier is longer and more costly compared to Dell, as no virtual integration is aimed for; let us not forget the great advantages that Dell gains from ValueChain. So, the order placement procedure lasts 2 time units and costs 100 money units, the payment procedure takes another 2 time units and costs 40 money units, and the order takes about 10 time units to arrive and costs 1000 money units. Based on the Dell-related literature, we assume that process 2.1 is the same as for Dell (duration of 1 time unit and cost of 200 money units), while process 2.7 is assumed to be shorter and cheaper for myCompany (about 10 time units and 400 money units), as Dell places unusually great importance on its relationships with its suppliers.

After the process specification, the event list and the initial world state have been specified for myCompany (for further information see Appendix D), we can move on to the simulation of the company's "Buy standard item to stock" BPM. According to the simulation results, the total execution time is 43 time units and the total involved cost is 3440 money units. The more detailed simulation results can be seen below:

Base case hit! The BPM has finished execution at time 43

Results: The junctions executed are:
  [junction(finish_and,[p2_6,p2_7],[]), junction(link,[p2_5],[p2_6]),
   junction(link,[p2_4],[p2_5]), junction(and_split,[p2_3],[p2_4,p2_7]),
   junction(link,[p2_2],[p2_3]), junction(link,[p2_1],[p2_2]), junction(start,[],[p2_1])]
The processes executed are:
  [p2_6,p2_5,p2_7,p2_4,p2_3,p2_2,p2_1]
with finish times:
  [[p2_6,43],[p2_5,41],[p2_7,39],[p2_4,31],[p2_3,29],[p2_2,16],[p2_1,1]]
and with total cost: 3440

Table 10: Simulation results of myCompany’s “Buy standard item to stock”
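As a cross-check, these totals are consistent with the per-process assumptions above: the durations along the main branch sum to 1 + 15 + 13 + 2 + 10 + 2 = 43 time units, process 2.7 (about 10 time units) runs on the parallel branch and finishes earlier, at timepoint 39, and the costs sum to 200 + 700 + 1000 + 100 + 1000 + 40 + 400 = 3440 money units.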

If we compare the above results with the corresponding ones for Dell (see the following figures), we will see that buying standard items to order costs Dell much more time and money than buying to stock costs a traditional computer company. This was actually expected, as the results concern the case of a new supplier, and since Dell places great importance on finding the right supplier and integrating with it, the corresponding time and cost should be higher. As we have already made clear, the time and cost required for the daily collaboration between Dell and a supplier is actually only 3 time units and 1140 money units (for processes 2.5, 2.6 and 2.7). Similarly, the daily collaboration between myCompany and an existing supplier is represented by processes 2.4, 2.5 and 2.6, and hence requires 14 time units and 1140 money units.
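For reference, the myCompany figure follows directly from the per-process assumptions above: order placement, delivery and payment give 2 + 10 + 2 = 14 time units and 100 + 1000 + 40 = 1140 money units.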

[Figure: bar chart of duration (in time units) for a new and an existing partner, Dell vs. myCompany]
Figure 62: Comparison of simulation results of time between Dell's "Buy standard item to order" and myCompany's "Buy standard item to stock"

[Figure: bar chart of cost (in money units) for a new and an existing partner, Dell vs. myCompany]
Figure 63: Comparison of simulation results of cost between Dell's "Buy standard item to order" and myCompany's "Buy standard item to stock"

The conclusion from this experiment is that Dell spends more time and money in order to establish a close relationship with its suppliers, but gains an advantage of speed in their "daily" collaboration, which turns out to be a strong point in the long run. This conclusion agrees with Dell's strategic choice to work with only a few, "elite" suppliers, with whom virtual integration is supported.

Dell’s “Sell directly to large business and public sector customers” vs.

myCompany’s “Sell via intermediary to business customers”

Dell's strategic choice of direct sales contrasts with traditional PC companies' use of intermediaries for selling. This experiment aims to highlight this strategic and operational difference and to provide results about the time and cost involved.

So, we will assume that myCompany sells indirectly to the customer, that is, via a distributor or some other third party. This means that the contact with the customer, as well as the sales procedure, is driven mainly by the third party and only to some extent by the manufacturer (for big clients). The "Sell via distributor" BPM of myCompany can be seen below, and it is based on the MIT Process Handbook's case of Compaq. Note that by "customer" we mean the final customer (here a business customer), and since the sales procedure is driven by the third party, the related time and cost are high (as there are always two steps involved: one between myCompany and the intermediary, and one between the intermediary and the customer).

Figure 64: myCompany’s BPM for “Sell via intermediary to business customers”

This BPM may seem very similar to Dell's corresponding BPM, shown in Figure 58 (except that the identification of corresponding configurations is not explicitly represented here, as it is a much simpler procedure than for Dell). However, apart from the junctions' specification, there are important differences in the details of the processes' specification, especially in duration and cost. We will not define each process of Figure 64 for myCompany here (the detailed description can be found in Appendix D), but we will explain what duration and cost are assigned to each one.

As we have already mentioned, many of the processes of the above BPM are implemented in two steps, one connecting the computer company with the intermediary and one connecting the intermediary with the business customer. Such processes are 4.4, 4.5, 4.6 and 4.7. So, an order has to be placed first with the intermediary, which then passes it on to the computer company; moreover, the ordering procedure from the customer to the intermediary is usually not electronic, leading to higher cost and time (especially if we take into account the errors that may arise throughout the procedure and the difficulty for the customer to manage his order), hence 11 time units and 140 money units. The payment receipt is also split into two steps, leading to a duration of 2 time units (the cost is assumed to be 40 money units). We assume that order delivery takes 10 time units for myCompany, as traditional companies do not place as much importance on delivery speed and responsiveness as Dell does; the corresponding cost is, however, lower (150 money units), as slower delivery is usually also cheaper. It is also worth mentioning that, since traditional companies tend to keep stock of finished products, there is no delay in the triggering of 4.6, unlike the case of Dell. Process 4.7 is much more complex for a traditional company, as managing customer relationships is driven by the third party and only partly by the PC company; also, the non-electronic implementation of CRM (customer relationship management) raises time and cost to 7 time units and 700 money units (much higher than Dell's 2 time units and 300 money units). On the other hand, informing a new customer about products (process 4.3) is assumed to be shorter and cheaper for myCompany, as it involves neither customizing the information medium (for myCompany this means catalogues and meetings with the customer) nor customizing the products themselves. So, we assign process 4.3 a duration of 7 time units and a cost of 250 money units. Process 4.1 is fairly similar for myCompany and Dell (with the difference that it is initiated by myCompany's third party), and hence the assigned time and cost are the same (25 time units and 750 money units). Last, the identification of the potential customer's needs incorporates the identification of corresponding products for myCompany, and hence has a duration of 10 time units (8+2 time units) and a cost of 150 money units (100+50 money units).

Now that we have specified the processes for myCompany’s “Sell via intermediary

to business customer”, we can move on to the simulation of the BPM (the needed

events’ list and initial world state description can be found in Appendix D). So,

according to the simulation results, the total execution time is 65 time units and the

total involved cost is 2180 money units. The more detailed simulation results can be

seen below:

Base case hit! The BPM has finished execution at time 65
Results: The junctions executed are:
  [junction(finish_and,[p4_6,p4_7],[]), junction(link,[p4_5],[p4_6]),
   junction(link,[p4_4],[p4_5]), junction(link,[p4_3],[p4_4]),
   junction(link,[p4_2],[p4_3]), junction(and_split,[p4_1],[p4_2,p4_7]),
   junction(start,[],[p4_1])]
The processes executed are:
  [p4_6,p4_5,p4_4,p4_3,p4_7,p4_2,p4_1]
with finish times:
  [[p4_6,65],[p4_5,55],[p4_4,53],[p4_3,42],[p4_7,32],[p4_2,35],[p4_1,25]]
and with total cost: 2180

Table 11: Simulation results of myCompany’s “Sell via intermediary to business

customers”

If we compare the above results with the corresponding ones for Dell, we will see that selling via an intermediary costs a traditional company more in time and money than it costs Dell to sell directly through its website. A graphical representation of the comparison can be seen in the figures below. This conclusion seems logical, as selling directly means bypassing the intermediaries and hence saving the time it takes to coordinate and collaborate with them. Also, it is widely accepted that intermediaries add extra cost to the supply chain (as they need to make some profit as well), a fact that is reflected in the above result. So, given that our assumptions about process duration and cost are reasonable, selling directly yields a faster and cheaper selling procedure.


[Figure: bar chart of duration (in time units) for a new and an existing customer, Dell vs. myCompany]
Figure 65: Comparison of simulation results of time between Dell's "Sell directly to large business and corporate customers" and myCompany's "Sell via intermediary to business customers"

[Figure: bar chart of cost (in money units) for a new and an existing customer, Dell vs. myCompany]
Figure 66: Comparison of simulation results of cost between Dell's "Sell directly to large business and corporate customers" and myCompany's "Sell via intermediary to business customers"

It is also interesting to see the corresponding results for the more frequent collaboration between myCompany and an existing business customer (via the intermediary). So, we will assume that an already existing customer of myCompany wishes to order once again, and we will check the related time and cost for such a transaction (processes 4.5, 4.6 and 4.7). Note that process 4.3 is also part of the "daily" collaboration between myCompany and the customer, as informing already existing customers is an important aspect of CRM, but it now requires slightly less time and money, as the communication media and codes have already been established (thus a duration of 4 time units and a cost of 150 money units). So, this gives us a "daily collaboration" that requires 27 time units and 480 money units, in contrast to Dell's corresponding need for 15 time units and 330 money units; note here that this difference is proportionally much higher than the one mentioned above for the case of a new customer (this is more obvious in the relevant figures above). Therefore, it is clear that direct sales provide a big advantage in terms of time and cost in the long run.
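These totals appear to follow directly from the per-step figures assumed earlier: informing an existing customer, ordering, delivery and payment receipt give 4 + 11 + 10 + 2 = 27 time units and 150 + 140 + 150 + 40 = 480 money units.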

The conclusion from this experiment is that Dell spends less time and money for its

direct sales (for both new and existing customers) than a traditional computer

company does for its sales via intermediaries.

5.3 Discussion and conclusions

This section aims to further clarify issues related to the conducted experiments. Hence, we will first describe the aim of each experiment, then provide the results graphically and in as aggregated a form as possible, and sum up with a discussion of some interesting related points.

The aim of the first experiment is to check whether Dell's two BPMs (for process 2 and process 4.2) can be improved in terms of time. For each process model we have picked one or more combinations of sequenced processes and turned them into parallel ones; we then checked the trigger conditions, preconditions and actions of these processes to see whether such a transformation is legal, and we calculated the total execution cost (manually, but also with the help of our workflow engine). The experiments' result for both processes is that "the given BPMs cannot be improved any further".

The second experiment aims to compare Dell's SC strategies with those of a traditional computer company. So, we have invented an example company, named myCompany, which represents a traditional PC company. We then created BPMs for myCompany that correspond to Dell's processes 2 and 4.2, and we simulated their execution. Note here that, as in the Dell case, we have assumed the required time and cost values based on our knowledge of the computer industry. The total time and cost results can be seen in the following table and figures:

Results of experiment 2 for Dell and myCompany (new partner / existing partner)

                                          Dell           myCompany
Buy standard item to order/stock   Time   71 / 3         43 / 14
                                   Cost   5090 / 1140    3440 / 1140
Sell to corporate customer         Time   60 / 15        65 / 27
                                   Cost   1880 / 330     2180 / 480

Table 12: Results of experiment 2 for Dell and myCompany

[Figure: bar chart of duration (in time units) for new and existing customers under process 2 and process 4, Dell vs. myCompany]
Figure 67: Comparison of simulation results of time between Dell's and myCompany's processes

[Figure: bar chart of cost (in money units) for new and existing customers under process 2 and process 4, Dell vs. myCompany]
Figure 68: Comparison of simulation results of cost between Dell's and myCompany's processes

We should make clear that the original BPMs deal with the case of a new supplier or a new customer. However, we believe that the "daily" collaboration between a company and its already existing supply chain partners is also important, and hence we have distinguished this case and estimated the corresponding time and cost in an informal (manual) way. The results for this case are shown after the slash character "/" in the above table.

As the figures above show, Dell's choice of buying to order and virtually integrating with its suppliers results in higher cost and time values in the case of a new supplier, but it leads to shorter times than a traditional PC company in the long run, that is, once the relationship with the supplier has been established. In addition, Dell's strategic choice of direct sales guarantees a faster and cheaper sales procedure for both new and already existing customers, as Dell virtually integrates with the customer by providing a customized online sales, support and communication channel, the Premier Page.

We should point out here that, throughout the experiments, our workflow engine has been used as a tool that supports analysis of different strategies; in other words, we have used the workflow engine to calculate the duration and cost of each BPM, and we have then based our argumentation on these results. It is also important to stress that the simulation and comparison results are not a means of arguing whether one business model is better than the other, as this is not our object of discourse. On the contrary, the above results aim to distinguish the different supply chain strategies and show how the related time and cost differ. After all, if we wanted to argue about the correctness and/or the business advantages of each SC strategy, we would have to take several other factors into account, such as quality, impact on demand, etc.


Chapter 6

Evaluation

Now that we have completed the planned work, that is, the development of an executable business process model that illustrates Dell's supply chain strategies, we can proceed to its evaluation. The framework that we adopt for the evaluation is explained in section 6.1, while the evaluation results follow in sections 6.2 and 6.3.

6.1 Evaluation Framework

As we have made clear in Chapter 1, our work consists mainly of two parts: the development of a business process model that provides an insight into Dell's supply chain strategies, and the creation of a workflow engine that is used to simulate the developed BPM. We will adopt a theoretical evaluation approach (with some points also being evaluated empirically) for each part of the project, namely:
- Theoretical evaluation of Dell's business process model
- Theoretical evaluation of the developed workflow engine, based on its use for the conducted experiments

6.2 Evaluation of developed BPM

We will now review the developed business process model for Dell (the enriched version, which is the final one), whose objective was to illustrate Dell's supply chain strategies, and evaluate it along the following dimensions and corresponding questions:
- soundness: is the developed BPM correct according to the FBPML specification?
- realism: does the developed BPM correspond to Dell's business and SCM reality?
- completeness: does the developed BPM cover all basic strategic choices of Dell?
- level of detail: is it abstract enough to provide an overall view of Dell's SC strategies, and detailed enough to provide interesting information?

6.2.1 Soundness evaluation

We will now check whether the developed business process model, as presented in section 3.3.3, is sound, that is, whether it has been designed correctly with respect to the FBPML specification, as addressed in [9] and [7], and whether it behaves correctly and as expected during simulation. First, after reviewing all figures of section 3.3.3 we have concluded that there is no error in the use of the visual notation of the FBPML process language. In other words, the FBPML notation used (activity and primitive activity, precedence link, and start-, finish-, and- and or-junctions) and the activity decomposition conform to the FBPML specification. Second, when simulating processes 2 and 4.2 with the developed workflow engine, the model behaved as expected: processes were executed in the right order and and-junctions behaved according to their specification. One could argue that correct behaviour can only be produced by a correct workflow engine; our workflow engine is, however, argued to be sound in section 6.3.1. Therefore, we consider Dell's BPM to be sound.

6.2.2 Realism evaluation

After checking that the developed BPM is correct in its syntax, we have to check whether it is also correct in its semantics and in its representation of Dell's reality. First, the semantics seem to be correct, as the process order and the junctions used "make sense" throughout Dell's BPM and practice (e.g. in Figure 25, it would make no sense if getting inventory from the supplier's plant came before placing an order). Note here that the correctness of the semantics has also been checked with the help of another MSc student with a combined background in Business and Computer Science, Dimitrios Mavroeidis, so that the objectivity of the evaluation procedure is guaranteed. Second, Dell's BPM has been developed based on the Dell-related literature, as presented in 2.2, and on the MIT Process Handbook, which includes a study of the Dell case for the relevant BPM provided. We should also mention that, after completing the development of Dell's BPM, we cross-checked it against the relevant literature. So, we can conclude from the above that the developed BPM corresponds to Dell's actual business model and supply chain strategies.

6.2.3 Completeness evaluation

It is very important to evaluate the developed business process model for completeness, that is, to see whether it covers all the important strategic decisions of Dell about supply chain management. In order to facilitate the evaluation procedure we have created a list of all interesting topics about Dell's supply chain strategies, as these have been addressed in 2.2, and have checked whether each topic is covered by our BPM, and specifically in which process and figure.

Key points of Dell's SC strategies                                   Covered by the BPM?   Where?
1  Direct sales                                                      yes                   Process 4 (Figures 28, 29, 33)
2  Customer segmentation                                             yes                   Process 4 (Figure 28)
3  Virtual integration with customer                                 yes                   Process 4.2.4 (Figure 33), Process 4.2.8 (Figures 35, 36, 37)
4  Assemble computers (buy standard PC components instead of
   manufacturing all the needed parts)                               yes (implied)         Process 3 (Figure 27)
5  Build-to-order and JIT assembly (no inventory of finished
   products)                                                         yes (implied)         Process 3 (Figure 27)
6  Virtual integration with supplier                                 yes                   Process 2, Process 2.5, Process 2.8
7  No/low inventory of standard items (small & frequent
   inventory delivery)                                               no                    -

Table 13: Completeness Evaluation Checklist for Dell's BPM

The decision on whether an SCM topic is covered by our BPM is based on the answer to the following question: "Could a reader who has some knowledge of SCM but is not familiar with Dell's SC strategies conclude each of the above-mentioned SCM topics after studying our BPM for Dell?" The above answers were the ones we got from a PhD student with such a background, Ioanna Manataki. Most of them are clear and positive, so we will only comment on the negative and "(implied)" answers. The latter refer to the SC strategies shown in brackets in the table, namely buying PC components instead of manufacturing them (key point 4) and not keeping inventory of finished products (key point 5). The answers for these questions were of the type "I don't know; it is not clear from the BPM". Even though we would prefer a positive answer, it is understandable that the reader cannot be sure, as the absence of a process for inventory management could either be on purpose (as it actually is in our case) or simply an omission of the developed BPM. Similarly, the fact that there is no process like "manufacture standard item" does not make clear to the reader that Dell only buys PC components and does not manufacture them (key point 4). Last, the fact that the BPM does not cover the point that no/low inventory of standard items is kept (key point 7) is considered a weakness of the developed BPM. However, we believe that Dell's BPM is in general complete concerning SC strategies.

6.2.4 Evaluation of the level of detail

Another important aspect that "accompanies" the completeness criterion is whether the chosen level of detail in our BPM is satisfactory. So, we must answer the question of whether the developed BPM is abstract and general enough to give an overview of Dell's supply chain strategies, thus having a strategic character, while at the same time providing interesting information and details of the relevant operations. Finding a good balance between the two approaches was a challenge for us. We believe that the strategic approach has been covered successfully, as our BPM covers most business and SC strategies in an abstract way. The detailed, operational alternative is adopted in those aspects that we considered interesting from an SCM perspective; however, even in these cases it was only partly adopted, because of the lack of relevant information in the literature, and this could be regarded as a weakness of our BPM. So, if we had to classify the adopted level of detail into one of the categories shown in the following figure, where 1 denotes a high-level, strategic approach and 4 a more detailed, operational one, we would assign it class 2.

[Figure: four-point scale from 1 (strategic) to 4 (operational), with Dell's BPM marked at level 2]
Figure 69: Level of detail of Dell's BPM

6.3 Evaluation of developed workflow engine

We will now evaluate the workflow engine that we have developed for BPM simulation and for the related time and cost calculation. We will follow a theoretical evaluation of the workflow engine; hence we will test the aspects of soundness, completeness, coverage and ease of use, and we will answer each evaluation criterion question based on the behaviour of the workflow engine when used for the experiments of Chapter 5. Let us remind the reader that the conducted experiments involve Dell's processes 2 and 4.2, and include sequenced and parallel processes, with and-split-and-joint process branches.

6.3.1 Soundness evaluation

The use of the workflow engine for the experiments of Chapter 5 has been successful, as the system has behaved correctly and as expected. As the results have shown, the processes were executed in the right order, respecting the specified trigger conditions and preconditions. Also, the junctions' behaviour was correct, meaning that their specification in FBPML has been closely followed. The world state (entities and data) has been monitored and updated correctly, and the workflow state has been carefully controlled. Moreover, the total time and cost of the BPM execution are correctly measured. Therefore, we can say that the developed workflow engine is sound.


6.3.2 Completeness evaluation

We will now evaluate the completeness of the workflow engine, that is, we will check whether it covers all of the necessary concepts and functionalities. To clarify the evaluation procedure, we provide a list of requirements against which our system is evaluated.

Requirement category     Requirement                                                Covered?
Process representation   Trigger condition representation                           √
                         Precondition representation                                √
                         Action representation                                      √
                         Role representation                                        x
                         World representation                                       √
                         Event representation                                       √
Model representation     Junction representation (start, finish, and-split,
                         and-joint, or-split, or-joint)                             √
                         Precedence link representation                             √
                         Synchronisation-bar representation                         x
BPM execution            Process execution                                          √
                         Junction execution                                         ~
                         Action execution and update of the world state             √
Business context         Measure execution time and cost                            √
                         Support analysis about different strategies                √

Table 14: Completeness Evaluation Checklist for the developed workflow engine

As one can see, our workflow engine covers most of the above concepts and

functionalities. It does not cover the representation of roles and synchronisation bars,

which are a part of the formal FBPML specification, in order to keep the workflow

engine development procedure simpler and manageable within the project time. For

the same reasons, junction execution is partly supported: only junctions that are

followed by a process can be executed, and not junctions followed by another

junction (except for the start- and finish-junctions). However, as the above table

shows, our workflow engine seems to be adequately complete, at least as far as

our general and specific requirements are concerned, as they are illustrated in Figure

41.


6.3.3 Coverage evaluation

We will now check whether our workflow engine covers all possible scenarios. Based on the conducted experiments, we can say that it can deal with the average BPM and simulate it successfully. However, it cannot deal with some cases that are excluded by our assumptions, as addressed in Figure 41. So, the workflow engine does not provide loop handling, and it does not deal with the case of a wrong BPM through validation or verification (it simply does not execute it, without suggesting what the error might be). However, as we have shown in section 4.4 and in Appendix A, a BPM with a loop can be transformed so that it can be simulated by our workflow engine, and there is also some feedback to the user in the case of a wrong BPM. So, we do not regard these uncovered scenarios as overly serious shortcomings of our workflow engine.

6.3.4 Ease of use

We regard the workflow engine as relatively easy to use, even though no graphical interface is provided. As we have already mentioned in section 4.4, the first step is the definition of the processes, junctions, initial world state and event list; this is considered the difficult part of using the workflow engine, as the user needs to define all of these in Prolog. For a user familiar with first-order logic or Prolog the difficulty is low; otherwise it is greater, but we believe that if the user studies the examples in our thesis and the demo provided in 11, he will not face big problems. The execution step is trivial for both experienced and non-experienced users, as it involves loading the relevant files and typing run_bpm.
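To give a flavour of the execution step, a hypothetical Prolog session might look as follows; the file names are purely illustrative, and only the run_bpm entry point is taken from the engine itself:

% load the workflow engine and the model, world-state and event definitions
?- [workflow_engine, dell_process2, dell_process2_world].

% simulate the loaded BPM and print the executed junctions, processes,
% finish times and total cost
?- run_bpm.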


Chapter 7

Conclusions and Future Work

7.1 Overview

This project's motivation comes from the increasing importance of Supply Chain Management and the interest in the successful case of Dell on the one hand, and the need for a lower-level analysis of it on the other hand. We have tackled this problem by developing a business process model that illustrates Dell's supply chain strategies, and which is strategic and business-goal-oriented. In order to make this BPM executable, we have designed and implemented a workflow engine that simulates BPM execution and calculates the related total time and cost. Furthermore, we have simulated two processes of the developed BPM and conducted experiments on their improvement and on their comparison with the corresponding processes of a traditional computer company.

7.2 Conclusions

In general the project was completed successfully. The developed BPM for Dell was evaluated as correct, and it was found to cover most of the interesting points of Dell's supply chain strategies and to correspond to reality. We also believe that it manages to provide an insight into Dell's supply chain strategies while avoiding an overly abstract, high-level approach, even though in some cases greater depth of detail would be helpful. As far as the workflow engine is concerned, it serves its mission and objectives, as addressed in 4.1.1, and it provides us with a correct and accurate BPM simulation, given that the relevant assumptions are respected.

The experiments of Chapter 5 involved the simulation of two SCM-relevant processes of Dell's BPM with our workflow engine, under assumed execution times and costs for each process part. These assumed time and cost values were based on our knowledge of general practices in the business world and on the literature that covers Dell's SC strategies, and we regard them as assumptions of medium strength. Therefore, the conclusions of experiment 2, which involves the comparison of Dell with a traditional computer company, are based on the assumed time and cost values, and hence they are not regarded as completely reliable. We would rather suggest that one look upon the second experiment as a good framework for the comparison of different SC strategies. On the other hand, the results of the first experiment are reliable (i.e. that we cannot improve the two processes by transforming a sequenced part of them into a parallel one).

7.3 Future Work

The future work of this project is focused on two main directions: the improvement of Dell's BPM and the enrichment of the developed workflow engine. As far as the enhancement of Dell's BPM is concerned, the following topics are interesting and meaningful:
- Further decompose the processes of the developed BPM, so that more operational details are covered. This way the reader would be able to move from the strategic view of the upper level to the operational one within the very same BPM.
- Find and replace the assumed execution time and cost values with the actual ones.
- Extend the BPM to the whole supply chain, thus showing the strategies and operations throughout the whole supply chain, from a supplier to the final customer, instead of focusing only on Dell.
Our suggestions for the improvement of the workflow engine include the following:
- Model and execute the case of a junction followed by another junction.
- Provide validation and verification of the BPM to be executed, hence check its correctness and suggest what the error might be, if any.
- Provide loop handling.
- Remove the need for prior knowledge of event occurrences; instead, the user would feed external events to the workflow engine in real time, while internal events would be produced by the workflow engine itself.
- Adopt a more SCM-friendly approach, thus modelling and calculating distance, inventory volume, inventory velocity, etc.
- Include a graphical interface for the definition of the BPM to be simulated, as well as for the simulation procedure.


Bibliography

[1] Adamides, D. E., Karacapilidis, Nikos (2006). "A knowledge centred framework for collaborative business process modelling." Business Process Management Journal 12(5): 557-575.
[2] Beamon, B. M. (1998). "Supply chain design and analysis: Models and methods." International Journal of Production Economics 55(3): 281-294.
[3] Benner, M. J., Tushman, Michael L. (2003). "Exploitation, Exploration, and Process Management: The Productivity Dilemma Revisited." Academy of Management Review 28(2): 238-256.
[4] Bowersox, D. J., Closs, David J., Cooper, M. Bixby (2002). Supply Chain Logistics Management. New York, McGraw-Hill/Irwin.
[5] Checkland, P. and J. Scholes (1990). Soft Systems Methodology in Action. Chichester, Wiley.
[6] Chen-Burger, Y.-H. (2001). "Knowledge sharing and inconsistency checking on multiple enterprise models". International Joint Conference on Artificial Intelligence, Seattle, Washington, USA.
[7] Chen-Burger, Y.-H., A. Tate, et al. (2002). "Enterprise Modelling: A Declarative Approach for FBPML". European Conference of Artificial Intelligence, Lyon, France.
[8] Chen-Burger, Y.-H., Robertson, D. (2005). Automating Business Modelling: A Guide to Using Logic to Represent Informal Methods and Support Reasoning. Springer Verlag.
[9] Chen-Burger, Y.-H., Stader, J. (2003). "Formal Support for Adaptive Workflow Systems in a Distributed Environment". Workflow Handbook 2003. Layna Fischer, Workflow Management Coalition, Future Strategies Inc.
[10] Chopra, S., Meindl, P. (2003). Supply Chain Management: Strategy, Planning and Operation, Prentice Hall.
[11] Chopra, S., Van Mieghem, J. A. (2000). "Which e-business is Right for Your Supply Chain?" Supply Chain Management Review 4(3).
[12] Cohen, S., Roussel, J. (2005). Strategic Supply Chain Management. McGraw-Hill.
[13] Cutting-Decelle, A., Das, B., Young, R., Rahimifard, S., Anumba, C., Bouchlaghem, N. (2006). "Building Supply Chain Communication Systems: A Review of Methods and Techniques". Data Science Journal, 5: 1-23.
[14] Davenport, T. H. (1993). "Process Innovation: Re-engineering Work through Information Technology". Harvard Business School Press, Boston.
[15] Dell, M., Fredman, C. (2006). Direct from Dell: Strategies that Revolutionized an Industry, Collins.
[16] Dong, Y., Carter, C. R., Dresner, M. E. (2001). "JIT purchasing and performance: an exploratory analysis of buyer and supplier perspectives". Journal of Operations Management, 4(19): 471-483.
[17] Fugate, B. S., Mentzer, J. T. (2004). "Dell's Supply Chain DNA". Supply Chain Management Review, p. 20-24.
[18] Gattorna, J. (2006). "Supply Chains are the Business". Supply Chain Management Review, September 2006.
[19] Gunasekaran, A., Ngai, E.W.T. (2005). "Build-to-order Supply Chain Management: a Literature Review and Framework for development". Journal of Operations Management, 23: 423-451.
[20] Han, D. and S. Park (2006). "Exception-Based Dynamic Service Coordination Framework for Web Services". Workflow Management Handbook, Layna Fischer: 81-96.
[21] Harrison, A. (2003). Lecture Notes of Supply Chain Management Course, "Competing Through Supply Chains", Department of Management Science and Technology, Athens University of Economics and Business.
[22] Jaeger, T., A. Prakash, et al. (1994). "A framework for automatic improvement of workflows to meet performance goals". Proceedings, Sixth International Conference on Tools with Artificial Intelligence.
[23] Kalpic, B. and P. Bernus (2002). "Business process modelling in industry - the powerful tool in enterprise management." Computers in Industry 47(3): 299-318.
[24] Kapuscinski, R., R. Q. Zhang, et al. (2004). "Inventory Decisions in Dell's Supply Chain." Interfaces 34(3): 191-205.
[25] Kavakli, V. and P. Loucopoulos (1999). "Goal-driven business process analysis application in electricity deregulation." Information Systems 24(3): 187-207.
[26] Koehn, N. F., Michael Dell (2001). "Winning on the Demand Side of the Information Revolution", Harvard Business School Case 801 363.
[27] Kraemer, K., Dedrick, J. (2001). "Dell Computer: Using E-commerce to Support the Virtual Company", Center for Research on Information Technology and Organizations, Globalization of I.T., (June 1, 2001), paper 236, www.crito.uci.edu/git/publications/pdf/dell_ecom_case_6-13-01.pdf
[28] Kraemer, K. L., Dedrick, J., Yamashiro, S. (2000). "Dell Computer: Refining and Extending the Business Model with IT". The Information Society 16: 5-21.
[29] Lawton, T.C., Michaels, K.P. (2001). "Advancing to the virtual value chain: learning from the Dell model". Irish Journal of Management 22: 91-112, ISSN: 1649-248X.
[30] Lee, H. L. (2000). "Creating Value through Supply Chain Integration". Supply Chain Management Review, September 2000.
[31] Lee, H. L., Padmanabhan, V., Whang, S. (1997). "The Bullwhip Effect in Supply Chains". Sloan Management Review 38(3): 93-102.
[32] Lindsay, A., D. Downs, et al. (2003). "Business processes - attempts to find a definition." Information and Software Technology 45(15): 1015-1019.
[33] Luo, W. and Y. A. Tung (1999). "A framework for selecting business process modeling methods." Industrial Management & Data Systems 99(7): 312-319.
[34] Magretta, J. (1998). "The power of virtual integration: An interview with Dell Computer's Michael Dell", Harvard Business Review, 76(2): 73-84.
[35] Malone, T. W., K. Crowston, et al. (2003). Organizing Business Knowledge: The MIT Process Handbook, The MIT Press.
[36] MIT Process Handbook, http://process.mit.edu/Info/CaseLinks.asp
[37] Mentzas, G., C. Halaris, et al. (2001). "Modelling business processes with workflow systems: an evaluation of alternative approaches." International Journal of Information Management 21(2): 123-135.
[38] Mentzer, J. T., W. DeWitt, et al. (2001). "Defining Supply Chain Management." Journal of Business Logistics 22(2): 1-25.
[39] Narasimhan, R., Mahapatra, S. (2004). "Decision models in global supply chain management". Industrial Marketing Management 33(1): 21-27.
[40] Nurcan, S., A. Etien, et al. (2005). "A strategy driven business process modelling approach." Business Process Management Journal 11(6): 628-649.
[41] Pearlson, K., Raymond, Y. (1999). Dell Computer Corporation: A Zero-Time Organization, flowallience.com, 1999.
[42] Porter, M. (1980). Competitive Strategy. New York, Free Press.
[43] Reijers, H. A. and W. M. P. van der Aalst (2005). "The effectiveness of workflow management systems: Predictions and lessons learned." International Journal of Information Management 25(5): 458-472.
[44] Rietze, S. M. (2004). Case Studies of Postponement in the Supply Chain. Department of Civil and Environmental Engineering, Massachusetts Institute of Technology. MSc in Transportation.
[45] Rivkin, J. W., Giorgi, S. (2004). "Matching Dell (B): 1998-2003", Harvard Business School Cases, January 29, 2004, 9-704-476.
[46] Aguilar-Savén, R. S. (2004). "Business process modelling: Review and framework", International Journal of Production Economics, 90(2): 129-149.
[47] Shi, M., G. Yang, et al. (1998). "Workflow management systems: a survey". Communication Technology Proceedings, ICCT '98, 1998 International Conference on Communication Technology.
[48] Sinha, A. (2007). "Dell: From a Low-Cost PC Maker to an Innovative Company", ecch case study, 307-023-1.
[49] Soffer, P. and Y. Wand (2005). "On the notion of soft-goals in business process modeling." Business Process Management Journal 11: 663-679.
[50] Thomas, D. J. and P. M. Griffin (1996). "Coordinated supply chain management." European Journal of Operational Research 94(1): 1-15.
[51] Vedpuriswar, A. V. (2004). "Business Model Innovation at Dell". ICFAI Center for Management Research, BSTA058.
[52] Workflow Management Coalition (1994). "The Workflow Reference Model".
[53] Zografos, K. G. (2003). Lecture Notes of Supply Chain Management Course, Lecture 1, Department of Management Science and Technology, Athens University of Economics and Business.
[54] http://www.dell.com/
[55] http://www.dellcommunity.com/
[56] http://www.dellideastorm.com/
[57] http://direct2dell.com/one2one/default.aspx
[58] www.fortune.com
[59] http://process.mit.edu/Activity.asp?ID=990929112221UF15790
[60] http://process.mit.edu/Default.asp


Appendix A

Workflow Engine Decisions

Why backward chaining may cause problems to process start time

estimation

Let’s suppose that we have the four processes of the figure below, p1, p2, p3 and p4,

with durations 7, 2, 8 and 2 respectively, and where all processes are triggered at

time 0 except for p2, which is triggered at time 7. Supposing that p1, p2 and p3 have

no preconditions but p4 has a precondition which is a postcondition (action) of p1,

then the process execution times are the ones shown in the figures below.

Figure 70: Example BPM to illustrate that backward chaining is inappropriate for start time

estimation

Figure 71: Execution times of processes of Figure 70

[Figure content: the recoverable execution intervals are p1: 5-12, p2: 7-9, p3: 5-13, p4: 12-14]


However, if we tried to calculate the start time of p4 through backward chaining, then it would either be set to 9 (because the or-joint junction would be found to be reached at timepoint 9) instead of 12, or it would not execute at all, for the following reason: a simple way to calculate a process's start time is to pick the maximum of its trigger time and the timepoint at which the junction preceding it is reached; according to this, p4 should start execution at timepoint 9. However, before starting execution we should check whether its preconditions are satisfied at this timepoint, which they are not in the above case, and so p4 would not execute. In order to find the actual start time of p4, the algorithm would have to go back in the workflow state, before the or-joint junction is reached, and check its preceding processes. However, this makes the whole procedure much more complicated, as it would have to take many different factors into account apart from the last junction.

Why choose sophisticated vs. simplistic treatment of process waiting time

Let us suppose that we have two processes, p1 and p2, which have no preconditions and have durations 2 and 1, respectively. If p1 is triggered at time 0 but p2 is triggered at time 4 (probably because of some external event), then the actual process and junction execution times for an and-split-and-joint and an or-split-or-joint BPM are the ones shown in the BPMs of the following figure. So, treating the waiting time of p2 in a sophisticated and precise way would give us the results below.

[Figure: two example BPMs (and-split-and-joint and or-split-or-joint), each with p1 executing over 0-2 and p2 over 4-5]

Figure 72: Example BPMs to illustrate sophisticated treatment of process waiting time


However, if we wanted to simplify the above case, we could assume that there is no waiting time for p2. This assumption could be based on the fact that we examine business strategies, and hence the normal case, where no errors occur. In this case, both p1 and p2 start execution at time 0, and the and-joint junction is reached at the maximum of their durations, thus at time 2, while the or-joint junction is reached at the minimum duration of the two, thus at time 1. Note that the time difference between this case, under the above-mentioned assumption, and the actual case is quite big.

On the other hand, the assumption on which we based this simplistic approach does not seem very plausible for real-life business: an organization cannot control everything that affects its operations, as external events also take place (e.g. some activity from the supplier or the customer side). So, the normal case incorporates waiting time as well, and it should therefore be treated in a more sophisticated way.
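Put differently, under the sophisticated treatment the reach time of a joint junction is derived from the processes’ actual finish times (waiting included), not from their durations alone. A minimal sketch, with predicate names of our own and assuming min_list/2 from library(lists) alongside the max_list/2 the engine already uses:

% e.g. for finish times [2,5]: the and-joint is reached at 5, the or-joint at 2.
and_joint_reached(FinishTimes, T) :- max_list(FinishTimes, T).
or_joint_reached(FinishTimes, T) :- min_list(FinishTimes, T).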

Why explicit time measurement guarantees a more precise estimation of process execution start time

Let’s suppose we have once again the BPM of the first example of this appendix, with processes p1, p2, p3 and p4. A forward-chaining workflow engine algorithm that does not treat time in an explicit way would estimate the start time of p4 in the following way: after the or-joint junction is reached at timepoint 9, p4 becomes eligible to execute, and since it has already been triggered at timepoint 0, its preconditions are checked and found not to be satisfied. So, the algorithm should go back to the processes before the or-joint junction, check the finish times of the other two that have not finished execution at time 9 (p1, p3) and pick the one with the smallest finish time (here p1). Then, the current time would be set to 12, at which p4 would be checked for execution. Even though such an algorithm looks correct, things may get complicated when we have several such cases in a BPM and the algorithm has to “move forwards and backwards” in the BPM. What actually makes things this complex is the fact that the actions of a process change the current world state, and it may be difficult to monitor the correct ordering and schedule of actions, and thus to control the current state of the world. On the other hand, treating time in an explicit way provides us with complete control over the world state and the workflow state, as at each timepoint we check which actions are to be executed (and thus update the current world state), and which junctions and processes have been and are to be executed (precise workflow state). So, in the above example the algorithm would do the following: when timepoint 9 is reached, p2 would finish execution, the or-joint junction would be reached, and p4 would become eligible for execution but not execute, because of its unsatisfied precondition. Then time would be updated and set to 10, the same actions-junctions-processes checking procedure would take place, and so on, until we reach timepoint 12, where the actions of p1 are executed and hence the precondition of p4 is satisfied. This example shows that measuring time in an explicit way is a more “natural” approach for a workflow engine, as it is closer to reality, thus ensuring correct start time estimation and world state control.
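The loop the engine actually implements is execute_step/8, listed in Appendix B; the stripped-down sketch below (with simplified, hypothetical helper predicates, including a bpm_finished/1 termination test) only shows the explicit time-stepping idea:

% At every timepoint T: apply the actions that fall due, check which junctions
% are reached, check which pending processes may start, then move on to T+1.
step(T) :-
    bpm_finished(T), !.
step(T) :-
    apply_due_actions(T),
    check_junctions(T),
    start_eligible_processes(T),
    NewT is T + 1,
    step(NewT).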

Why prior knowledge of events’ occurrence is necessary for the workflow engine

Let’s suppose that we have the BPM of Figure 73, where the two processes, p1 and p2, have no preconditions and durations 2 and 1, respectively. Let’s also suppose that p1 is triggered at timepoint 0 and p2 at 4, but there is no prior knowledge about whether and when p1 and p2 are triggered. Then at timepoint 2 of the BPM execution p1 would finish execution, and since p2 would not yet have been triggered, the and-joint junction would be reached, thus signifying a BPM execution finish time of 2, which would be wrong (as the figure shows, the correct one is 5).

Figure 73: Example BPM to illustrate the need for prior knowledge of events’ occurrence (p1 executes at 0-2 and p2 at 4-5)

However, if we had prior knowledge that p2 will be triggered at 4, then the and-joint junction would wait for p2 (remember the definition of and-joint junctions), and hence be reached at the correct timepoint 5. So, having an event occurrence list before the BPM is loaded into the workflow engine for simulation is essential for a correct simulation.
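For instance, using the event_occ/3 format described in Appendix C (the event ID and name below are invented for this example), supplying the trigger of p2 up front lets the and-joint junction wait until timepoint 5:

% Known before the simulation starts: the event that triggers p2 occurs at 4.
event_occ(e1, triggerForP2, 4).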

Why the process members of a process branch are not completely independent

Let’s suppose that we have the BPM of the figure below, with processes p1, p2, p3 and p4.

Figure 74: Example BPM with a process branch

The preceding processes of a fan-in junction are defined as the last processes of each branch, hence in our example p3 and p4. So, in the case of an and-joint junction, as in our example, if p3 is triggered then it must be executed before the and-joint junction is reached. Bearing in mind that we have complete prior information about event occurrences, either internal or external, if, according to the event list, p3 is triggered, then it must be executed; hence the processes preceding it, p1 and p2, must also be executed. So, if p1 or p2 is not executed, then the and-joint junction cannot be reached, and therefore there is some dependence between processes p1, p2 and (the trigger condition of) p3. Since such information is complicated to incorporate into the workflow engine algorithm, and in order to avoid confusion, we expect the trigger condition of p3 to be relevant to the execution of p1 and p2 (e.g. the relevant event is a post-condition of p1 or p2), as in the illustration below.
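A hypothetical illustration of this convention, with invented names: the event that triggers p3 is placed in the event list only at a timepoint by which p1 and p2 can already have completed, so the trigger of p3 remains consistent with the execution of its preceding processes.

% p3 is the last process of its branch; its trigger event occurs at timepoint 6,
% after its predecessors can have finished.
event_occ(e1, branchReady, 6).
process(p3, lastInBranch, [exist(event_occ(branchReady))], [], [], 2, 10).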


Why conditions iii) and iv) are essential in order to check BPM execution completion

Condition iii) says that if we are waiting for some event that will trigger a process of which we already have a model instance, then the BPM cannot complete execution. This is made clear in example case 1 of the following figure, where the following holds: supposing that process p1 is triggered and executed from timepoint 0 until 2, the finish junction is reached at timepoint 2. But if process p2 is triggered at timepoint 4 and starts execution, then we have to wait for p2 to start and complete (at timepoint 5) in order to say that the whole BPM has finished execution; thus just hitting the finish junction is not enough.

Figure 75: Example BPMs (Example Case 1 and Example Case 2) to illustrate the need for conditions iii) and iv) of execution completion

Condition iv) says that a BPM cannot be considered to have completed execution if some post-process of a reached and-split junction has not been triggered or executed. Example case 2 illustrates the need for condition iv) in the following way: supposing that p2 is not triggered (or its preconditions do not hold), the and-split specification about post-processes is not satisfied, and hence the BPM execution is not successful.
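In the engine these two checks correspond to the findall/3 goals in the base case of execute_step/8 (Appendix B); condition iii) in isolation looks roughly like the sketch below (the wrapper name is ours, not the engine’s):

% Succeeds only if no modelled process is still due to be triggered at or
% after the current time T.
no_pending_triggers(ModelInstance, T) :-
    findall(P,
            ( member(P, ModelInstance),
              gets_triggered(P, TriggerT),
              TriggerT >= T ),
            []).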


How to transform a BPM that involves a loop in order to simulate it with our workflow engine

Even though our workflow engine does not support loop handling, transforming the BPM by creating new copies of the processes involved in the loop lets us deal with loops in an indirect way. The following figure illustrates how to transform such a BPM.

Figure 76: Example BPMs for transforming a BPM containing a loop (panels: “BPM with loop” and “Transformed BPM”)

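As a purely illustrative sketch (the process IDs are invented; the actual transformation is the one drawn in Figure 76), unrolling one pass of a loop over p2 amounts to adding a fresh copy of it to the junction specification instead of a backward arc:

% The loop body p2 is copied as p2_copy; p2_copy also needs its own
% process/7 fact with the same duration and cost as p2.
junction(start, [], [p1]).
junction(link, [p1], [p2]).
junction(link, [p2], [p2_copy]).
junction(finish, [p2_copy], []).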

Appendix B

Workflow Engine Code

% clear old database :-retractall(event_occ(_EventId, _EventName, _T)). :-retractall(entity_occ(_EntityName, _EntityId, _EntityAttribute)). :-retractall(data(_SubjectID, _Subject, _Attributes)). % initialize: load the process&junction specification and the % initial world state and event's list :- use_module(library(lists)). :- ['myWorld.pl']. :- ['myProcess.pl']. :- ['myJunctions.pl']. % ******** EXECUTION CODE ********************** % --------------------------------------------------- % run_bpm/0 %---------------------------------------------------- /* Predicate run_bpm: finds all junctions and processes of the model and fires BPM simulation for time 0 */ run_bpm:-

all_junctions(JunctionsList), all_processes(ModelProcessesList),

execute_step([], JunctionsList, [], [], ModelProcessesList, [], [], 0).

% --------------------------------------------------- % execute_step/8 %---------------------------------------------------- /* Predicate execute_step(+ActionsAgenda, +JunctionsPending, +PreviousJunctionsExecuted,+PreviousModelInstance,+ProcessPending, +PreviousProcessExecuted,+PreviousCompleteProcessAgenda,+T) executes a step that corresponds to time T, firing actions', processes' and junctions' execution */ % BPM halts execution if it has been running for a long time (here % arbitrarily set to 100) execute_step(_ActionsAgenda,_JunctionsPending, _PreviousJunctionsExecuted,_PreviousModelInstance, _ProcessPending, _PreviousProcessExecuted, _PreviousCompleteProcessAgenda, 100):-

nl, write('The BPM cannot finish execution, either because of some unsatisfied process precondition or because of a necessary process not being triggered.').


% Base case: BPM stops execution when the finish junction is reached % and there is no process pending for execution completion and no % reached process will be triggered and no and-split post process is % pending execute_step(_ActionsAgenda, _JunctionsPending, PreviousJunctionsExecuted, PreviousModelInstance, _ProcessPending, PreviousProcessExecuted, PreviousCompleteProcessAgenda, T):-

( member(junction(finish,_LastProcess,[]), PreviousJunctionsExecuted);

member(junction(finish_and, _LastProcess, []), PreviousJunctionsExecuted);

member(junction(finish_or, _LastProcess, []), PreviousJunctionsExecuted) ),

findall(P, (member([P,CompletionTime], PreviousCompleteProcessAgenda), CompletionTime >= T),[]), findall(NotYetTriggeredProcess,

(member(NotYetTriggeredProcess,PreviousModelInstance), gets_triggered(NotYetTriggeredProcess, TriggerT), TriggerT >= T), []),

findall(AndPostProcess, (member(AndPostProcess, PreviousModelInstance), find_AllAndPostPr(X), member(AndPostProcess, X), \+ member(AndPostProcess,PreviousProcessExecuted) ), []),

findall(Cost, (process(Pid, _PName, _Trigger, _Precond, _Action,

_Duration, Cost), member([Pid,_CompletionTime],PreviousCompleteProcessAgenda) ),CompletedCosts),

sum_list(CompletedCosts, TotalCost), nl, write('-------------------------------------- '), nl, write('Base case hit!'), nl,

write('The BPM has finished execution at time '), reduce_one(T, NewT), write(NewT), nl, nl, write('Results:'), nl, write('The junctions executed are: '), write(PreviousJunctionsExecuted), nl, write('The processes executed are: '), write(PreviousProcessExecuted), nl, write(' with finish times: '), write(PreviousCompleteProcessAgenda), nl,

write(' and with total cost: '), write(TotalCost). execute_step(PreviousActAgenda, JunctionsPending, PreviousJunctionsExecuted, PreviousModelInstance, ProcessPending, PreviousProcessExecuted, PreviousCompleteProcessAgenda, T):- nl, write('-------------------------------------- '), nl, write('Time='), write(T), nl, execute_actions_agenda(PreviousActAgenda, T), findall(P,

(member([P,CompletionTime],PreviousCompleteProcessAgenda), CompletionTime =< T), CompletedProcessTillNow), write('The completed processes till now are '), write(CompletedProcessTillNow), nl, execute_junction_pending(JunctionsPending, PreviousJunctionsExecuted, NowJunctionsExecuted, PreviousModelInstance, NowModelInstance, CompletedProcessTillNow, T),

execute_process_pending(ProcessPending,


PreviousProcessExecuted, NowProcessExecuted, PreviousActAgenda, NowActAgenda, PreviousCompleteProcessAgenda, NowCompleteProcessAgenda, NowModelInstance, T),

update_time(T, NewT), difference(JunctionsPending, NowJunctionsExecuted,

NewJunctionsPending), difference(ProcessPending, NowProcessExecuted,

NewProcessPending), execute_step(NowActAgenda, NewJunctionsPending,

NowJunctionsExecuted, NowModelInstance, NewProcessPending, NowProcessExecuted, NowCompleteProcessAgenda, NewT).

/* -----------------------------------------------------------

MAIN STUFF: execute_actions_agenda, execute_junction_pending, execute_process_pending

-------------------------------------------------------------*/ % --------------------------------------------------- % execute_actions_agenda/2 %---------------------------------------------------- /* Predicate execute_actions_agenda(+ActionsAgenda, +T): executes all actions of the process agenda that have execution time T */ execute_actions_agenda([], _T). execute_actions_agenda(ActionsAgenda, T):- findall(Actions, member([Actions,T], ActionsAgenda),

ActionsList), flatten(ActionsList, FlatActionsList), write('The following actions are executed: '),

write(FlatActionsList), nl, do_actions(FlatActionsList). % --------------------------------------------------- % execute_junction_pending/7 %---------------------------------------------------- /* Predicate execute_junction_pending(+JunctionsPending, +PreviousJunctionsExecuted, -NowJunctionsExecuted, +PreviousModelInstance, -NowModelInstance, +CompletedProcessTillNow, +T) processes/executes all junctions that are pending and that can be reached based on the completed processes till now (time T) and creates model instances of their post processes */ execute_junction_pending([], JunctionsExecuted, JunctionsExecuted, ModelInstance, ModelInstance, _CompletedProcessTillNow, _T). execute_junction_pending([First|Rest], PreviousJunctionsExecuted, NowJunctionsExecuted, PreviousModelInstance, NowModelInstance, CompletedProcessTillNow, T):- execute_one_junction(First, ModelInstanceCreated,

CompletedProcessTillNow, T), append(ModelInstanceCreated, PreviousModelInstance,

IntermedModelInstance),


execute_junction_pending(Rest, [First|PreviousJunctionsExecuted], NowJunctionsExecuted, IntermedModelInstance, NowModelInstance, CompletedProcessTillNow, T).

execute_junction_pending([First|Rest], PreviousJunctionsExecuted, NowJunctionsExecuted, PreviousModelInstance, NowModelInstance, CompletedProcessTillNow, T):- \+ execute_one_junction(First, _NoModelInstanceCreated,

CompletedProcessTillNow, T), execute_junction_pending(Rest, PreviousJunctionsExecuted,

NowJunctionsExecuted, PreviousModelInstance, NowModelInstance, CompletedProcessTillNow, T).

% --------------------------------------------------- % execute_one_junction/4 %---------------------------------------------------- /* Predicate execute_one_junction(+Junction, +ModelInstancesCreated, +CompletedProcessTillNow, +T) succeeds if the junction's type is satisfied */ execute_one_junction(junction(Type, Pre, Post), Post, CompletedProcessTillNow, T):- junc_type_satisfied(junction(Type, Pre, Post),

CompletedProcessTillNow, T), write('Junction '), write(junction(Type, Pre, Post)),

write(' hit '), nl, write('Model instances of processes '), write(Post),

write(' created'), nl. % --------------------------------------------------- % execute_process_pending/9 %---------------------------------------------------- /* Predicate execute_process_pending(+ProcessPending, +PreviousProcessExecuted, -NowProcessExecuted, +PreviousActAgenda, -NowActAgenda,+PreviousCompleteProcessAgenda, -NowCompleteProcessAgenda, +NowModelInstance, +T) fires the execution of pending processes that can start execution at time T and puts their actions in the ActionsAgenda */ execute_process_pending([], ProcessExecuted, ProcessExecuted, ActAgenda, ActAgenda, CompleteProcessAgenda, CompleteProcessAgenda, _NowModelInstance, _T). execute_process_pending([First|Rest], PreviousProcessExecuted, NowProcessExecuted, PreviousActAgenda, NowActAgenda, PreviousCompleteProcessAgenda, NowCompleteProcessAgenda, NowModelInstance, T):- execute_process(First, ActionsAgenda, CompletionAgenda,

NowModelInstance, T), execute_process_pending(Rest, [First|PreviousProcessExecuted],

NowProcessExecuted, [ActionsAgenda|PreviousActAgenda], NowActAgenda, [CompletionAgenda|PreviousCompleteProcessAgenda], NowCompleteProcessAgenda, NowModelInstance, T).


execute_process_pending([First|Rest], PreviousProcessExecuted, NowProcessExecuted, PreviousActAgenda, NowActAgenda, PreviousCompleteProcessAgenda, NowCompleteProcessAgenda, NowModelInstance, T):- \+ execute_process(First, _NoActionsAgenda,

_NoCompletionAgenda, NowModelInstance, T), execute_process_pending(Rest, PreviousProcessExecuted,

NowProcessExecuted, PreviousActAgenda, NowActAgenda, PreviousCompleteProcessAgenda, NowCompleteProcessAgenda, NowModelInstance, T).

% --------------------------------------------------- % execute_process/5 %---------------------------------------------------- /* Predicate execute_process(+Process, -ActionsAgenda, CompletionAgenda, +NowModelInstance, +T) succeeds when Process has a model instance, it has already been triggered, and its preconditions hold at time T */ execute_process(Process, [Actions,F], [Process, F],NowModelInstance, T):- process(Process, _PName, _Trigger, Precond, Actions, Duration,

_Cost), member(Process, NowModelInstance), findall(Proc, (gets_triggered(Proc, TriggerT), TriggerT =< T),

SofarTriggered), write('The SofarTriggered processes are '),

write(SofarTriggered), nl, member(Process, SofarTriggered), precondition_holds(Precond), F is T+Duration, write('Process '), write(Process),

write(' starts now execution till timepoint '), write(F), nl, write(' and actions '), write([Actions,F]),

write(' are added to the ActionsAgenda'), nl. %************ % JUNCTIONS %************ % --------------------------------------------------- % junc_type_satisfied/3 %---------------------------------------------------- /* Predicate junc_type_satisfied(junction(+Type, +Pre, +Post), +CompletedProcessTillNow, +T) succeeds if the type of the junction is satisfied at time T based on the processes that have completed execution until time T */ junc_type_satisfied(junction(start, _Pre, _Post),

_CompletedProcessTillNow, _T). junc_type_satisfied(junction(start_and, _Pre, _Post), _CompletedProcessTillNow, _T).


junc_type_satisfied(junction(start_or, _Pre, _Post), _CompletedProcessTillNow, _T). junc_type_satisfied(junction(link, Pre, _Post), CompletedProcessTillNow, T):- all_triggered_completed(Pre, CompletedProcessTillNow, T). junc_type_satisfied(junction(and_joint, Pre, _Post), CompletedProcessTillNow, T):- find_all_PreTriggered(Pre, PreTriggered), all_triggered_completed(PreTriggered, CompletedProcessTillNow,

T). junc_type_satisfied(junction(or_joint, Pre, _Post), CompletedProcessTillNow, T):- one_triggered_completed(Pre, CompletedProcessTillNow, T). junc_type_satisfied(junction(and_split, Pre, _Post), CompletedProcessTillNow, T):- all_triggered_completed(Pre, CompletedProcessTillNow, T). junc_type_satisfied(junction(or_split, Pre, _Post), CompletedProcessTillNow, T):- all_triggered_completed(Pre, CompletedProcessTillNow, T). junc_type_satisfied(junction(finish, Pre, []), CompletedProcessTillNow, T):- all_triggered_completed(Pre, CompletedProcessTillNow, T). junc_type_satisfied(junction(finish_and, Pre, []), CompletedProcessTillNow, T):- find_all_PreTriggered(Pre, PreTriggered), all_triggered_completed(PreTriggered, CompletedProcessTillNow,

T). junc_type_satisfied(junction(finish_or, Pre, []), CompletedProcessTillNow, T):- one_triggered_completed(Pre, CompletedProcessTillNow, T). % --------------------------------------------------- % all_triggered_completed/3 %---------------------------------------------------- /* Predicate all_triggered_completed(+ProcessList, +CompletedProcessesTillNow, +T) succeeds if all processes of ProcessList have completed execution till now (time T) */ all_triggered_completed([], _CompletedProcessesTillNow, _T). all_triggered_completed([First|Rest], CompletedProcessesTillNow,T):- member(First, CompletedProcessesTillNow), all_triggered_completed(Rest, CompletedProcessesTillNow, T). % --------------------------------------------------- % find_all_PreTriggered/2 %----------------------------------------------------


/* Predicate find_all_PreTriggered(+ProcessList,-TriggeredProcesses) finds all processes in ProcessList that have already been triggered */ find_all_PreTriggered(ProcessList, TriggeredProcesses):-

findall(Process, (member(Process, ProcessList), gets_triggered(Process, _AnyTime)), TriggeredProcesses).

% --------------------------------------------------- % one_triggered_completed/3 %---------------------------------------------------- /* Predicate one_triggered_completed(+ProcessList, +CompletedProcessTillNow, +T) succeeds if at least one process in ProcessList has already completed execution at time T */ one_triggered_completed(ProcessList, CompletedProcessTillNow, _T):- member(SomeProcess, ProcessList), member(SomeProcess, CompletedProcessTillNow). %*********** % TRIGGERS %*********** % --------------------------------------------------- % gets_triggered/2 %---------------------------------------------------- % Predicate gets_triggered(+Pid, -T): returns the trigger time of % process Pid %if no triggers then created at time 0 gets_triggered(Pid, 0):- process(Pid, _PName, [true], _Precond, _Action, _Duration,

_Cost). % if trigger becomes true at point T gets_triggered(Pid, T):- process(Pid, _PName, TriggerCond, _Precond, _Action,

_Duration, _Cost), \+ (member(Trigger,TriggerCond),\+ trigger_holds(Trigger,_T)), findall( TriggerT, ( member(ATrigger,TriggerCond),

trigger_holds(ATrigger,TriggerT) ), AllTriggerTimes), max_list(AllTriggerTimes, T). % trigger_holds (at time of event) trigger_holds(exist(event_occ(EventName)), T):- event_occ(_EventID, EventName, T).


%*************** % PRECONDITIONS %*************** % --------------------------------------------------- % precondition_holds/1 %---------------------------------------------------- % Predicate precondition_holds(PreconditionList): succeeds if all % preconditions of the list currently hold precondition_holds([]). precondition_holds([exist(entity_occ(EntityName))|Rest]):- entity_occ(EntityName, _EntityId, _EntityAttribute), precondition_holds(Rest). precondition_holds([not_exist(entity_occ(EntityName))|Rest]):- \+ entity_occ(EntityName, _EntityId, _EntityAttribute), precondition_holds(Rest). precondition_holds([exist(data(Subject))|Rest]):- data(_SubjectID, Subject, _Attributes), precondition_holds(Rest). precondition_holds([not_exist(data(Subject))|Rest]):- \+ data(_SubjectID, Subject, _Attributes), precondition_holds(Rest). %********* % ACTIONS %********* action_result(create_entity(EntityName, EntityAttribute)):- asserta(entity_occ(EntityName, _EntityId, EntityAttribute)). action_result(delete_entity(EntityName, EntityAttribute)):- retract(entity_occ(EntityName, _EntityId, EntityAttribute)). action_result(create_data(Subject, Attributes)):- asserta(data(_SubjectID, Subject, Attributes)). action_result(delete_data(Subject, Attributes)):- retract(data(_SubjectID, Subject, Attributes)). % --------------------------------------------------- % do_actions/1 %---------------------------------------------------- % Predicate do_actions(+ActionsList) fires the execution of all % actions in the ActionsList do_actions([]). do_actions([First|Rest]):- action_result(First), write('Action '), write(First), write(' executed'),nl, do_actions(Rest).


%******** % USEFUL %******** update_time(T, NewT):- NewT is T + 1. reduce_one(T, NewT):- NewT is T - 1. find_AllAndPostPr(X):- findall(P, ( (junction(and_split,_Pre,Post);

junction(start_and,_Pre,Post) ), member(P,Post) ), X). all_junctions(X):- findall(junction(Type,Pre,Post), junction(Type,Pre,Post), X). all_processes(ModelProcessesList):- findall(Pid, process(Pid, _PName, _Trigger, _Precond, _Action, _Duration, _Cost), ModelProcessesList). flatten([],[]):- !. flatten([H|T],L3):- !, flatten(H,L1), flatten(T,L2), append(L1,L2,L3). flatten(X,[X]). difference([], _, []). difference([H|T], L, NewList):- member(H, L), !, difference(T, L, NewList). difference([H|T], L, [H|NewList]):- difference(T, L, NewList).


Appendix C

Demo For Workflow Engine

In this appendix we illustrate how one can use the developed workflow engine for BPM simulation. Note that the user is expected to know the cost and time needed for the execution of each process in the BPM to be simulated, as well as the initial world state and the list of events that are expected to take place. In this demo we use a simple BPM for simulation, which is provided in the following graph:

Figure 77: Example BPM for simulation

As Figure 78 shows, the framework for simulating BPMs with our workflow engine includes the specification of the processes and junctions in the BPM, the description of the initial world state and the specification of the event occurrence list. The process specification includes information such as the preconditions, time and cost of each process, and the junction specification includes the type of each junction and its pre- and post-processes. The world state is described in terms of entities that exist in the world and data that the object of discourse (usually the organization) has in its database. The output of the system is real-time information resulting from the BPM execution (which processes have been executed at each timepoint, which junctions have been reached, etc.) and the final results for the total time and cost of the execution. The details of the framework are explained in a comprehensive way later on.


The workflow engine is implemented in Prolog (SICStus Prolog1), and the user is expected to be to some extent familiar with the running environment. Under UNIX, Prolog is started with a shell command, while in Windows it is normally started by clicking on the corresponding icon (or from the Programs tab).

Figure 78: Graphical representation of workflow engine use framework (inputs: process specification, junction specification, initial world description, event occurrence list; outputs: total time, total cost, real-time workflow state)

Considering the above workflow engine framework, one has to follow five steps in order to simulate a BPM using our workflow engine:

1) Specify the processes of the BPM

2) Specify the junctions of the BPM

3) Describe the initial world state and provide an event occurrence list

4) Initialize the workflow engine

5) Run the workflow engine in Prolog

Each of these steps will now be explained for the simulation of our example BPM.

1 http://www.sics.se/isl/sicstuswww/site/index.html


Step 1: Specify the processes of the BPM

In this step we define all processes of our BPM, thus p1 and p2. The process specification is written with a standard text editor, and it is saved with the suffix “.pl”. A process can be defined by the following predicate:

process(Pid, PName, Trigger, Precond, Action, Duration, Cost).

where Pid is the ID of the process, PName is the name of the process, Trigger is the list of the trigger conditions of the process, Precond is the list of the preconditions of the process, Action is the list of the actions that the process fires, and Duration and Cost are the related execution time and cost.

So, for our BPM example we type the following and save it as a .pl file, let’s say myProcess.pl:

Figure 79: Screenshot of myProcess.pl
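The exact facts are those shown in the Figure 79 screenshot; purely as an illustration of the process/7 format, such a file could look roughly like the sketch below (the process names, actions, durations and costs here are invented; only the needForCar event is taken from the example):

% p1 has no trigger condition and no preconditions; p2 waits for needForCar.
process(p1, identifyNeedForCar, [true], [],
        [create_data(carRequirements, [type_small])], 2, 50).
process(p2, buyCar, [exist(event_occ(needForCar))],
        [exist(data(carRequirements))],
        [create_entity(car, [type_small])], 3, 10000).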


Step 2: Specify the junctions of the BPM

In this step we define all junctions of our BPM, thus for our example BPM the start and finish junctions, as well as the two precedence links. The junction specification is written with a standard text editor, and it is saved with the suffix “.pl”. A junction can be defined by the following predicate:

junction(JunctionType, PreProcesses, PostProcesses).

where JunctionType is the type of the junction, PreProcesses is the list of the processes preceding the junction and PostProcesses is the list of the processes following the junction.

So, for our BPM example we type the following and save it as a .pl file, let’s say myJunctions.pl:

Figure 80: Screenshot of myJunctions.pl
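Again, the exact facts are those in the Figure 80 screenshot; as an illustration of the junction/3 format only (the precise junction layout of the example BPM is the one in Figure 77), a simple two-process chain could be written as:

junction(start, [], [p1]).
junction(link, [p1], [p2]).
junction(finish, [p2], []).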


Step 3: Describe the initial world state and provide the event occurrence list

In this step we describe the initial world state in terms of entities and data, and define all the events that will happen throughout BPM execution. This specification is written with a standard text editor, and it is saved with the suffix “.pl”.

An entity occurrence can be defined by the following predicate, where EntityName is the name of the entity, EntityId is the ID of the entity and EntityAttribute is the list of the attributes of this entity:

entity_occ(EntityName, EntityId, EntityAttribute).

Data can be defined by the following predicate, where Subject is the subject of the data entry, SubjectID is the ID of the data entry and Attributes is the list of the attributes of this data item:

data(SubjectID, Subject, Attributes).

An event can be defined by the following predicate, where EventId is the ID of the event, EventName is the event name and T is the timepoint of occurrence of this event:

event_occ(EventId, EventName, T).

Note that data and entities are defined as dynamic predicates (meaning that they can be created or deleted dynamically, at run-time), thus before defining them in our file we need to type: :- dynamic entity_occ/3. :- dynamic data/3.

For our example, as explained in section 4.3 of the thesis, there are no data or entities in the initial world state, while there is only one event occurrence, needForCar, at timepoint 2. So, for our BPM example we type the following and save it as a .pl file, let’s say myWorld.pl:


Figure 81: Screenshot of myWorld.pl
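Since the example has no initial entities or data and a single needForCar event at timepoint 2, the content of myWorld.pl essentially reduces to the following (only the event ID e1 is our own illustrative choice; the exact file is shown in Figure 81):

:- dynamic entity_occ/3.
:- dynamic data/3.

% The only event occurrence: needForCar happens at timepoint 2.
event_occ(e1, needForCar, 2).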


Step 4: Initialize the workflow engine

In this step the above files are loaded into the workflow engine, in order to initialize it. This is done in the body of the workflow engine code, so we have to open the workflow engine file workflowEngine.pl with a text editor and type:

:- ['myWorld.pl'].
:- ['myProcess.pl'].
:- ['myJunctions.pl'].

This is shown more clearly in the following screenshot of workflowEngine.pl:

Figure 82: Screenshot of workflowEngine.pl


Step 5: Run the workflow engine in Prolog

In this step the workflow engine file is run in Prolog, simulating the BPM. The steps for this procedure are the following. First we start Prolog; this is done either by typing sicstus in Unix or by clicking the corresponding icon in Windows. In Windows the Sicstus Prolog environment looks like Figure 83.

Figure 83: Screenshot of Sicstus Prolog environment in Windows

After Prolog is started, we load the workflow engine file, either by typing ['workflowEngine.pl']. at the Prolog prompt or by choosing File→Load and then selecting the file workflowEngine.pl.

After the workflow engine is loaded we can start the BPM simulation, simply by typing run_bpm. and pressing enter. This is shown in the following figure:

Figure 84: Screenshot of run command of workflow engine in Sicstus Prolog
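In a plain SICStus session the same two steps look roughly like this (engine output omitted):

| ?- ['workflowEngine.pl'].
| ?- run_bpm.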


Then the BPM execution is simulated and we get the relevant output in the Sicstus window. In our example case, the output looks like Figure 85:

Figure 85: Screenshot of BPM simulation output


Appendix D

Experiments’ Code

Logical representation of “Buy standard item to order” Junctions Specification junction(start, [], [p2_1]). junction(link, [p2_1], [p2_2]). junction(link, [p2_2], [p2_3]). junction(link, [p2_3], [p2_4]). junction(and_split, [p2_4], [p2_5, p2_8]). junction(link, [p2_5], [p2_6]). junction(link, [p2_6], [p2_7]). junction(finish_and, [p2_7, p2_8], []). Process Specification process(p2_1, identifyOwnNeeds, [true], [exist(entity_occ(needsOnXinventory))], [create_data(needsOnXinventory, [item_X]), create_data(currentXInventoryLevel, [item_X, inv_0])], 1, 200).

process(p2_2, identifyPotentialSuppliers, [exist(event_occ(needForNewXInventory))], [not_exist(entity_occ(supplierX)), exist(data(needsOnXinventory)), exist(data(relevantXsuppliers))], [create_data(potentialXsuppliers, [sup1_good, sup2_ok])], 21, 1000).

process(p2_3, selectSupplier, [exist(event_occ(needForNewXInventory))], [exist(data(potentialXsuppliers)),not_exist(entity_occ(supplierX))], [create_entity(supplierX, [sup1, good])], 21, 1000).

process(p2_4, negotiateContracts, [exist(event_occ(needForNewXInventory))], [exist(entity_occ(supplierX)), exist(data(needsOnXinventory)), not_exist(entity_occ(contractXsupplier))], [create_entity(contractXsupplier, [sup_sup1, reput_good]), create_entity(valueChainForX, [sup_sup1])], 8, 750).

process(p2_5, shareInfoWithSupplier, [exist(event_occ(integrateWithXsupplier))], [exist(entity_occ(valueChainForX)), exist(data(currentXInventoryLevel)), exist(data(demandForecastX)), exist(data(generalBusinessInfo))], [create_data(sharedCurrentXinventoryLevel, _), create_data(sharedDemandForecast, _), create_data(sharedGeneralBusinessInfo,_)], 1, 100).

process(p2_6, getInventoryFromSupplier, [exist(event_occ(lowXInventory)), exist(event_occ(integrateWithXsupplier))],


[exist(entity_occ(contractXsupplier)), exist(data(sharedCurrentXinventoryLevel)), exist(data(sharedDemandForecast))], [create_entity(inventoryX, [item_X, amount_1000]), delete_data(currentXInventoryLevel,[item_X, _]), delete_data(sharedCurrentXinventoryLevel, _), create_data(updatedXInventoryLevel, [item_X, amount_1000])], 1, 1000).

process(p2_7, paySupplier, [exist(event_occ(arriveInventoryX))], [exist(entity_occ(money))], [create_data(supplierXpaid, [sup1, 1000])], 1, 40).

process(p2_8, manageSupplier, [exist(event_occ(integrateWithXsupplier))], [], [], 20, 1000). Initial world state description and events’ list event_occ(e1, needForNewXInventory, 1). event_occ(e2, integrateWithXsupplier, 51). event_occ(e3, lowXInventory, 52). event_occ(e4, arriveInventoryX, 53). entity_occ(needsOnXinventory, ent1, [item_X]). entity_occ(money, ent2, [euros_3000]). data(d1, relevantXsuppliers, [item_X, [sup1_good, sup2_ok]]). data(d2, demandForecastX, [item_X, time_oneWeek, level_2000]). data(d3, generalBusinessInfo, [increasingImportance_edi]). Simulation results of “Buy standard item to order”

The workflow engine is designed to give feedback about the workflow state at each timepoint. In order to save space, and because not every timepoint carries interesting information, we present here the feedback for only the meaningful timepoints.

-------------------------------------- Time=0 The completed processes till now are [] Junction junction(start,[],[p2_1]) hit Model instances of processes [p2_1] created The SofarTriggered processes are [p2_1] Process p2_1 starts now execution till timepoint 1 and actions [[create_data(needsOnXinventory,[item_X]),create_data(currentXInventoryLevel,[item_X,inv_0])],1] are added to the ActionsAgenda -------------------------------------- Time=1 The following actions are executed: [create_data(needsOnXinventory,[item_X]),create_data(currentXInventoryLevel,[item_X,inv_0])] Action create_data(needsOnXinventory,[item_X]) executed Action create_data(currentXInventoryLevel,[item_X,inv_0]) executed The completed processes till now are [p2_1] Junction junction(link,[p2_1],[p2_2]) hit


Model instances of processes [p2_2] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4] Process p2_2 starts now execution till timepoint 22 and actions [[create_data(potentialXsuppliers,[sup1_good,sup2_ok])],22] are added to the ActionsAgenda -------------------------------------- Time=2 The following actions are executed: [] The completed processes till now are [p2_1]

… -------------------------------------- Time=21 The following actions are executed: [] The completed processes till now are [p2_1] -------------------------------------- Time=22 The following actions are executed: [create_data(potentialXsuppliers,[sup1_good,sup2_ok])] Action create_data(potentialXsuppliers,[sup1_good,sup2_ok]) executed The completed processes till now are [p2_2,p2_1] Junction junction(link,[p2_2],[p2_3]) hit Model instances of processes [p2_3] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4] Process p2_3 starts now execution till timepoint 43 and actions [[create_entity(supplierX,[sup1,good])],43] are added to the ActionsAgenda -------------------------------------- Time=23 The following actions are executed: [] The completed processes till now are [p2_2,p2_1]

… -------------------------------------- Time=42 The following actions are executed: [] The completed processes till now are [p2_2,p2_1] -------------------------------------- Time=43 The following actions are executed: [create_entity(supplierX,[sup1,good])] Action create_entity(supplierX,[sup1,good]) executed The completed processes till now are [p2_3,p2_2,p2_1] Junction junction(link,[p2_3],[p2_4]) hit Model instances of processes [p2_4] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4] Process p2_4 starts now execution till timepoint 51 and actions [[create_entity(contractXsupplier,[sup_sup1,reput_good]),create_entity(valueChainForX,[sup_sup1])],51] are added to the ActionsAgenda -------------------------------------- Time=44 The following actions are executed: [] The completed processes till now are [p2_3,p2_2,p2_1]

… -------------------------------------- Time=50 The following actions are executed: [] The completed processes till now are [p2_3,p2_2,p2_1] -------------------------------------- Time=51


The following actions are executed: [create_entity(contractXsupplier,[sup_sup1,reput_good]),create_entity(valueChainForX,[sup_sup1])] Action create_entity(contractXsupplier,[sup_sup1,reput_good]) executed Action create_entity(valueChainForX,[sup_sup1]) executed The completed processes till now are [p2_4,p2_3,p2_2,p2_1] Junction junction(and_split,[p2_4],[p2_5,p2_8]) hit Model instances of processes [p2_5,p2_8] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4,p2_5,p2_8] Process p2_5 starts now execution till timepoint 52 and actions [[create_data(sharedCurrentXinventoryLevel,_10257),create_data(sharedDemandForecast,_10252),create_data(sharedGeneralBusinessInfo,_10247)],52] are added to the ActionsAgenda The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4,p2_5,p2_8] Process p2_8 starts now execution till timepoint 71 and actions [[],71] are added to the ActionsAgenda -------------------------------------- Time=52 The following actions are executed: [create_data(sharedCurrentXinventoryLevel,_10787),create_data(sharedDemandForecast,_10782),create_data(sharedGeneralBusinessInfo,_10777)] Action create_data(sharedCurrentXinventoryLevel,_10787) executed Action create_data(sharedDemandForecast,_10782) executed Action create_data(sharedGeneralBusinessInfo,_10777) executed The completed processes till now are [p2_5,p2_4,p2_3,p2_2,p2_1] Junction junction(link,[p2_5],[p2_6]) hit Model instances of processes [p2_6] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4,p2_5,p2_6,p2_8] Process p2_6 starts now execution till timepoint 53 and actions [[create_entity(inventoryX,[item_X,amount_1000]),delete_data(currentXInventoryLevel,[item_X,_11195]),delete_data(sharedCurrentXinventoryLevel,_11192),create_data(updatedXInventoryLevel,[item_X,amount_1000])],53] are added to the ActionsAgenda -------------------------------------- Time=53 The following actions are executed: [create_entity(inventoryX,[item_X,amount_1000]),delete_data(currentXInventoryLevel,[item_X,_11553]),delete_data(sharedCurrentXinventoryLevel,_11550),create_data(updatedXInventoryLevel,[item_X,amount_1000])] Action create_entity(inventoryX,[item_X,amount_1000]) executed Action delete_data(currentXInventoryLevel,[item_X,inv_0]) executed Action delete_data(sharedCurrentXinventoryLevel,_11550) executed Action create_data(updatedXInventoryLevel,[item_X,amount_1000]) executed The completed processes till now are [p2_6,p2_5,p2_4,p2_3,p2_2,p2_1] Junction junction(link,[p2_6],[p2_7]) hit Model instances of processes [p2_7] created The SofarTriggered processes are [p2_1,p2_2,p2_3,p2_4,p2_5,p2_6,p2_7,p2_8] Process p2_7 starts now execution till timepoint 54 and actions [[create_data(supplierXpaid,[sup1,1000])],54] are added to the ActionsAgenda -------------------------------------- Time=54 The following actions are executed: [create_data(supplierXpaid,[sup1,1000])] Action create_data(supplierXpaid,[sup1,1000]) executed The completed processes till now are [p2_7,p2_6,p2_5,p2_4,p2_3,p2_2,p2_1] -------------------------------------- Time=55 The following actions are executed: [] The completed processes till now are [p2_7,p2_6,p2_5,p2_4,p2_3,p2_2,p2_1]

… --------------------------------------


Time=70 The following actions are executed: [] The completed processes till now are [p2_7,p2_6,p2_5,p2_4,p2_3,p2_2,p2_1] -------------------------------------- Time=71 The following actions are executed: [] The completed processes till now are [p2_7,p2_6,p2_8,p2_5,p2_4,p2_3,p2_2,p2_1] Junction junction(finish_and,[p2_7,p2_8],[]) hit Model instances of processes [] created -------------------------------------- Base case hit! The BPM has finished execution at time 71 Results: The junctions executed are: [junction(finish_and,[p2_7,p2_8],[]),junction(link,[p2_6],[p2_7]), junction(link,[p2_5],[p2_6]),junction(and_split,[p2_4],[p2_5,p2_8]), junction(link,[p2_3],[p2_4]),junction(link,[p2_2],[p2_3]), junction(link,[p2_1],[p2_2]),junction(start,[],[p2_1])] The processes executed are: [p2_7,p2_6,p2_8,p2_5,p2_4,p2_3,p2_2,p2_1] with finish times: [[p2_7,54],[p2_6,53],[p2_8,71],[p2_5,52],[p2_4,51],[p2_3,43],[p2_2,22],[p2_1,1]] and with total cost: 5090

Logical representation of “Sell directly to large business and public sector customers” Junctions Specification junction(start, [], [p4_2_1]). junction(and_split, [p4_2_1], [p4_2_2, p4_2_8]). junction(link, [p4_2_2], [p4_2_3]). junction(link, [p4_2_3], [p4_2_4]). junction(link, [p4_2_4], [p4_2_5]). junction(link, [p4_2_5], [p4_2_6]). junction(link, [p4_2_6], [p4_2_7]). junction(finish_and, [p4_2_7, p4_2_8], []). Process Specification process(p4_2_1, identifyPotentialCorporateCustomer, [true], [exist(entity_occ(potentialCorporateCustomer))], [create_data(potentialCorporateCustomer, [potCC_custA]), create_entity(corporateCustomer, [cc_custA, size_1000]), create_data(customerAddress, [cc_custA, address_eh92bh])],25, 750).

process(p4_2_2, identifyCorporateCustomersNeeds, [exist(event_occ(newCorporateCustomer))], [exist(entity_occ(corporateCustomer))], [create_data(corporateCustomersNeeds, [cc_custA, amount_100, perf_medium])], 8, 100).

process(p4_2_3, identifyCorrespondingConfigurations, [exist(event_occ(newCorporateCustomer))], [exist(entity_occ(corporateCustomer)), exist(data(corporateCustomersNeeds)), exist(data(productSpecifications))], [create_data(customerConfigurations, [cc_custA, prodID_sk32])],


3, 100).

process(p4_2_4, informCorporateCustomerViaPremierPage, [exist(event_occ(newCorporateCustomer))], [exist(entity_occ(corporateCustomer)), exist(data(customerConfigurations))], [create_entity(customerPremierPage, [cc_custA, ppID_ppc32])], 10, 350).

process(p4_2_5, obtainOrder, [exist(event_occ(integrateCustomerViaPremierPage)), exist(event_occ(customerNeedOnProduct))], [exist(entity_occ(customerPremierPage))], [create_data(customerOrder, [orderID_thre34, cc_custA, prodID_sk32, amount_100])], 7, 40).

process(p4_2_6, receivePayment, [exist(event_occ(customerOrder)), exist(event_occ(customerPayment))], [exist(entity_occ(customerPremierPage))], [create_data(customerOrderPaid, [orderID_thre34, cc_custA, amount_5000])], 1, 40).

process(p4_2_7, deliverOrder, [exist(event_occ(paidCustomerOrder)), exist(event_occ(orderAssembled))], [exist(entity_occ(orderedItems)), exist(data(customerAddress))], [create_data(customerOrderDelivered, [orderID_thre34,method_truck])], 5, 200).

process(p4_2_8, manageBigCustomerRelationships, [exist(event_occ(newCorporateCustomer))], [], [], 2, 300). Initial world state description and events’ list event_occ(e1, newCorporateCustomer, 25). event_occ(e2, integrateCustomerViaPremierPage, 46). event_occ(e3, customerNeedOnProduct, 46). event_occ(e4, customerOrder, 53). event_occ(e5, customerPayment, 53). event_occ(e6, paidCustomerOrder, 54). event_occ(e7, orderAssembled, 55). entity_occ(potentialCorporateCustomer, ent1, [potCC_custA]). entity_occ(orderedItems, ent2, [prodID_sk32, orderID_thre34, amount_5000]). data(d1, productSpecifications, [prodID_sk32, performance_medium, media_good]). Logical representation of myCompany’s “Buy standard item to stock” Junctions Specification junction(start, [], [p2_1]). junction(link, [p2_1], [p2_2]). junction(link, [p2_2], [p2_3]). junction(and_split, [p2_3], [p2_4, p2_7]). junction(link, [p2_4], [p2_5]).


junction(link, [p2_5], [p2_6]). junction(finish_and, [p2_6, p2_7], []). Process Specification process(p2_1, identifyOwnNeeds, [true], [exist(entity_occ(needsOnXinventory))], [create_data(needsOnXinventory, [item_X]), create_data(currentXInventoryLevel, [item_X, inv_0])], 1, 200).

process(p2_2, identifyPotentialSources, [exist(event_occ(needForNewXInventory))], [not_exist(entity_occ(supplierX)), exist(data(needsOnXinventory)), exist(data(relevantXsuppliers))], [create_data(potentialXsuppliers, [sup1_good, sup2_ok])], 15, 700).

process(p2_3, selectSupplier, [exist(event_occ(needForNewXInventory))], [exist(data(potentialXsuppliers)), exist(data(needsOnXinventory)), not_exist(entity_occ(contractXsupplier))], [create_entity(contractXsupplier, [sup_sup1, reput_good])],13,1000).

process(p2_4, placeOrder, [exist(event_occ(collaborateWithXsupplier)), exist(event_occ(lowXInventory))], [exist(entity_occ(contractXsupplier)), exist(data(currentXInventoryLevel))], [create_data(orderToX, [sup_sup1, orderID_thrk32])], 2, 100).

process(p2_5, receive, [exist(event_occ(orderPlaced))], [exist(entity_occ(contractXsupplier)), exist(data(orderToX)), exist(entity_occ(transportMedium))], [create_entity(inventoryX, [item_X, amount_1000]), delete_data(currentXInventoryLevel,[item_X, _]), create_data(updatedXInventoryLevel, [item_X,amount_1000])], 10, 1000).

process(p2_6, paySupplier, [exist(event_occ(arriveInventoryX))], [exist(entity_occ(money))], [create_data(supplierXpaid, [sup1, 1000])], 2, 40).

process(p2_7, manageSupplier, [exist(event_occ(collaborateWithXsupplier))], [], [], 10, 400). Initial world state description and events’ list event_occ(e1, needForNewXInventory, 1). event_occ(e2, collaborateWithXsupplier, 29). event_occ(e3, orderPlaced, 31). event_occ(e4, lowXInventory, 1). event_occ(e5, arriveInventoryX, 41). entity_occ(needsOnXinventory, ent1, [item_X]). entity_occ(money, ent2, [euros_3000]). entity_occ(transportMedium, ent3, [type_truck]). data(d1, relevantXsuppliers, [item_X, [sup1_good, sup2_ok]]).


Logical representation of myCompany’s “Sell via intermediary to business customers” Junctions Specification junction(start, [], [p4_1]). junction(and_split, [p4_1], [p4_2, p4_7]). junction(link, [p4_2], [p4_3]). junction(link, [p4_3], [p4_4]). junction(link, [p4_4], [p4_5]). junction(link, [p4_5], [p4_6]). junction(finish_and, [p4_6, p4_7], []). Process Specification process(p4_1, identifyPotentialCorporateCustomer, [true], [exist(entity_occ(potentialCorporateCustomer))], [create_data(potentialCorporateCustomer, [potCC_custA]), create_entity(corporateCustomer, [cc_custA, size_1000]), create_data(customerAddress, [cc_custA, address_eh92bh])],25, 750).

process(p4_2, identifyCorporateCustomersNeeds, [exist(event_occ(newCorporateCustomer))], [exist(entity_occ(corporateCustomer)), exist(entity_occ(corporateCustomersNeeds)), exist(data(productSpecifications))], [create_data(corporateCustomersNeeds, [cc_custA,amount_100,perf_medium]), create_data(customerAppropriateProducts, [cc_custA, prodID_sk32])], 10, 150).

process(p4_3, informCorporateCustomer, [exist(event_occ(newCorporateCustomer))], [exist(entity_occ(corporateCustomer)), exist(data(customerAppropriateProducts))],[], 7, 250).

process(p4_4, obtainOrder, [exist(event_occ(customerInformed)), exist(event_occ(customerNeedOnProduct))], [exist(entity_occ(orderingMedium))], [create_data(customerOrder, [orderID_thre34, cc_custA, prodID_sk32, amount_100])], 11, 140).

process(p4_5, receivePayment, [exist(event_occ(customerOrder)), exist(event_occ(customerPayment))], [], [create_data(customerOrderPaid, [orderID_thre34, cc_custA, amount_5000])], 2, 40).

process(p4_6, deliverOrder, [exist(event_occ(paidCustomerOrder))], [exist(entity_occ(orderedItems)), exist(data(customerAddress))], [create_data(customerOrderDelivered, [orderID_thre34,method_truck])], 10, 150).

process(p4_7, manageCustomerRelationships, [exist(event_occ(newCorporateCustomer))], [], [], 7, 700). Initial world state description and events’ list event_occ(e1, newCorporateCustomer, 25). event_occ(e2, customerInformed, 42). event_occ(e3, customerNeedOnProduct, 42). event_occ(e4, customerOrder, 53). event_occ(e5, customerPayment, 53). event_occ(e6, paidCustomerOrder, 55).


entity_occ(potentialCorporateCustomer, ent1, [potCC_custA]). entity_occ(orderedItems, ent2, [prodID_sk32, orderID_thre34, amount_5000]). entity_occ(corporateCustomersNeeds, ent3, [cc_custA, amount_100, perf_medium]). entity_occ(orderingMedium, ent4, [meidum_edi]). data(d1, productSpecifications, [prodID_sk32, performance_medium, media_good]).
