MODELLING SECURITY AND TRUST WITH SECURE TROPOS
Paolo Giorgini
Department of Information and Communication Technology
University of Trento
Via Sommarive 14, 38050 Povo, Trento, Italy
Phone: +39 0461 882052
Fax: +39 0461 882093
Email: [email protected]
Haralambos Mouratidis
School of Computing and Technology University of East London, Barking Campus
Longbridge Road, RM8 2AS, Dagenham, England
Phone: +44 (0) 20 8223 3315
Fax: +44 (0) 20 8223 2963
Email: [email protected]
Nicola Zannone
Department of Information and Communication Technology
University of Trento
Via Sommarive 14, 38050 Povo, Trento, Italy
Email: [email protected]
ABSTRACT
Although the concepts of security and trust play an important role in the development of
information systems, they have been mainly neglected by software engineering methodologies. In
this chapter we present an approach that considers security and trust throughout the software
development process. Our approach integrates two prominent software engineering approaches,
one that provides a security-oriented process and one that provides a trust management process.
The result is the development of a methodology that considers security and trust issues as part of
its development process. Such integration represents an advance over the current state of the art
by providing the first effort to consider security and trust issues under a single software engineering
methodology. A case study from the health domain is employed to illustrate our approach.
INTRODUCTION
It has been identified in many cases that securing information systems is not only about
providing a set of standard security mechanisms such as authentication or confidentiality. Providing
adequate security requires the capability of reasoning about security and its related concepts.
Some related concepts, and in particular trust, have been identified by recent research (DeTreville,
2002; Li, 2002; Samarati, 2001) as an important aspect to be considered when reasoning about
security.
Anderson (2001) has recognised the need to integrate an in-depth analysis of security and trust
issues during the development of information systems. Such analysis should allow developers not
only to model security and trust but also, and most importantly, to reason about these concepts. In
other words, securing information systems needs to evolve from a “one-size-fits-all” solution, where
developers introduce standard security mechanisms such as authentication to various parts of the
system without taking into account any interrelations, and even conflicts, with other requirements of
the system, to complete and integrated solutions according to the real security requirements of
the system, which also take into account any trust relationships that might affect the security of the
system. However, up to now, the current state of the art does not provide modelling languages and
methodologies to assist developers to consider security and trust when they develop information
systems. This statement provides the main motivation behind our work. Our goal is to provide such
languages and methodologies.
In this paper we describe how the integration of two prominent software engineering
approaches, one that provides a security-oriented process (Mouratidis, 2004; Mouratidis, 2005)
and one that provides a trust management process (Giorgini, 2004; Giorgini, 2005a; Giorgini,
2005b; Giorgini, 2005d) results in the development of a methodology that considers security and
trust issues as part of its development process. Such integration represents an advance over the
current state of the art by providing the first effort to consider security and trust issues under a
single software engineering methodology.
The chapter is structured as follows. The next section provides an overview of the Tropos
methodology and discusses the issues of using the Tropos methodology for modelling security
concerns. The following section describes the Secure Tropos concepts and modelling activities,
whereas the next section presents the revised Secure Tropos process. The two subsequent
sections discuss the formalisation of the Secure Tropos concepts and present an application of the
Secure Tropos methodology to a case study from the health care domain. The last two sections
present related work and conclude the chapter.
BACKGROUND ON TROPOS
Tropos is a software development methodology tailored to describe both the organisational
environment of a system and the system itself (Bresciani, 2004a). Tropos adopts the i* modelling
framework (Yu, 1996), which uses the concepts of actors, goals, tasks, resources and social
dependencies for defining the obligations of actors (dependees) to other actors (dependers).
Models in Tropos are acquired as instances of a conceptual metamodel resting on the following
concepts/relationships (Bresciani, 2004a; Yu, 1996). An Actor models an entity that has strategic
goals and intentionality within the system or the organizational setting. An actor represents a
physical or a software agent as well as a role or position (Yu, 1996). A Goal represents actors’
strategic interests. We distinguish hard goals from softgoals, the latter having no clear-cut
definition and/or criteria for deciding whether they are satisfied or not. Softgoals are typically used to
model non-functional requirements. A Plan (also known as a task) represents, at an abstract level, a
way of doing something. The execution of a plan can be a means for satisfying a goal or for
satisficing a softgoal. A Resource represents a physical or an informational entity. The main
difference from an agent is that a resource has no intentionality. A Dependency between two
actors indicates that one actor depends, for some reason, on another in order to attain some goal,
execute some plan, or deliver a resource. The former actor is called the depender, while the latter
is called the dependee. The object around which the dependency centres is called the dependum.
Figure 1 shows the graphical representation of the above concepts.
Figure 1: Graphical Representation of Tropos concepts
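The vocabulary above maps naturally onto simple data structures. The following Python sketch is purely illustrative: the class names and the Patient/Hospital example are our own assumptions, not part of the Tropos notation.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the core Tropos concepts (illustrative only).
class Kind(Enum):
    GOAL = "goal"          # hard goal: clear-cut satisfaction criteria
    SOFTGOAL = "softgoal"  # no clear-cut criteria; "satisficed" rather than satisfied
    PLAN = "plan"          # a way of doing something (also called a task)
    RESOURCE = "resource"  # physical or informational entity, no intentionality

@dataclass(frozen=True)
class Dependum:
    name: str
    kind: Kind

@dataclass(frozen=True)
class Dependency:
    depender: str       # the actor that depends on another
    dependee: str       # the actor depended upon
    dependum: Dependum  # the object around which the dependency centres

# Invented example: a Patient depends on a Hospital for the goal
# "receive treatment".
dep = Dependency("Patient", "Hospital", Dependum("receive treatment", Kind.GOAL))
print(dep.depender, "->", dep.dependee, "for", dep.dependum.name)
```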
To assist developers in the development of information systems, Tropos covers four main
software development phases (Bresciani, 2004a). Early Requirements, concerned with the
understanding of a problem by studying an existing organisational setting. The output of this phase
is an organisational model, which includes relevant actors, their respective dependencies and the
security constraints imposed on those actors. Late requirements, where the system-to-be is
described within its operational environment, along with relevant functions; this description models
the system as a (small) number of actors, which have a number of dependencies. These
dependencies define the system’s functional requirements. Architectural design, where the
system’s global architecture is defined in terms of subsystems, interconnected through data and
control flows. Within the framework, subsystems are represented as actors and data/control
interconnections are represented as (system) actor dependencies. Detailed design, where each
architectural component is defined in further detail in terms of inputs, outputs, and control. For this
stage, Tropos uses elements of the Agent UML (Bauer, 2001) to complement the features of i*.
To complement the above development stages, the original Tropos methodology proposes the
following modelling activities (Bresciani, 2004a). Actor modelling, which consists of identifying and
analyzing both the actors of the environment and the system’s actors and agents. Dependency
modelling, which consists of identifying actors which depend on one another for goals to be
achieved, plans to be performed, and resources to be furnished. A graphical representation of the
model obtained following the above modelling activities is given through actor diagrams, which
describe the actors (depicted as circles), their goals (depicted as ovals), their softgoals (depicted
as cloud shapes) and the network of dependency relationships among actors (two arrowed lines
connected by a graphical symbol varying according to the dependum: a goal, a plan or a resource).
Goal and plan modelling rests on the analysis of an actor's goals, conducted from the point of view
of the actor, by using three basic reasoning techniques: means-end analysis, contribution analysis,
and AND/OR decomposition (Mouratidis, 2004). A graphical representation of goal and plan
modelling is given through goal diagrams, which appear as balloons within which the goals of a
specific actor are analyzed and dependencies with other actors are established. Goals are
decomposed into subgoals and positive/negative contributions of subgoals to goals are specified.
Goal decomposition can be closed through a means-end analysis aimed at identifying plans,
resources and softgoals that provide means for achieving the goal.
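The AND/OR decomposition step can be sketched as a small recursive check: an AND-decomposed goal requires all its subgoals, an OR-decomposed goal requires at least one. The goal names and structure below are invented for illustration; real goal diagrams also carry means-end links and contribution labels, which this sketch omits.

```python
# Minimal sketch of AND/OR goal decomposition and satisfaction
# propagation (illustrative; not the Tropos formal semantics).
def satisfied(goal, decomposition, leaves):
    """A leaf goal is satisfied if listed in `leaves`; an AND-decomposed
    goal needs all subgoals satisfied; an OR-decomposed goal needs one."""
    if goal not in decomposition:
        return goal in leaves
    op, subgoals = decomposition[goal]
    results = [satisfied(g, decomposition, leaves) for g in subgoals]
    return all(results) if op == "AND" else any(results)

# Invented example decomposition.
decomposition = {
    "manage patient data": ("AND", ["collect data", "store data"]),
    "store data": ("OR", ["store locally", "store remotely"]),
}
print(satisfied("manage patient data",
                decomposition, {"collect data", "store locally"}))  # True
```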
Original Tropos (Bresciani, 2004a) fails, however, to adequately capture security requirements
(Mouratidis, 2004; Mouratidis, 2005). The process of integrating security and functional
requirements throughout the whole range of the development stages is quite ad hoc, and in
addition, the concept of softgoal that Tropos uses to capture security requirements fails to
adequately capture some constraints that security requirements often represent (Mouratidis, 2004).
Moreover, the methodology fails to provide concepts and processes to model trust relationships
(Giorgini, 2004).
To overcome these limitations, two lines of work have been initiated. On one hand, an initial
modification of Tropos methodology (called Secure Tropos) to enable it to model security concerns
throughout the whole software development process has been proposed (Bresciani, 2004b;
Mouratidis, 2003; Mouratidis, 2004; Mouratidis, 2005). In particular this extension proposes the use
of security constraints and secure capabilities as basic concepts to be used for the integration of
security concerns, in a structured and well defined manner, throughout all the phases of the
software development process. On the other hand, a different extension has been proposed in
(Giorgini, 2004; Giorgini, 2005a; Giorgini, 2005b; Giorgini, 2005d) (also called Secure Tropos),
which introduces concepts such as ownership, trust, and delegation within a normal functional
requirements model. The key idea here is the separation of the notion of offering a service and
ownership of the very same service, as well as the distinction of functional dependency and trust
dependency.
Both of these approaches have been found useful when considering security and trust during
the development of information systems. However, they illustrate some limitations. Firstly, the
identification of the security constraints depends mostly on the dependencies between the actors.
However, the process does not take into account any trust relationships that might exist between
these actors. Such trust relationships might lead to the identification of security constraints which
cannot be identified otherwise. Moreover, the analysis of the trust and delegation relationships
results in a high-level description. However, the approach lacks a well-defined process to transform
such a high-level description into a lower-level description and, in turn, an implementation.
We believe that such limitations can be overcome by integrating the two approaches. The main
challenge here is the redefinition of the existing process to accommodate the new modelling
activities as well as the integration of the concepts introduced by the two approaches on the same
process in a clear and distinguished way.
THE SECURE TROPOS CONCEPTS AND MODELLING ACTIVITIES
Secure Tropos extends the original Tropos methodology with some new concepts (Bresciani,
2004b; Mouratidis, 2003; Mouratidis, 2004; Mouratidis, 2005; Giorgini, 2004; Giorgini, 2005a;
Giorgini, 2005b; Giorgini, 2005d).
A Security Constraint represents, generally speaking, constraints that are related to the security
of the system. A security constraint is defined as a restriction related to security issues, such as
privacy, integrity and availability, which can influence the analysis and design of a multi-agent
system under development by restricting some alternative design solutions, by conflicting with
some of the requirements of the system, or by refining some of the system's objectives.
Secure Entities represent any secure goals/tasks/resources of the system. A secure goal
represents the strategic interests of an actor with respect to security. Secure goals are mainly
introduced in order to achieve possible security constraints that are imposed on an actor or exist in
the system. However, a secure goal does not particularly define how the security constraints can
be achieved, since alternatives can be considered. The precise definition of how the secure goal
can be achieved is given by a secure task. A secure task is defined as a task that represents a
particular way of satisfying a secure goal. A secure resource can be defined as an informational
entity that is related to the security of the multi-agent system. Secure resources can be divided into
two main categories, the first of which comprises those that display some security characteristics
imposed by other entities, such as security constraints, secure goals, secure tasks and secure
dependencies.
Ownership indicates that an actor is the legitimate owner of a goal, a plan, or a resource.
Owners have full authority concerning the achievement of their goals, execution of their plans, or
use of their resources. Additionally, they can delegate their authority to other actors.
Provisioning indicates that an actor has the capability to achieve a goal, execute a plan, or
deliver a resource.
Trust of permission between two actors indicates that an actor, called the truster, believes that
the other actor, called the trustee, will not misuse a permission to achieve a goal, execute a plan or
provide a resource. In these cases trust is centred on an object, which is called the trustum. In
general, by trusting another actor in relation to a trustum, an actor assumes that the trustum is
properly used.
Trust of execution between two actors indicates the belief of a truster that a trustee is able to
achieve a goal, execute a plan, or deliver a resource. In general, by trusting another actor in
relation to a trustum, an actor assumes that the trustum will be delivered.
Delegation of permission between two actors indicates that one actor, called the delegater,
delegates to the other actor, called the delegatee, the permission to achieve a goal, execute a
plan, or use a resource. In these cases delegation is centred on an object that is called the
delegatum. In general, delegation is used to model a formal passage in the domain. This would be
matched by the issuance of a delegation certificate such as a digital credential or a letter.
Delegation of execution between two actors indicates that a delegater delegates to a delegatee
the achievement of a goal or the execution of a plan. This would be matched by a call to an
external procedure. When the delegatum is a resource, delegation of execution means that the
delegater requests the resource from the delegatee.
Secure Trust of Permission (Execution) represents that a trust relation between two actors
involves the introduction of a security constraint that must be satisfied either by the truster, the
trustee or both for the trust relation to be valid. Secure trust relations are categorized into truster
secure trust relation, in which the truster introduces security constraints for the trust relation and
the trustee must satisfy them for the trust relation to be valid; trustee secure trust relation, in which
the trustee introduces security constraints and the truster must satisfy them; and double secure
trust relation, in which both the truster and the trustee introduce security constraints for the trust
relation that both must satisfy for the trust relation to be valid.
Secure Delegation of Permission (Execution) represents that a delegation between two actors
involves the introduction of a security constraint that must be satisfied either by the delegater, the
delegatee or both for the delegation to be valid. Secure delegations are categorized into delegater
secure delegation, in which the delegater introduces security constraints for the delegation and the
delegatee must satisfy them for the delegation to be valid; delegatee secure delegation, in which
the delegatee introduces security constraints and the delegater must satisfy them; and double
secure delegation, in which both the delegater and the delegatee introduce security constraints for
the delegation that both must satisfy for the delegation to be valid.
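As a rough illustration of the relations just defined, a trust or delegation relation can be represented as a record carrying its type (permission or execution), the two actors, the trustum or delegatum, and the security constraints that make the relation "secure". The encoding below is our own assumption, not the chapter's notation.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of Secure Tropos relations (names are assumptions).
class RelType(Enum):
    PERMISSION = "perm"
    EXECUTION = "exec"

@dataclass(frozen=True)
class Trust:
    kind: RelType
    truster: str
    trustee: str
    trustum: str             # the goal/plan/resource the trust centres on
    constraints: tuple = ()  # security constraints make the relation "secure"

@dataclass(frozen=True)
class Delegation:
    kind: RelType
    delegater: str
    delegatee: str
    delegatum: str
    constraints: tuple = ()

# Invented example: a truster secure trust of permission, where the
# truster imposes a constraint the trustee must satisfy.
t = Trust(RelType.PERMISSION, "Patient", "Hospital", "medical record",
          constraints=("share only with consent",))
print(t.truster, "trusts", t.trustee, "with", t.trustum)
```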
Moreover, the following modelling activities extend the current activities of the Tropos
methodology to allow the modelling of security and trust.
• Trust of permission modelling consists of identifying actors which trust other actors
for goals, plans, and resources, and actors which own goals, plans, and resources. In
particular, in the early requirement phase, it focuses on modelling trust relations
between social actors of the organizational setting. New trust relations are elicited and
added to the model upon the refinement activities discussed above. During late
requirements analysis, trust modelling focuses on analyzing the trust relations of the
system-to-be actor.
• Delegation of permission modelling consists of identifying actors which delegate to
other actors the permission on goals, plans, and resources. In particular, in the early
requirement phase, it focuses on modelling delegations between social actors of the
organizational setting. New delegations are elicited and added to the model upon the
refinement activities discussed above. During late requirements analysis, delegation
modelling focuses on analyzing the delegations involving the system-to-be actor.
It is worth mentioning that trust of execution modelling and delegation of execution modelling
follow the same principles as the above modelling activities. Moreover, a graphical
representation of the models obtained following the above modelling activities is given through
four different kinds of actor diagrams: permission trust model, execution trust model, functional
requirements model and trust management implementation. Essentially, the first two models
represent the trust network among the actors involved in the system, the third represents which
obligations are effectively delegated by actors and which actors are responsible for such
obligations, and the last represents which permissions are effectively delegated by actors and
which actors receive such permissions. These models use the same notation for actors, goals,
plans and resources as that used during dependency modelling.
• The security constraint modelling consists of modelling security constraints imposed
on the actors and the system, and it allows developers to perform an analysis by
introducing relationships between the security constraints or a security constraint and its
context. Security constraint modelling is divided into a number of smaller modelling
activities such as security constraint delegation, in which a delegation of a security
constraint from one actor to another is modelled, security constraint assignment, in
which the assignment of a security constraint to a goal is modelled, and security
constraint analysis, which consists of decomposing security constraints and also
identifying secure goals that security constraints might introduce to the system.
• Secure entities modelling consists of identifying secure tasks and resources that
provide means for achieving a secure goal; identifying secure goals that contribute
positively or negatively to the secure goal being analysed; and decomposing secure
goals and/or tasks into subgoals and subtasks respectively. Secure Entities modelling
is considered complementary to the security constraints modelling, and it follows the
same reasoning techniques, such as meansend, contribution and decomposition
analysis, that Tropos employs for goal and task analysis.
A graphical representation of the above modelling activities is given through actor and goal
diagrams. Essentially, the security related modelling activities are combined with the Tropos’s
other modelling activities. It is up to the designer to decide which activity should be employed at
which stage of the system development, since the main aim of these activities is not to restrict the
designer to a step-by-step development of the system-to-be, but rather to provide a framework that
allows the developer to go from a high level design to a more precise and defined version of the
system.
THE SECURE TROPOS PROCESS
As mentioned above, one of the main challenges of the integration was to revise the Tropos
process in order to accommodate the newly introduced modelling activities in a structured and
useful manner.
The previous sections introduced the security and trust concepts supported by the proposed
approach. This section focuses on the generic security process from which the models are
constructed. The overall methodological process for Secure Tropos is an iterative process in which
the above presented modelling activities are used to produce different kinds of actor and goal
diagrams. The diagrams produced in one activity are used as input for the other activities. In
general, the process begins with a set of actors, each with an associated list of goals, together with
a definition of any dependencies between them. Trust and delegation relationships are then defined
together with any security constraints that might restrict the actors.
In particular, during the early requirements analysis stage, the analysis starts with the actor
modelling activity, in which the relevant stakeholders of the system’s environment, together with
their goals, are identified and modelled. An actor diagram is produced initially, which is refined after
subsequent analysis. Initially, this diagram is extended to model the dependencies of the actors,
including the trust and delegation relationships. This latter activity will produce a more refined
version of the actor diagram in terms of trust and delegation diagrams, where trust and ownership
relationships are analyzed along with the delegations for permission and execution among the
actors. Then, a third modelling activity (Security Constraint modelling) will further enhance the
analysis by identifying and modelling the security constraints of the involved actors.
For each dependency modelled during the previous step of the process, a trust analysis takes
place by following the trust modelling activities described earlier. When a resource dependency is
concerned, the ownership of the resource must be defined together with the trust relationship.
Then, goal/plan modelling activities are used to analyse the goals/plans of each actor on the basis
of the analysis that took place during the previous activities. The resulting model identifies all the
possible goals of each actor together with the plans to satisfy these goals. Moreover, secure
entities modelling is used to further analyse all the security constraints of each actor, together with
any possible secure goals, tasks and/or resources.
It is worth mentioning that the analysis process and the application of the modelling activities are
quite iterative, meaning that various iterations of the actor diagram will take place before the final
one is produced. Moreover, each modelling activity might generate further analysis. For instance,
new actors might be discovered during the delegation modelling. This will start a new iteration of
analysis (starting from the actor modelling) aiming to preserve consistency among the produced
models. Figure 2 shows the process of the early requirements phase.
Figure 2: Early Requirements Development process
The Late Requirements Analysis stage employs the same modelling activities as the early
requirements analysis stage. The main difference is that whereas in the early requirements
analysis the environment of the system is modelled, during the late requirements analysis we
model the system-to-be. The system is introduced into the analysis as an actor, which has a
number of goals. Initially, the dependency modelling drives us to model, as subsequent activity, the
dependencies between the existing actors and the system. These dependencies allow us to
identify the system requirements. Trust, delegation and security constraints modelling result in
revised versions of the actor diagram. Then, the goals/plans and secure entities of the
system-to-be are identified and modelled with the help of goal/plan and secure entities modelling. Similarly to
the early requirements analysis stage, this is an iterative process that can require various revisions
of the dependency models and goal/plan analysis.
The Architectural design stage starts with the identification of the overall system architecture.
Selecting the system architecture consists of selecting among alternative architectural styles
using as criteria the security requirements of the system. An independent probabilistic model is
employed, which uses the measure of satisfiability proposed by Giorgini (2002), to represent
the probability that the security requirement will be satisfied. Therefore, the activity results in
contribution relationships from possible architectural styles to the probability of satisfying the
security requirements of the system. After this, the process continues by decomposing the system
into sub-components (sub-actors) and delegating goals and responsibilities to them in accordance
with the selected architecture and the dependency model resulting from the previous phase. Each
sub-actor is further analysed along its goals and plans. Figure 3 shows the resulting process.
Figure 3: The Architectural Design development process
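As a hedged sketch of this selection step, one could score each candidate architectural style by its summed contribution to the security requirements and pick the highest-scoring one. The styles, requirement names and weights below are invented, and the actual approach uses the probabilistic satisfiability measure of Giorgini (2002) rather than this plain additive score.

```python
# Illustrative selection among architectural styles by contribution to
# security requirements (invented weights; not the satisfiability model).
def best_style(contributions, requirements):
    """Pick the style whose summed contribution over the requirements is highest."""
    def score(style):
        return sum(contributions[style].get(r, 0.0) for r in requirements)
    return max(contributions, key=score)

contributions = {
    "client-server": {"privacy": 0.4, "availability": 0.8},
    "mobile agents": {"privacy": 0.7, "availability": 0.6},
}
print(best_style(contributions, ["privacy", "availability"]))  # mobile agents
```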
Detailed design stage: During the detailed design stage, the components identified in the
previous development stages are designed with the aid of Agent Unified Modeling Language
(AUML). In particular, actor capabilities and interactions taking into account the security aspects
are specified with the aid of AUML. The important consideration, from the security point of view, at
this stage is to specify the components by taking into account their secure capabilities. This is
possible by adopting AUML notation.
SECURE TROPOS CONCEPTS FORMALISATION
In order to automatically verify the correctness and consistency of functional and security
requirements, we provide a formalization, based on Datalog (Abiteboul, 1995), of the above Secure
Tropos concepts.
The formalization is based on the predicates presented in Figure 4. To make the predicates as
generic as possible, we use the first argument of the predicates as a type parameter. Thus,
delegate, delegateChain, trust, trustChain can have types t ∈ {exec,perm}; as well, confident can
have types t ∈ {satisfy,exec,owner}. For the same reason, predicates take as arguments generic
services (i.e., goals, tasks and resources).
The first set of predicates corresponds to the relations used by the requirements engineer. The
predicate requests(a,s) holds if actor a wants service s fulfilled, while provides(a,s) holds if actor a
has the capability to fulfill service s. The predicate owns(a,s) holds if actor a owns service s. The
predicate delegate(exec,a,b,s) holds if actor a delegates the execution of service s to actor b. The
predicate trust(exec,a,b,s) holds if actor a trusts that actor b fulfills service s. The predicate
delegate(perm,a,b,s) holds if actor a delegates to actor b the permission to fulfill service s. The
predicate trust(perm,a,b,s) holds if actor a trusts that actor b does not misuse service s.
General predicates:
  delegate(Type : t, Actor : a, Actor : b, Service : s)
  delegateChain(Type : t, Actor : a, Actor : b, Service : s)
  trust(Type : t, Actor : a, Actor : b, Service : s)
  trustChain(Type : t, Actor : a, Actor : b, Service : s)
  confident(Type : t, Actor : a, Service : s)
Specific for execution:
  requests(Actor : a, Service : s)
  provides(Actor : a, Service : s)
  should_do(Actor : a, Service : s)
Specific for permission:
  owns(Actor : a, Service : s)
  has_per(Actor : a, Service : s)
Goal refinement:
  goal(Service : s)
  subgoal(Service : s1, Service : s2)
  OR_subgoal(Service : s1, Service : s2)
  AND_decomp(Service : s1, Service : s2, Service : s3)
Figure 4: Secure Tropos Formal predicates
Other predicates define properties that will be used during formal analysis. The predicates
delegateChain(exec,a,b,s) and trustChain(exec,a,b,s) hold if there is a delegation and a trust chain
respectively, between actor a and actor b. The predicates delegateChain(perm,a,b,s) and
trustChain(perm,a,b,s) are the dual of their execution counterpart. The predicate should_do(a,s)
identifies actors who can directly fulfill a service. The predicate confident(exec,a,s) holds if actor a
is confident that service s will be fulfilled. The predicate has_per(a,s) holds if actor a has enough
rights to access service s. The owner is confident that the permission he has delegated will not
be misused. In other words, an owner is confident if there is no likely misuse of his permission. It
can be seen that there is an intrinsic double negation in the statement. Thus we introduce a
predicate diffident(a,s) to model this notion. At any point of delegation of permission, the delegating
agent is diffident if the delegation is made to an agent who is not trusted or if the delegatee
could be diffident himself. In this way, confident(owner,a,s) holds if owner a is confident to give the
permission on service s only to trusted actors. Finally, we have predicates for goal/task refinement
and resource decomposition.
Once the requirements engineer has drawn up the model, we are ready for the formal analysis.
A Datalog program is a set of rules of the form L :- L1,...,Ln, where L (called the head of the rule) is
a positive literal and L1,...,Ln are literals (called the body of the rule). Intuitively, the rule states that
if L1,...,Ln are true then L must be true.
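To make the rule semantics concrete, the following minimal forward-chaining evaluator computes the closure of a fact base under a set of rules; it is an illustrative sketch (with Ax3 and Ax4 hand-coded as a Python rule over tuples), not the Datalog engine used by the authors.

```python
# Minimal forward-chaining evaluation in the spirit of Datalog:
# repeatedly apply rules until no new facts are derived (fixpoint).
def infer(facts, rules):
    facts = set(facts)
    while True:
        new = {f for rule in rules for f in rule(facts)} - facts
        if not new:
            return facts
        facts |= new

def trust_rules(facts):
    # Ax3: trustChain(T,A,B,S) :- trust(T,A,B,S)
    for f in facts:
        if f[0] == "trust":
            yield ("trustChain",) + f[1:]
    # Ax4: trustChain(T,A,C,S) :- trust(T,A,B,S), trustChain(T,B,C,S)
    for f in facts:
        for g in facts:
            if (f[0] == "trust" and g[0] == "trustChain"
                    and f[1] == g[1] and f[3] == g[2] and f[4] == g[4]):
                yield ("trustChain", f[1], f[2], g[3], f[4])

# Invented facts: patient trusts clinic, clinic trusts lab.
facts = {("trust", "exec", "patient", "clinic", "store record"),
         ("trust", "exec", "clinic", "lab", "store record")}
closed = infer(facts, [trust_rules])
print(("trustChain", "exec", "patient", "lab", "store record") in closed)  # True
```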
The first batch of axioms (see Figure 5) deals with delegation and trust: Ax1 and Ax2 are used
to build delegation chains, while Ax3 and Ax4 build trust chains. These axioms hold for both
execution and permission. Ax5 and Ax6 propagate trust chains with respect to goal refinement.
According to Ax5, trust of execution flows top-down with respect to service decomposition, while
according to Ax6, trust of permission flows bottom-up with respect to service decomposition.
Axioms for delegation and trust
Ax1 delegateChain(T,A,B,S) :- delegate(T,A,B,S)
Ax2 delegateChain(T,A,C,S) :- delegate(T,A,B,S), delegateChain(T,B,C,S)
Ax3 trustChain(T,A,B,S) :- trust(T,A,B,S)
Ax4 trustChain(T,A,C,S) :- trust(T,A,B,S), trustChain(T,B,C,S)
Ax5 trustChain(exec,A,B,S') :- subgoal(S',S), trustChain(exec,A,B,S)
Ax6 trustChain(perm,A,B,S') :- subgoal(S,S'), trustChain(perm,A,B,S)
Axioms for execution
Ax7 should_do(A,S) :- delegateChain(exec,B,A,S), provides(A,S)
Ax8 should_do(A,S) :- requests(A,S), provides(A,S)
Ax9 confident(satisfy,A,S) :- should_do(A,S)
Ax10 confident(satisfy,A,S) :- delegateChain(exec,A,B,S), trustChain(exec,A,B,S), confident(satisfy,B,S)
Ax11 confident(satisfy,A,S) :- OR_subgoal(S',S), confident(satisfy,A,S')
Ax12 confident(satisfy,A,S) :- AND_decomp(S,S',S''), confident(satisfy,A,S'), confident(satisfy,A,S'')
Axioms for permission
Ax13 has_per(A,S) :- owns(A,S)
Ax14 has_per(A,S) :- delegateChain(perm,B,A,S), has_per(B,S)
Ax15 has_per(A,S') :- subgoal(S',S), has_per(A,S)
Ax16 confident(owner,A,S) :- owns(A,S), not diffident(A,S)
Ax17 diffident(A,S) :- delegateChain(perm,A,B,S), diffident(B,S)
Ax18 diffident(A,S) :- delegateChain(perm,A,B,S), not trustChain(perm,A,B,S)
Ax19 diffident(A,S) :- subgoal(S',S), diffident(A,S')
Axioms for execution and permission
Ax20 confident(exec,A,S) :- should_do(A,S), has_per(A,S)
Ax21 confident(exec,A,S) :- delegateChain(exec,A,B,S), trustChain(exec,A,B,S), confident(exec,B,S)
Ax22 confident(exec,A,S) :- OR_subgoal(S',S), confident(exec,A,S')
Ax23 confident(exec,A,S) :- AND_decomp(S,S',S''), confident(exec,A,S'), confident(exec,A,S'')
Figure 5: Secure Tropos axioms
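As an illustration, the chain axioms Ax1-Ax4 amount to a transitive closure over the delegation (or trust) relation. A minimal Python sketch of this closure, using illustrative actor and service names rather than a real Datalog engine, might look as follows:

```python
# Sketch: Ax1-Ax4 as a transitive closure over delegation/trust edges.
# Facts are (type, actor_a, actor_b, service) tuples; "type" is "exec" or "perm".

def chains(edges):
    """Close a set of base relations: Ax1/Ax3 lift facts into chains,
    Ax2/Ax4 extend chains through intermediate actors."""
    closure = set(edges)  # Ax1 / Ax3: every direct edge is a chain
    changed = True
    while changed:
        changed = False
        for (t1, a, b, s1) in list(closure):
            for (t2, b2, c, s2) in list(closure):
                # Ax2 / Ax4: A->B and B->...->C give A->...->C for the same service
                if b == b2 and t1 == t2 and s1 == s2:
                    new = (t1, a, c, s1)
                    if new not in closure:
                        closure.add(new)
                        changed = True
    return closure

# Illustrative two-step delegation (actor/service names are ours, not from the model):
delegations = {("exec", "DoH", "GP", "ProvideHealthServices"),
               ("exec", "GP", "Nurse", "ProvideHealthServices")}
delegate_chain = chains(delegations)
# The derived two-step chain DoH -> Nurse appears in the closure:
assert ("exec", "DoH", "Nurse", "ProvideHealthServices") in delegate_chain
```

The same function serves for trust chains, since Ax3-Ax4 have exactly the shape of Ax1-Ax2.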
The second batch of axioms is specific to execution. Ax7 and Ax8 state that an actor has to
execute a service if he provides it and either some actor delegates the service to him or he
himself aims for the service. Ax9-12 capture the notion of confidence of satisfaction. An actor is
confident that a service will be satisfied if he knows that all delegations have been done to trusted
actors. Axioms Ax11 and Ax12 define how confidence is propagated upwards along goal
refinement. Next we present the axioms specific to permission. The owner of a service has full
authority concerning access and disposition of it. Thus, Ax13 states that if an actor owns a service,
he has permission on it. Ax14 states that if an actor has permission on a service and delegates it to
another actor, the delegatee has permission on the service. The notions of confidence and
diffidence are captured by axioms Ax15-19. The last batch of axioms combines execution with
permission. Ax20-23 define the notion of confidence of execution. Ax20 and Ax21 state that an
actor is confident if he knows
that all delegations have been done to trusted actors and that the actors who will ultimately execute
the service have permission to do so. Ax22-23 deal with goal refinement.
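To make the permission axioms concrete, the following Python sketch derives has_per via ownership (Ax13), permission chains (Ax14), and subgoal propagation (Ax15). The actor and service names are illustrative, and the recursion is a simplification of a proper Datalog evaluation:

```python
# Sketch of Ax13-Ax15: an actor has permission on a service if he owns it (Ax13),
# if a permission chain reaches him from someone with permission (Ax14), or if the
# service is a subgoal of one he already has permission on (Ax15).

def has_per(actor, service, owns, perm_chains, subgoals, _seen=None):
    _seen = _seen or set()
    if (actor, service) in _seen:      # crude cycle guard for the sketch
        return False
    _seen.add((actor, service))
    if (actor, service) in owns:                        # Ax13
        return True
    for (b, a, s) in perm_chains:                       # Ax14: delegateChain(perm,B,A,S)
        if a == actor and s == service and has_per(b, s, owns, perm_chains, subgoals, _seen):
            return True
    for (sub, sup) in subgoals:                         # Ax15: permission flows to subgoals
        if sub == service and has_per(actor, sup, owns, perm_chains, subgoals, _seen):
            return True
    return False

# Illustrative facts: DoH owns ClinicalData, PMI is a part of it,
# and DoH delegates permission on PMI to the Health Research Agency (HRA).
owns = {("DoH", "ClinicalData")}
subgoals = {("PMI", "ClinicalData")}
perm_chains = {("DoH", "HRA", "PMI")}
assert has_per("HRA", "PMI", owns, perm_chains, subgoals)   # via Ax15 then Ax14
```

Note that HRA does not thereby gain permission on ClinicalData as a whole: Ax15 only pushes permission downwards to subgoals, never upwards.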
Properties (see Figure 6) are different from axioms: they are constraints that must be checked. If
the set of constraints cannot all be simultaneously satisfied, the system is inconsistent and,
consequently, not secure. We use the notation A ⇒? B to mean that one must check that each time
A holds, it is desirable that B also holds. In Datalog, properties can be represented as the
constraint :- A, not B.
Pro1 delegateChain(T,A,B,S) ⇒? trustChain(T,A,B,S)
Pro2 requests(A,S) ⇒? confident(satisfy,A,S)
Pro3 owns(A,S) ⇒? confident(owner,A,S)
Pro4 requests(A,S) ⇒? confident(exec,A,S)
Pro5 delegateChain(perm,A,B,S) ⇒? has_per(A,S)
Pro6 should_do(A,S) ⇒? not delegateChain(exec,A,B,S)
Pro7 owns(A,S) ⇒? not delegateChain(perm,B,A,S), A ≠ B
Figure 6: Secure Tropos properties
Pro1 states that if an actor delegates a service to another actor, the delegator should trust the
delegatee. Pro2 states that a requester wants to be confident that the desired service will be
satisfied. Pro3 states that the owner of a service has to be confident that permission on his
services is delegated only to trusted actors. Pro4 states that the requester has to be confident to see the
service fulfilled. Pro5 states that an actor must be entitled to access a service in order to
delegate it. Pro6 states that if an actor provides a service and if either some actor delegates
this service to him or he himself requests the service, then he has to execute the service
without further delegation. Finally, Pro7 states that a service cannot be delegated back to its
owner.
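As a sketch of how such properties can be checked, the following Python fragment flags Pro1 violations, i.e. delegation chains that are not backed by a corresponding trust chain. The facts shown are illustrative:

```python
# Sketch: Pro1 as a check over the model. A violation means the model is not
# secure (the property fails), rather than a contradiction in the Datalog program.

def check_pro1(delegate_chains, trust_chains):
    """Return the delegations not backed by trust (Pro1 violations)."""
    return [d for d in delegate_chains if d not in trust_chains]

# Illustrative model: one delegation, no trust modelled yet.
delegate_chains = {("exec", "Patient", "DoH", "ProvideHealthInfrastructure")}
trust_chains = set()
violations = check_pro1(delegate_chains, trust_chains)
assert violations == [("exec", "Patient", "DoH", "ProvideHealthInfrastructure")]
```

Adding the matching trust fact to trust_chains would empty the violation list, restoring Pro1.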
On the other hand, security constraints are modeled by logical formulas. In particular, they can
be implemented as rules and/or integrity constraints. Essentially, secure relations are defined as
Datalog rules where the body represents the constraint and the head represents the relation.
However, rules alone may not be sufficient, so we also introduce integrity constraints that should be
verified in the model. Essentially, such constraints have the form of properties. Since security
constraints may require specific domain predicates, we will present them during the analysis of
the case study.
One of the important issues is the verification of the security requirements. For this reason, we
have developed the Secure Tropos tool (STTool) (Giorgini, 2005c). STTool is a CASE tool
developed for the design and verification of functional and security requirements, tailored to support
the Secure Tropos methodology (see http://www.troposproject.org for more details).
A CASE STUDY
To better demonstrate the Secure Tropos process, we employ a case study from the health care
sector1. Our case study includes the following five (5) actors. A Patient actor represents an
individual who is in need of medical attention, care or treatment. The Department of Health
represents the government department responsible for improving the national health services. The
Health Research Agency represents an agency that performs medical research. The General
Practitioner represents a physician whose practice is not oriented to a specific medical specialty
but instead covers a variety of medical problems in patients. The Nurse represents a person
educated to support General Practitioners in providing primary care to patients. (Due to lack of
space, the case study illustrates only the early and late requirements stages of the Secure Tropos
methodology.)
When the actors together with their goals have been identified, any dependencies between
these actors are modelled with the aid of the dependency modelling activity. In our case study, the
main goal of the Patient actor is to Get Well. In order to achieve this goal, the Patient depends on
the Department of Health to Provide Health Infrastructure. In turn, the Department of Health cannot
achieve its goal Improve National Health Services without the help of the General Practitioner.
Therefore, the Department of Health depends on the General Practitioner to satisfy the goal
Provide Health Services. Similarly, dependencies for the other actors are identified as illustrated in
Figure 7. At this point we are also able to perform an initial analysis of the trust relationships
between the actors as they result from their dependencies. For instance, the resource dependency
Clinical Data between the Health Research Agency and the Department of Health can be analysed.
The resource Clinical Data is owned by the Department of Health actor. However, this data is an
important element for the Health Research Agency to do some research, which is vital for the
Department of Health in order to understand the problems faced by its patients and provide the
appropriate health infrastructure.
Figure 7: Initial Dependency Model
Therefore, the Department of Health trusts the Health Research Agency neither to abuse nor to
disclose the provided Clinical Data. Similarly, the General Practitioner not only depends on the
Nurse to Provide Primary Care, but he/she trusts the Nurse to achieve this goal. Figure 8 shows
the diagram derived through trust modelling and Figure 9 shows the Datalog specification
representing this diagram.
Figure 8: Trust Analysis
trust(exec,Patient,DepartmentOfHealth,ProvideHealthInfrastructure)
trust(exec,Nurse,Patient,SupportPatientTreatment)
trust(exec,GeneralPractitioner,Nurse,ProvidePrimaryCare)
trust(exec,DepartmentOfHealth,GeneralPractitioner,ProvideHealthServices)
trust(perm,DepartmentOfHealth,HealthResearchAgency,ClinicalData)
owns(DepartmentOfHealth,ClinicalData)
Figure 9: Datalog Specification derived by trust analysis
On the other hand, delegation relationships can also be analysed. For example, the Department
of Health not only depends on the General Practitioner to achieve the goal Provide Health Services
but also delegates the responsibility to execute this goal to the General Practitioner actor. Similarly,
for the Patient to achieve his/her Get Well goal, appropriate infrastructure must be in place.
However, the Patient actor cannot on their own Provide Health Infrastructure. Therefore the Patient
actor delegates the responsibility for the achievement of this goal to the Department of Health.
Moreover, security constraint modelling takes place. For our case study, an environment security
constraint (Mouratidis, 2004) is imposed to the delegation of Clinical Data between the Department
of Health and the Health Research Agency due to legislation for the anonymity of clinical data. This
means that the Department of Health cannot treat the data as they like, and there are restrictions
imposed by government and human right laws to the Department of Health to preserve the
anonymity of data. Implementing these constraints in our formal framework requires decomposing
Clinical Data into its components. For our purpose, it is sufficient to distinguish between Patient
Personal Identifiable Information (PPII) (such as name, address etc.) and Patient Medical
Information (PMI) (such as clinical pathologies, medical treatments, etc.). Hence, the Department
of Health delegates permission to Health Research Agency only on PMI. This relation can be
implemented as follows:
delegate(perm,DepartmentOfHealth,HealthResearchAgency,PMI)
Furthermore, we need to introduce an integrity constraint in order to verify that Health Research
Agency is not entitled to access Patient Personal Identifiable Information. In fact, there may be
another actor that erroneously delegates permission on PPII to Health Research Agency. Such
integrity constraints can be defined as follows.
:- has_per(HealthResearchAgency,PPII)
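A sketch of how this integrity constraint catches an erroneous delegation follows; the helper function and the extra delegation fact are hypothetical, added purely for illustration:

```python
# Sketch: deriving has_per and then testing the integrity constraint
# ":- has_per(HealthResearchAgency, PPII)", which rejects any model in which
# the agency obtains permission on PPII (e.g. via an erroneous delegation).

def permissions(owns, delegations):
    """Derive has_per from ownership (Ax13) and permission delegations (Ax14)."""
    per = set(owns)
    changed = True
    while changed:
        changed = False
        for (a, b, s) in delegations:  # a delegates permission on s to b
            if (a, s) in per and (b, s) not in per:
                per.add((b, s))
                changed = True
    return per

# Assumed facts: the Department of Health owns both parts of Clinical Data,
# and an erroneous delegation of PPII has slipped into the model.
owns = {("DepartmentOfHealth", "PMI"), ("DepartmentOfHealth", "PPII")}
delegations = {("DepartmentOfHealth", "HealthResearchAgency", "PMI"),
               ("DepartmentOfHealth", "HealthResearchAgency", "PPII")}  # erroneous
per = permissions(owns, delegations)
# The integrity constraint fires, flagging the model as invalid:
assert ("HealthResearchAgency", "PPII") in per
```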
Figure 10 illustrates the diagram derived through delegation modelling and Figure 11 shows the
Datalog specification representing such diagram.
Figure 10: Delegation and Security constraint analysis
delegate(exec,Patient,DepartmentOfHealth,ProvideHealthInfrastructure)
delegate(exec,Nurse,Patient,SupportPatientTreatment)
delegate(exec,GeneralPractitioner,Nurse,ProvidePrimaryCare)
delegate(exec,DepartmentOfHealth,GeneralPractitioner,ProvideHealthServices)
Figure 11: Datalog specification derived from delegation analysis
However, at this point the analysis does not provide an accurate understanding of the (secure)
trust/delegation relationships among actors. This is due to the fact that the modelled social
relations are based on high level goals for the actors, which need to be further defined. Therefore,
the next step involves the goal/plan modelling in which each actor is internally analysed. Due to
lack of space we do not present the complete internal analysis of the actors of our case study.
Instead, we indicate how such an analysis results in the refinement of the (secure) delegation/trust
relationships. This happens because the internal analysis of each actor of the initial actor
diagram results in a more precise definition of their goals, which in turn results in a more
precise identification of the dependum of the different dependencies that an actor
might be involved in. For instance, the internal analysis of the Nurse actor indicates that the Nurse
does not actually depend on the Patient for the achievement of the Support Patient Treatment goal,
but rather on one aspect necessary for the satisfaction of this goal, which is to obtain the patient’s
personal information. Therefore, the dependency between the Nurse and the Patient has been
revised to indicate the output of the Nurse’s internal analysis. Figure 12 illustrates the refined
dependency model resulting from the internal analysis of the actors of the case study.
Moreover, the internal analysis helps developers to identify new dependencies. For instance, the
patient’s analysis indicated that the Patient actor depends on the General Practitioner for providing
them with a care plan that they need to follow in order to achieve their main goal Get Well.
Figure 12: Refined Dependency Diagram
Therefore, a resource dependency is introduced between the Patient and the General
Practitioner to indicate this. In addition, the internal analysis of existing actors helps to identify new
actors. For example, the internal analysis of the General Practitioner indicated that this actor
depends on a Medical Administrator to provide the Patient History resource. In turn, the internal
analysis of the newly identified Medical Administrator actor indicated that he/she depends on the
Nurse to satisfy the goal Update Patient Information.
Moreover, the refined actor diagram allows us to perform a more detailed analysis of the trust
relationships of the actors of our case study as shown in Figure 13. For example, the Patient owns
its personal information. However, without providing such information to the Nurse, the health
needs of the patient cannot accurately be identified. Therefore, the Patient trusts that the Nurse will
use the information only for medical reasons. Similarly, other trust relationships can be identified
and analysed. One that is worth mentioning is the trust relationship between the General
Practitioner and the Medical Administrator for a resource (Patient History) that neither of them
owns.
Figure 13: Refined Trust relationship
Similarly to the trust analysis, we can perform a delegation analysis as shown in Figure 14. The
Medical Administrator not only trusts the Nurse to update the patient information but she/he
delegates responsibility of fulfilling this goal to the Nurse. Similarly, the Patient not only trusts the
Department of Health to provide health infrastructure but he/she delegates the achievement of this
goal to the Department of Health.
Figure 14: Refined Delegation relationships
We should note that the delegation analysis at this stage plays an important role for the
identification of security constraints. For example, the Patient not only trusts the Nurse with their
personal information but they also delegate permission to the Nurse to provide their information
when necessary for their health care as shown in Figure 14. In particular, a security constraint
(Keep Information Confidential) is imposed by the Patient to General Practitioner for Patient
History. In addition, the Patient actor imposes a constraint to the General Practitioner with respect
to the Care Plan. This security constraint requires the General Practitioner to request the
patient’s consent whenever the patient care plan needs to be shared with anyone. It is worth
mentioning at this point that the introduction of this constraint has resulted in the identification of
new dependencies. Since the General Practitioner needs to satisfy the security constraint he/she
depends on the Medical Administrator to achieve the Obtain Patient Consent task. In turn, the
Medical administrator depends on the Patient for the Consent resource.
Figure 15: Refined Security Constraints
On the other hand, the formal analysis of the delegation relationships allows us to further define
the security constraints. From the model we can see that every agent who plays the role Nurse is
entitled to access patient information. Ideally, we would authorize only the nurses assigned to the
patient to access patient data. Therefore, the Medical Administrator restricts nurses to accessing
information only about the patients assigned to them, as shown in Figure 16. (Note that in this
figure we have only modelled the security constraints together with the actors affected by them,
i.e. those that either impose them or have them imposed on them.) Further, the patient requires
their personal information to remain confidential and therefore he/she imposes an additional
security constraint on the Nurse actor. These constraints can be formally implemented as follows.
delegate(perm,Pat,Nurse,PerData) :- isNurseOf(Nurse,Pat), confident(owner,Pat,PerData).
where predicate isNurseOf(a,b) holds if a is the nurse of b.
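As a sketch, this guarded delegation rule can be read procedurally as follows; the actor names and the data structures are illustrative:

```python
# Sketch: the rule "delegate(perm,Pat,Nurse,PerData) :- isNurseOf(Nurse,Pat),
# confident(owner,Pat,PerData)" as a guarded delegation. Permission on personal
# data is granted only to the patient's own nurse, and only while the patient
# is confident as owner of the data.

def delegate_personal_data(patient, nurse, is_nurse_of, confident_owner):
    """Return the delegation fact if both guards hold, None otherwise."""
    if (nurse, patient) in is_nurse_of and patient in confident_owner:
        return ("perm", patient, nurse, "PerData")
    return None

is_nurse_of = {("Alice", "Bob")}   # Alice is Bob's nurse (illustrative)
confident_owner = {"Bob"}          # confident(owner, Bob, PerData) holds
assert delegate_personal_data("Bob", "Alice", is_nurse_of, confident_owner) is not None
assert delegate_personal_data("Carol", "Alice", is_nurse_of, confident_owner) is None
```

The second call fails because Alice is not Carol's nurse, mirroring how the Datalog body blocks the delegation when isNurseOf does not hold.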
Late requirements analysis
During the late requirements analysis, the systemtobe is introduced as another actor on the
existing models. This results in redefining some of the dependencies and delegating some goals,
tasks, and resources from existing actors to the newly introduced actor (the system). For instance,
although during the early requirement analysis we have identified that the General Practitioner
actor depends on the Medical Administrator for the Patient History resource, the internal analysis of
the General Practitioner has indicated that the delegation of the responsibility of satisfying this
dependency to an electronic system will result in the General Practitioner working more efficiently,
with less effort and faster. Therefore, the Patient History resource dependency has been
reassigned to the eHealth system actor. Similar conclusions were also drawn for other dependencies,
which subsequently were assigned to the eHealth system actor as shown in Figure 16.
Figure 16: The revised dependency diagram including the system
In addition, the assignment of some of the dependencies to the system actor has resulted in
the identification of new dependencies that the system actor needs in order to satisfy the
dependencies assigned to it by the existing actors. For example, the assignment of the
Patient History and Patient Record dependencies to the eHealth System has introduced the need
for the system actor to satisfy the Enter Patient Information goal. However, for achieving this goal
the eHealth System actor depends on the Medical Administrator to input the information.
The introduction of the eHealth System and the revision of the dependency diagram also
prompts for a redefinition of the trust relationships of the actors to reflect the modifications on the
dependencies due to the introduction of the system actor. For example, since the Patient History
dependency has been assigned to the eHealth system, the trust relationship for this dependency
must be redefined. In fact, the eHealth System now should trust that the General Practitioner does
not misuse the Patient History resource.
Similarly, the redefinition of the dependencies indicated that the General Practitioner now
depends on the Medical Administrator for managing the patient records. Therefore, a trust
relationship is identified between the General Practitioner and the Medical Administrator for the
Manage Patient Records goal as shown in Figure 17.
Figure 17: The trust relationships including the system
At this point, we also have to study the effects of the system introduction to the delegation
relationships. For example, although initially the Medical Administrator had delegated permission
on the Patient history resource to the General Practitioner, the assignment of the Patient History
resource dependency to the eHealth System leads to the redefinition of the delegation relationship.
On the other hand, delegations for the newly introduced dependencies are identified. For instance,
the eHealth System delegates permission on the Patient Record resource to the Medical
Administrator, as shown in Figure 18.
Figure 18: The delegation relationships including the system
The redefinition of the trust and delegation relationships results in the need to redefine some of
the security constraints of the actors. For example, the delegation of the Patient History resource to
the General Practitioner prompts the eHealth System to impose a security constraint on the
General Practitioner to Keep Information Confidential. This constraint has been imposed by the
eHealth System on the General Practitioner to make sure the delegation of the resource does not
violate the security constraint initially imposed on the eHealth System by the Patient. Moreover,
the Patient Consent security constraint has been imposed on the system. To satisfy it, a new
dependency has been identified between the General Practitioner and the eHealth System to Verify
Consent. Figure 19 illustrates a revised model of the security constraints between the actors.
Figure 19: Security Constraints including the system
After drawing the model representing the system, designers may want to check whether the
model complies with the security requirements. For this reason, the STTool can be used to verify the
consistency and correctness of the model. Regarding our case study, an initial verification process
using the STTool reported three inconsistencies that affect the specification representing the
output of our modelling process. In particular, these inconsistencies refer to the failure of Pro5. The
first inconsistency arises because the Department of Health delegates permission on Health
Infrastructure to the General Practitioner, but the department itself is not entitled to access the infrastructure.
It is not clear from the requirements if the owner of the infrastructure is the department or another
actor. If the health infrastructure belongs to Department of Health, this ownership relation should be
added to the model. Otherwise, if the health infrastructure belongs to another actor, this actor
should delegate the permission to the Department of Health. The other two inconsistencies refer to
the eHealth System. Essentially, this actor delegates the permission on patient history and patient
record to the General Practitioner and the Medical Administrator, respectively. Unfortunately, the
requirements do not specify how the eHealth System is entitled to access patient data.
The above inconsistencies are mainly the result of modelling mistakes of the developers, and as
such they were easily corrected. However, inconsistencies might also be present due to lack of
accurate requirements. When the analysis spots an inconsistency, system designers should
interact with stakeholders in order to define more accurate requirements, if needed, and then
revise the model representing the system. This process should continue until there are no more
inconsistencies in the model.
RELATED WORK
Our work is not the only one considering security as part of the development process. In fact,
our work is related to work widely available in the literature, such as (Liu, 2002; Jürjens, 2004;
McDermott, 1999; Sindre, 2005), as well as to the approaches presented in the other chapters of
this book. However, most of these approaches only guide the way security can be handled within a
certain stage of the software development process. For example, the works by Liu et al. (2002),
Haley et al. (chapter 2), and Yu et al. (chapter 4) focus on the early development stages,
whereas the approaches by Jürjens (2004), by Houmb et al. (chapter 9) and by Koch et al.
(chapter 10) operate at a fairly low level and are suited to a more operational analysis.
On the other hand, Castelfranchi and Falcone (1998) argue about the importance of considering
trust and provide a definition of trust in agent systems both as a mental state and a social attitude.
Moreover, Jøsang
(1997) provides a formal model for reasoning about trust in information security. In this approach,
trust is a belief and the presented model represents beliefs and a set of operations to combine
beliefs. Trust Management is an approach to manage distributed access control combining
policies, digital signatures, and logical deduction. Trust Management systems were developed as
an answer to the inadequacy of traditional authorization mechanisms such as PGP and X.509.
Over the last ten years, a number of frameworks have been developed for addressing this issue.
PolicyMaker (Blaze, 1996) and KeyNote (Blaze, 1999) are essentially query engines with the aim to
verify whether a required action complies with system policies. DeTreville (2002) introduces Binder, a
language based on well-known logic-based languages extended with the ability to model security
statements in distributed systems. Another solution based on logic programming is the RT
framework (Li, 2002), a family of Role-based Trust-management languages designed for
representing policies and credentials in distributed systems. RT takes the notion of role and the
assignment of permissions to roles from RBAC (Sandhu, 1996), and the management of distributed
authorities through the use of credentials from the Trust Management approach. Similarly, Freudenthal
(2002) introduce Distributed Role Based Access Control (dRBAC), a decentralized trust
management system, which provides an access control mechanism for managing distributed
authorities. However, there is a gap between these frameworks and the requirements analysis
process.
A first proposal for introducing trust concerns during the system development process is given
by Yu and Liu (2001), who model trust using the i* framework. Their approach models intentional
dependency relationships among strategic actors and their rationales. As actors depend on each
other for goals to be achieved, tasks to be performed, and resources to be furnished, the trust
relationships among these actors are considered using the concept of softgoal.
All the above mentioned approaches differentiate between security and trust. In other words,
these approaches mostly consider security and trust as two separate concepts and do not provide
a complete framework to consider both of them at the same time. As argued earlier in this
chapter, security and all its related concepts, such as trust, should be considered under a single
software engineering methodology. Only then will we be able to fully integrate security
considerations into the development stages of software systems, resulting in the development of
more secure systems.
CONCLUSIONS
This chapter has introduced the integration of two prominent Tropos-based approaches, which deal
with security and trust issues in the development of information systems. The result is the first ever
methodology to consider both security and trust as part of the software development process.
During the course of our work, we have identified a number of important advantages that resulted
as part of the integration. These are:
• The trust and delegation relationships are further analysed in terms of security
constraints, which are then converted to operationalised goals that are easy to
implement. In other words, an initially abstract analysis can be converted to a more
precise and detailed specification.
• The identification of security constraints is more precise, being based on a well-defined
analysis of the trust and the delegations demonstrated by one actor to another.
• Trust, delegation and security can be modelled simultaneously, with one analysis
complementing the other while maintaining the separation of each concept.
• Security constraints are imposed either by the environment or by the actor that owns
the dependum. Therefore, the ownership analysis performed as part of the trust
modelling allows developers to identify the security constraints more easily.
Our work is not complete. We are currently working on applying our approach to more case
studies, in order to further validate our framework, as well as enhancing our methodology by
introducing other concepts related to security.
REFERENCES
Abiteboul, S., Hull, R., and Vianu, V. (1995). Foundations of Databases. Addison-Wesley.
Anderson, R. (2001). Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley Computer Publishing.
Bauer, B., Müller, J. P., and Odell, J. (2001). Agent UML: A Formalism for Specifying Multiagent Software Systems. International Journal of Software Engineering and Knowledge Engineering, 11(3):207-230.
Blaze, M., Feigenbaum, J., Ioannidis, J., and Keromytis, A. D. (1999). The Role of Trust Management in Distributed Systems Security. Secure Internet Programming, 1603:185-210.
Blaze, M., Feigenbaum, J., and Lacy, J. (1996). Decentralized Trust Management. In Proceedings of 1996 IEEE Symposium on Security and Privacy, pages 164-173. IEEE Computer Society Press.
Bresciani, P., Giorgini, P., Giunchiglia, F., Mylopoulos, J., and Perini, A. (2004a). TROPOS: An Agent-Oriented Software Development Methodology. Journal of Autonomous Agents and Multi-Agent Systems, 8(3):203-236.
Bresciani, P., Giorgini, P., Mouratidis, H., and Manson, G. A. (2004b). Multiagent Systems and Security Requirements Analysis. In Software Engineering for Multi-Agent Systems II, Research Issues and Practical Applications [the book is a result of SELMAS 2003], LNCS 2940, pages 35-48. Springer-Verlag.
Castelfranchi, C. and Falcone, R. (1998). Principles of trust for MAS: Cognitive anatomy, social importance and quantification. In Proceedings of 3rd International Conference on Multi-Agent Systems, pages 72-79. IEEE Computer Society Press.
DeTreville, J. (2002). Binder, a logic-based security language. In Proceedings of 2002 IEEE Symposium on Security and Privacy, pages 95-103. IEEE Computer Society Press.
Freudenthal, E., Pesin, T., Port, L., Keenan, E., and Karamcheti, V. (2002). dRBAC: distributed role-based access control for dynamic coalition environments. In Proceedings of the 22nd International Conference on Distributed Computing Systems, pages 411-420. IEEE Computer Society Press.
Giorgini, P., Massacci, F., Mylopoulos, J., and Zannone, N. (2004). Requirements Engineering meets Trust Management: Model, Methodology, and Reasoning. In Proceedings of the Second International Conference on Trust Management, LNCS 2995, pages 176-190. Springer-Verlag.
Giorgini, P., Massacci, F., Mylopoulos, J., and Zannone, N. (2005a). Modeling Security Requirements Through Ownership, Permission and Delegation. In Proceedings of the 13th IEEE International Requirements Engineering Conference. IEEE Computer Society Press.
Giorgini, P., Massacci, F., Mylopoulos, J., and Zannone, N. (2005b). Modelling Social and Individual Trust in Requirements Engineering Methodologies. In Proceedings of the Third International Conference on Trust Management, LNCS 3477, pages 161-176. Springer-Verlag.
Giorgini, P., Massacci, F., Mylopoulos, J., and Zannone, N. (2005c). STTool: A CASE Tool for Security Requirements Engineering. In Proceedings of the 13th IEEE International Requirements Engineering Conference. IEEE Computer Society Press.
Giorgini, P., Massacci, F., and Zannone, N. (2005d). Security and Trust Requirements Engineering. In Foundations of Security Analysis and Design III Tutorial Lectures, LNCS 3655, pages 237-272. Springer-Verlag.
Giorgini, P., Mylopoulos, J., Nicchiarelli, E., and Sebastiani, R. (2002). Reasoning with Goal Models. In Proceedings of the 21st International Conference on Conceptual Modeling, pages 167-181.
Jøsang, A., Laenen, F. V., Knapskog, S. J., and Vandewalle, J. (1997). How to Trust Systems. Computers & Security, 16(3):210.
Jürjens, J. (2004). Secure Systems Development with UML. Springer-Verlag.
Li, N., Mitchell, J. C., and Winsborough, W. H. (2002). Design of A Role-based Trust-management Framework. In Proceedings of 2002 IEEE Symposium on Security and Privacy, pages 114-130. IEEE Computer Society Press.
Liu, L., Yu, E., and Mylopoulos, J. (2002). Analyzing Security Requirements as Relationships Among Strategic Actors. In Proceedings of the 2nd Symposium on Requirements Engineering for Information Security.
McDermott, J. and Fox, C. (1999). Using Abuse Case Models for Security Requirements Analysis. In Proceedings of 15th Annual Computer Security Applications Conference, pages 55-66. IEEE Computer Society Press.
Mouratidis, H. (2004). A Security Oriented Approach in the Development of Multiagent Systems: Applied to the Management of the Health and Social Care Needs of Older People in England. PhD thesis, University of Sheffield.
Mouratidis, H., Giorgini, P., and Manson, G. (2003). Modelling secure multiagent systems. In Proceedings of 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, pages 859-866. ACM Press.
Mouratidis, H., Giorgini, P., and Manson, G. (2005). When Security meets Software Engineering: A case of modelling secure information systems. Information Systems, 30(8):609-629.
Samarati, P. and di Vimercati, S. D. C. (2001). Access Control: Policies, Models, and Mechanisms. In Foundations of Security Analysis and Design II, FOSAD 2001/2002 Tutorial Lectures, LNCS 2946, pages 137-196. Springer-Verlag.
Sandhu, R. S., Coyne, E. J., Feinstein, H. L., and Youman, C. E. (1996). Role-based access control models. IEEE Computer, 29(2):38-47.
Sindre, G. and Opdahl, A. L. (2005). Eliciting security requirements with misuse cases. Requirements Engineering, 10(1):34-44.
Yu, E. S. K. (1996). Modelling strategic relationships for process reengineering. PhD thesis, University of Toronto.
Yu, E. S. K. and Liu, L. (2001). Modelling Trust for System Design Using the i* Strategic Actors Framework. In Proceedings of the Workshop on Deception, Fraud, and Trust in Agent Societies held during the Autonomous Agents Conference, LNCS 2246, pages 175-194. Springer-Verlag.