
Pervasive and Mobile Computing 4 (2008) 697–718


Efficient profile aggregation and policy evaluation in a middleware for adaptive mobile applications

Claudio Bettini, Linda Pareschi, Daniele Riboni ∗

Department of Computer Science and Communication, University of Milan, via Comelico 39, I-20135 Milan, Italy

∗ Corresponding author. E-mail addresses: [email protected] (C. Bettini), [email protected] (L. Pareschi), [email protected] (D. Riboni).

1574-1192/$ – see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.pmcj.2008.04.002

Article info

Article history: Received 16 October 2006; Received in revised form 27 February 2008; Accepted 7 April 2008; Available online 22 May 2008

Keywords: Context-awareness; Adaptation; Rule-based reasoning; Mobile middleware

Abstract

There is a large consensus on the need for a middleware to efficiently support adaptation in pervasive and mobile computing. Advanced forms of adaptation require the aggregation of context data and the evaluation of policy rules that are typically provided by multiple sources. This paper addresses the problem of designing the reasoning core of a middleware that supports these tasks, while guaranteeing very low response times as required by mobile applications. Technically, the paper presents strategies to deal with conflicting rules, algorithms that implement the strategies, and algorithms that detect and solve potential rule cycles. A detailed experimental analysis supports the theoretical results and shows the applicability of the resulting middleware in large-scale applications.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Adaptation tailored to pervasive and mobile computing (PMC) can be considered to still be in its infancy. Some services offer general adaptiveness based on user profiles stored and managed at the service provider, but are not really tailored to mobile applications. Looking at the commercial scenario, many Internet services are limited to transcoding procedures that are activated upon detection of specific values in HTTP request headers. The few location-based services currently available typically limit their adaptiveness to the user location.

In the last few years, the research community has investigated the notion of context, as well as the formalisms to represent and reason about context, as a key issue for enabling high adaptivity. In a mobile computing setting, context is not limited to user location and device capabilities, but includes user personal data and preferences, device status, available bandwidth, local time, speed, the current activity the user is involved in, as well as past user interactions with the service; this is to name just a few of the context parameters that may be usefully exploited by adaptive mobile applications. While some of these data may be directly obtained by sensors or devices, others need to be derived by some form of reasoning. Reasoning is also required to model the dynamics of some of the context parameters. For example, changes in the available bandwidth (due for example to a switch to a different mobile device and infrastructure) should correspond to a change in the value of the attributes specifying the desired bitrate for streaming video services. Hence, with respect to standard computing paradigms, pervasive and mobile computing requires a more sophisticated support for dealing with a particularly articulated and dynamic notion of context. This support can be naturally provided by a middleware architecture. The focus of our work is to overcome some of the limitations of current middleware, by proposing CARE (Context Aggregation and REasoning), a new middleware solution, including new representation formalisms, reasoning techniques, and communication protocols specifically designed for adaptive mobile Internet services.

There are two major problems that have not yet found an appropriate solution in current middleware proposals. The first concerns the trade-off between the expressiveness of the context representation formalism and the efficiency of a complete




reasoner for that formalism. On one hand, in a mobile computing scenario it is desirable to automatically derive new context data from raw data acquired by sensors as well as from other sources, while, on the other hand, an efficient reasoner is necessary to enable the real-time provision of adaptive services, even in the presence of a high number of concurrent requests. The second problem concerns the distributed nature of context data. Different entities (user, network operator, device manufacturer, service provider, content provider, etc.) may provide partial and possibly conflicting context data. For example, the location reported both by the user's GPS module and by the network operator's infrastructure may be inconsistent. More importantly, different entities may define different policies to model the dynamics of context; conflicts among local as well as global policies must be handled by the middleware. While the trade-off between expressiveness and complexity in rule-based systems has been deeply investigated in the area of knowledge representation and automated reasoning, no straightforward application of known techniques provides a solution for the interplay of the two problems mentioned above.

While related work will be considered in detail in Section 6, we should mention that several investigations related to our policy rule formalism have been done on policy languages [1–3] and on prioritized logic programs [4,5]; preliminary efforts have been made on context data conflict resolution [6,7], and several middlewares for adaptiveness have been proposed [8–11]. However, we are not aware of any middleware with a reasoning core that can effectively handle the problems mentioned above. Our solution is based on the representation of context data in an extended CC/PP [12] formalism and on the representation of policies on context data as rules in a restricted logic programming language. The language allows rule chaining but no recursion, and it is supported by a very efficient ad hoc reasoner. While the language has similar features to the one adopted in [10], we also address the difficult issue of conflict resolution when policies are collected from multiple sources. Technically, rules are prioritized, and algorithms have been designed not only to ensure that conflicts are solved according to a given priority-based strategy, but also to detect and solve possible rule cycles at evaluation time. Formal properties including correctness and complexity of the algorithms have been proved.

Even if in principle our technical solutions may be applied to general middleware for adaptation, all the components of the CARE middleware have been designed for the mobile computing scenario, and have been tested with adaptive mobile applications. The continuously changing user context is supported in CARE by the use of logic languages for modeling the dynamics of context data, and by optimized algorithms for continuously updating the context data necessary for adaptation. From an architectural point of view, CARE has been designed to minimize computation and power consumption at the client side. In fact, the most onerous computational tasks are executed by the core modules of CARE, which are expected to be hosted by server machines in a wired infrastructure. Network consumption between the client nodes and the rest of the infrastructure is minimized by communication protocols which avoid the need to send context data together with service requests, using instead references to external repositories; the actual data are then retrieved by the server modules.

The contribution of this paper can be summarized as follows: (i) we present the architecture of an innovative middleware, specifically designed to support adaptive mobile services based on context data acquired from multiple sources; (ii) by encoding policies, explicit value assignments, and priorities into logic programs, we provide both a clear semantics for the intended model of a set of aggregated policies, and an evaluation procedure; (iii) we propose algorithms to solve the technical problem of rule cycles due to the interaction of policies specified by different entities; (iv) we validate our solution by extensive experimental evaluations.

A preliminary version of some of the technical solutions presented in this paper has been presented in [13]. With respect to that work, the original contribution of this paper consists of (i) algorithms to detect and solve possible rule cycles introduced by policies declared by different entities; (ii) a theoretical proof of the preservation of the acyclicity property of policy rules after rule transformations; (iii) more accurate performance evaluations with the ad hoc reasoner; (iv) an extensive performance evaluation of the algorithms for cycle detection and resolution. Moreover, a specific functionality of the proposed middleware (i.e., continuous monitoring) and the related technical problems have been discussed in [14]; however, the technical problems that are the main subject of this work were not addressed in that paper.

The rest of the paper is structured as follows: In Section 2 we briefly illustrate the general architecture of the CARE middleware. In Section 3 we provide a solution for context aggregation and policy conflict resolution and we show some interesting properties. Section 4 reports on experimental results. Section 5 illustrates prototype adaptive services for mobile computing that take advantage of CARE. Section 6 discusses related work, and Section 7 concludes the paper.

2. The CARE middleware

In principle, context data should include any information useful for offering a "better" response to a mobile service request; i.e., information related to the user, the device, the network infrastructure, the environment, as well as the content of the service request. However, as mentioned in the introduction, these pieces of information are owned by various entities located in different logical and physical places. In our framework we use the term profile to denote a subset of context data collected and managed by a certain entity. Hence, the "context" of a service request can be seen as an aggregated profile.

Without loss of generality, in our model we consider three main entities involved in the task of building an aggregated profile, namely: the user and her devices (called user in the rest of this paper), the network operator (called operator), and the service provider. Every entity is associated with a Profile Manager (called UPM, OPM, and SPPM, respectively) devoted to managing profile information and policies. In particular:


Fig. 1. Information flow upon user request.

• The UPM stores information related to the user and her devices. These data include, among other things, personal information, user preferences, context information, and device capabilities. The UPM also manages policies defined by the user, which can dynamically determine content and presentation parameters.

• The OPM is responsible for managing attributes describing the current network context (e.g., location, connection profile, and network status).

• Finally, the SPPM is responsible for managing service provider proprietary data, including information about users derived from previous service experiences and service provider policies.

The architecture may also be easily extended by introducing other profile managers (e.g., profile managers owning context services).

All profile manager modules also have to manage and enforce access control policies and authentication. While a detailed description of the mechanism is outside the scope of this paper, some details will be given later in this section. In Fig. 1 we illustrate the system behavior by describing the main steps involved in a service request. At first (step 1) a user issues a request to a service provider through her device and the connectivity offered by a network operator. The HTTP header of the request includes the URIs of the UPM and the OPM. Then (step 2), the service provider forwards this information to the context provider, asking for the profile information needed to perform adaptation. In step 3, the same module queries the profile managers to retrieve distributed profile data and the user's policies. Profile data are aggregated by the merge module into a single profile, which is given, together with policies, to the inference engine (IE) for policy evaluation. In step 4, the aggregated profile is returned to the service provider. Finally, profile data are used by the application logic to properly adapt the service before its provision (step 5). Our architecture can also interact with ontological reasoners [15], but this aspect will not be addressed in this paper.
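To make the request-time flow more concrete, the following Python sketch mimics steps 2–4 from the context provider's point of view. It is only an illustration under assumptions of ours: the function names, the example URIs, and the dictionary-based profile representation are hypothetical and do not correspond to the actual CARE interfaces; the merge step here simply keeps the first value seen, while the real strategies are described in Section 3.3.

# Illustrative sketch (not the CARE API) of steps 2-4: the context provider
# fetches the distributed profiles, merges them, and returns the result.
from typing import Dict, List

def fetch_profile(manager_uri: str) -> Dict[str, object]:
    # Stub: a real implementation would contact the profile manager at the given URI.
    sample = {
        "upm://alice":   {"Device": "PDA", "PreferredMedia": "Video"},
        "opm://carrier": {"Bandwidth": 96},
        "sppm://video":  {"BillingRate": "flat"},
    }
    return sample.get(manager_uri, {})

def merge_profiles(profiles: List[Dict[str, object]]) -> Dict[str, object]:
    # Simplistic merge: earlier profiles have higher priority, so the first
    # value seen for an attribute is kept (cf. Section 3.3 for the real rules).
    merged: Dict[str, object] = {}
    for profile in profiles:
        for attr, value in profile.items():
            merged.setdefault(attr, value)
    return merged

def aggregate_profile(sppm_uri: str, upm_uri: str, opm_uri: str) -> Dict[str, object]:
    # Step 3: query the profile managers, then merge; policy evaluation
    # by the inference engine (Section 3.4) would follow.
    profiles = [fetch_profile(uri) for uri in (sppm_uri, upm_uri, opm_uri)]
    return merge_profiles(profiles)

# Step 4: the aggregated profile is returned to the service provider.
print(aggregate_profile("sppm://video", "upm://alice", "opm://carrier"))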

The dynamic nature of some context data values calls for a mechanism for keeping the profile information used by the service provider up to date during a session, in order to allow the service provider to adapt the service to the new context. Our architecture adopts a trigger-based mechanism [14] for obtaining asynchronous feedback on specific events (e.g., available bandwidth dropping below a certain threshold, user location changing by more than 100 m). Fig. 2 shows an overview of the modules of CARE that are devoted to this task. The context parameters and associated threshold values that are relevant for adaptation (named monitoring specifications) are set by the service provider application logic, and communicated to the context provider. Actual triggers are generated by the context provider on the basis of context data and policies, according to optimization algorithms aimed at minimizing computation, power, and bandwidth consumption. Triggers are communicated to the proper profile managers; since part of the events monitored by triggers sent to the UPM are generated by the user device, the UPM communicates triggers to a light server module resident on the user's device. Each time a profile manager receives an update for a context data item that makes a trigger fire, it forwards the update to the context provider. Finally, the context provider re-computes the aggregated profile, and any change satisfying a monitoring specification is communicated to the application logic. More details about the mechanism for distributed context monitoring – as well as experimental results, both in terms of performance and user experience with an adaptive streaming service for mobile computing – can be found in [14].
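As an illustration of what a monitoring specification and its firing condition might look like, consider the following Python sketch. The MonitoringSpec structure and the attribute name are our own assumptions; the concrete encoding and the optimization algorithms used by CARE are described in [14].

# Hypothetical encoding of a monitoring specification and of trigger firing.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoringSpec:
    # Context parameter of interest and the condition under which a trigger fires.
    attribute: str
    fires: Callable[[object, object], bool]   # (old value, new value) -> bool

# Fire when the available bandwidth drops below 128 kbit/s.
bandwidth_spec = MonitoringSpec(
    attribute="NetSpecs/Bandwidth",
    fires=lambda old, new: new is not None and new < 128,
)

def on_update(spec: MonitoringSpec, old_value, new_value) -> None:
    # A profile manager receiving such an update would forward it to the
    # context provider, which re-computes the aggregated profile and notifies
    # the application logic when a monitoring specification is satisfied.
    if spec.fires(old_value, new_value):
        print(f"trigger fired for {spec.attribute}: {old_value} -> {new_value}")

on_update(bandwidth_spec, 256, 96)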

Privacy is recognized as a fundamental issue for the provision of context-aware services. Hence, the integration of privacy-preservation mechanisms into our middleware is of paramount importance. A detailed description of the techniques for privacy preservation we intend to adopt in the CARE middleware is out of the scope of this paper; we just mention that our solution is based on the combined use of access control policies and anonymization techniques. A preliminary presentation of our approach can be found in [16].


Fig. 2. Modules for distributed context monitoring.

3. Profile aggregation and conflict resolution

Conflicts in our framework are due to the implicit assumption that each profile attribute can have a single value (even if it could be a composite value). Section 3.1 presents the languages for representing context data and policies, and their formal semantics. Section 3.2 presents a categorization of possible conflicts, and the resolution strategies we adopt in our framework. Sections 3.3 and 3.4 explain how these strategies are implemented in the case of conflicts arising between attribute values and policies, respectively. Finally, Section 3.5 provides a complexity evaluation of the conflict resolution mechanism.

3.1. Specification of profiles and policies

In order to aggregate profile information, data retrieved from the different entities must be represented using a well-defined schema, providing a means to understand the semantics of the data. For this reason, we chose to represent profile data using the Composite Capabilities/Preference Profiles (CC/PP) structure and vocabularies [12]. In CC/PP, profiles are described using a 2-level hierarchy in which components contain one or more attribute-value pairs: components and attributes are declared in RDF Schema (RDFS) vocabularies; values can be either simple (string, integer, or rational number) or complex (set or sequence of values). In order to avoid ambiguities due to the use of different vocabularies, each attribute must be identified by means of its name, its vocabulary, its component, and its component vocabulary, according to the following form:

Vocabulary1.Component/Vocabulary2.Attribute

where: Vocabulary1 refers to the vocabulary the component belongs to; Component is the ID of the component containing the attribute; Vocabulary2 refers to the vocabulary the attribute belongs to; and Attribute is the ID of the attribute. In order to improve readability, throughout the paper the attribute syntax is simplified by omitting the vocabulary and possibly the component they belong to.
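The fully-qualified attribute form can be illustrated with a small Python helper. The parser and the example vocabulary, component, and attribute names are ours and are only meant to show how the four parts of the identifier fit together.

# Illustration of the form Vocabulary1.Component/Vocabulary2.Attribute.
from typing import NamedTuple

class QualifiedAttribute(NamedTuple):
    component_vocabulary: str
    component: str
    attribute_vocabulary: str
    attribute: str

def parse_attribute(qualified_name: str) -> QualifiedAttribute:
    # Split "Vocabulary1.Component/Vocabulary2.Attribute" into its four parts.
    component_part, attribute_part = qualified_name.split("/")
    voc1, component = component_part.split(".")
    voc2, attribute = attribute_part.split(".")
    return QualifiedAttribute(voc1, component, voc2, attribute)

# Hypothetical example in the spirit of a UAProf-like vocabulary.
print(parse_attribute("UAProf.HardwarePlatform/UAProf.ScreenSize"))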

Currently, CC/PP is mainly used for describing device capabilities and network conditions; well-known CC/PP-compliant vocabularies are UAProf [17] and its extensions. However, they do not take into account various data that are necessary to obtain a wide-ranging adaptation and personalization of services; therefore, vocabularies describing information like the user's interests, content and presentation preferences, session variables, and the user's context are also needed. We have been working in this direction mostly considering very confined domains, with the goal of experimenting with our framework on test applications. Since a detailed discussion on vocabularies and their sharing policies is out of the scope of this paper, from now on we assume there exists a sufficiently rich set of profile attributes accessible by all entities in the framework. As anticipated in the introduction, policies can be declared by both the service provider and the user. In particular, service providers can declare policies in order to dynamically personalize and adapt their services considering explicit profile data. Similarly, users can declare policies in order to dynamically change their preferences regarding content and presentation depending on some parameters. Both service provider and user policies determine new profile data by analyzing profile attribute values retrieved from the aggregated profile.

The policy language must also support the definition of a mechanism for handling conflicts that could arise when user and service provider policies determine different values for the same attribute. Our choice for a policy language has privileged low complexity, well-defined semantics, and well-known reasoning techniques. Indeed, our policies are specified as a set of first-order definite clauses [18] with negation-as-failure and no function symbols, forming a general logic program. Each


policy rule is composed of a set of conditions on profile data (interpreted as a conjunction) that determine a new value for a profile attribute when satisfied; rules are of the form:

If C1 And ... And Cn Then Ak(Vk),

where Ak is a predicate corresponding to a CC/PP attribute, Vk is either a value or a variable, and Ci is either a subgoal like Aj(Vl) or not Aj(Vl). For example, the informal user policy: "When I am in the main conference room using my palm device, any communication should occur in textual form" can be rendered by the following policy rule:

"If Location(MConfRoom) And Device(PDA) Then PreferredMedia(Text)"

The language also includes various built-in comparative predicates, i.e., <, <=, >, >=, <>, ==, with their standard semantics in the domain of reals. Due to the special purpose of our logic programs, where atoms like P(a) represent the fact of a being the value of the profile attribute P, we need to ensure that at most a single ground atom for each predicate is present in the program model. This is due to the implicit assumption that each profile attribute can have a single value (even if it could be a composite value). For this reason, we have extended the syntax of general logic programs in order to declare priorities between conflicting rules (i.e., rules having the same head predicate). In particular, in our language, rules are labeled, and expressions of the form R1 ≻ R2 state that rule R1 has higher priority than rule R2. The relation ≻ on the sets of conflicting rules is a strict partial order. Priorities are declared by the same entity that declares the policy, and if they are not given, a default ordering is used. The formal semantics of a set of policy rules and priorities is given by the unique model of the logic program in which it can be encoded. In order to guarantee the uniqueness of the program model, we ensure that the logic program corresponding to the policy rules defined by each entity is stratified [19]. To this aim, we perform a simple test at the insertion of each new policy rule, discarding those rules generating a cycle. Note that this condition does not prevent rule chaining. In order to facilitate the exchange of policy rules between the components of the architecture, policies are wrapped in RuleML [20], adopting the XML Schema defined for Datalog [21] with negation. Since that Schema does not allow the definition of priorities, we use the order of appearance of rules as an encoding of priorities. Web-based interfaces are currently used by service providers and users to insert and modify their own policies. User policies may also be taken or adapted from a library of predefined policy rules, as well as partially learned from user behavior.
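The insertion-time test can be realized as an acyclicity check on the predicate dependency graph of the entity's ruleset, since acyclicity of that graph is sufficient for stratification. The following Python sketch is one possible realization under assumptions of ours; the rule and graph representations are hypothetical, not the data structures used by CARE.

# Sketch of the insertion-time test: a new rule is accepted only if the
# predicate dependency graph of the entity's ruleset stays acyclic.
from typing import Dict, List, Set, Tuple

Rule = Tuple[str, List[str]]   # (head predicate, body predicates)

def has_cycle(edges: Dict[str, Set[str]]) -> bool:
    # Depth-first search with a recursion stack to detect a directed cycle.
    visited, on_stack = set(), set()
    def dfs(node: str) -> bool:
        visited.add(node)
        on_stack.add(node)
        for succ in edges.get(node, ()):
            if succ in on_stack or (succ not in visited and dfs(succ)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in list(edges) if n not in visited)

def accept_rule(ruleset: List[Rule], new_rule: Rule) -> bool:
    # Accept the new rule only if the predicate dependency graph stays acyclic.
    candidate = ruleset + [new_rule]
    edges: Dict[str, Set[str]] = {}
    for head, body in candidate:
        for pred in body:
            # Edge from a body predicate to the head it contributes to derive.
            edges.setdefault(pred, set()).add(head)
    return not has_cycle(edges)

rules: List[Rule] = [("PreferredMedia", ["Location", "Device"])]
# Adding Device <- PreferredMedia would create a cycle, so the rule is discarded.
print(accept_rule(rules, ("Device", ["PreferredMedia"])))   # False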

3.2. Conflicts and resolution strategies

We distinguish two types of conflicts: (a) conflicts due to policies and/or explicit attribute values given by the same entity, and (b) conflicts due to policies and/or explicit attribute values given by different entities. A simple example of a conflict of type (a) is the use of policies to override default attribute values when specific events occur and/or specific conditions are verified. In this case, a policy given by an entity, deriving a value for an attribute, intuitively has priority over an explicit value for that attribute given by the same entity.

Conflicts of type (a) also include the case of two policies given by the same entity that specify different rules to derive the value for the same attribute. In this case, if the conditions of the two rules are not mutually exclusive, we may derive two different values. There is no intuitive way to solve such a conflict, and it is not reasonable to simply disallow each entity from specifying more than one policy to derive the value of an attribute. Hence, we assume that the specification of these possibly conflicting policies includes an indication of priority; if this is not given, a default ordering will be used. An example of a conflict of type (a) is given below:

Example 1. Consider the case of the provider of an instant messaging service declaring its policies regarding the notification of incoming messages to the users of the system. The service provider could declare two policies regarding the Notification attribute, specifying to send audio notifications when the user is using a PDA, and text notifications when the user is involved in an important meeting. If the user is both using her PDA and involved in an important meeting, both rules would fire, providing conflicting values. In this case, the service provider is forced to specify which rule has the highest priority. In this example, it is reasonable for the service provider to give higher priority to the second rule. In this way, when the user is involved in an important meeting the first rule cannot fire, because the second rule has higher priority.

Conflicts of type (b) can occur, for instance, when a provider is not able or does not want to agree with a user policy or explicit preference, and sets up a policy rule to override the values explicitly given or derived by the user. Conflicts due to rules and explicit values given by different entities can be taken care of by considering a priority rule between entities for that particular attribute: if an entity has priority over other ones for the value of a specific attribute, policies given by that entity to derive values for that attribute have priority over policies given by other entities.

Example 2. Consider the user of a video streaming service. The user could declare a policy requiring high-resolution media as a default when using her PDA; the service provider may want to supply low-resolution media when the available bandwidth drops below a certain threshold. If both conditions hold, the evaluation of policies would generate a conflict. If the service provider has the highest priority for the attribute, the rule of the service provider prevails over the user's one.


A categorization of possible conflicts is useful for determining the system behavior. We summarize the desired behavior of the system, in the presence of possible conflicts, considering each case as follows:

(1) Conflict between explicit values provided by two different entities when no policy is given for the same attribute. In this case, the priority over entities for that attribute determines which value prevails. This kind of conflict is totally handled by the merge submodule of the Inference Engine.

(2) Conflict between an explicit attribute value and a policy given by the same entity that could derive a different value. A simple example of a conflict of this type is the use of policies to override default attribute values when specific events occur and/or specific conditions are verified. In this case, a policy given by an entity, deriving a value for an attribute, intuitively has priority over an explicit value for that attribute given by the same entity. Thus, the value derived from the policy must prevail.

(3) Conflict between an explicit attribute value and a policy given by a different entity that could derive a different value. Conflicts of this type can occur, for instance, when a provider is not able or does not want to agree with a user explicit preference, and sets up a policy rule to override the values explicitly given by the user. This kind of conflict can be taken care of by considering priority rules between entities. Considering the priority over entities for that attribute, if the entity giving the explicit value has priority over the other, then the policy can be ignored; otherwise the policy should be evaluated and, if a value is derived, it prevails over the explicit one.

(4) Conflict between two policies given by two different entities on a specific attribute value. Similarly to conflict (3), the priority over entities for that attribute states the priority in firing the corresponding rule. If a rule fires, no other conflicting rule from different entities should fire.

(5) Conflict between two policies given by the same entity on a specific attribute value. There is no intuitive way to solve such a conflict. Hence, we assume that the entity gives a priority over these rules, using the syntax provided by the policy language; if this is not given, a default ordering will be used. The priority over rules for that attribute is used to decide which one to evaluate first. If a rule fires, no other conflicting rule from the same entity should fire.

Our conflict resolution strategies are meant to allow the service provider to reconcile the preferences of the user regarding the service parameters with its own policies (e.g., quality of service with respect to the user's billing rate) and environmental conditions (e.g., availability of network, power, and computational resources). The definition of strategies for determining the best trade-off between user satisfaction, service provider policies, and environmental conditions (and consequently for assigning priorities to rules on the basis of context) is a particularly challenging issue, which is out of the scope of this paper. However, we point out that any such strategy can be easily applied on top of our conflict resolution techniques.

3.3. Merging distributed profiles

Even if no policies are given, conflicts can arise when different values are given by different profile managers for the same attribute. For example, the UPM could assign to the Coordinates attribute a certain value x (obtained through the user's device GPS), while the OPM could provide for the same attribute the value y, obtained through triangulation. This kind of conflict resolution is performed in our architecture by the merge submodule of the IE. We have defined a language for allowing service providers to specify resolution rules at the attribute level. This means that, for instance, a service provider willing to obtain the most accurate value for a user's location can give preference to the value supplied by the UPM while keeping the value provided by the OPM just in case the value from the UPM is missing. Priorities are defined by profile resolution directives which associate to every attribute an ordered list of profile managers.

Example 3. Consider the following profile resolution directives:

(1) setPriority */* = (SPPM, UPM, OPM)
(2) setPriority NetSpecs/* = (OPM, UPM, SPPM)
(3) setPriority UserLocation/Coordinates = (UPM, OPM)

In (1), a service provider gives highest priority to its own profile data, and lower priority to data given by the other entities. Clearly, if no value is present in the service provider profile, the value is taken from other profiles following the priority directive. Directives (2) and (3) give the highest priority to the operator for network-related data and to the user for the single Coordinates attribute, respectively. The absence of SPPM in directive (3) states that values for that attribute provided by the SPPM should never be used.

The semantics of priorities actually depends on the type of the attribute. When the attribute is simple, the value to be assigned to the attribute is the one retrieved from the first entity in the list that supplies it. When the attribute is of type rdf:Bag, the values to be assigned are the ones retrieved from all entities present in the list. If some duplication occurs, only the first occurrence of the value is taken into account (i.e., we apply union). Finally, if the type of the attribute is rdf:Seq, the values assigned to the attribute are the ones provided by the entities present in the list, ordered according to the occurrence of the entity in the list. All duplicates are removed, keeping only the first occurrence.
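The following Python sketch illustrates this merge semantics for the three attribute types. The directive and profile representations (plain dictionaries, the attr_type string) are our own simplifications, not the internal structures of the merge submodule.

# Simplified illustration of the merge semantics driven by a profile
# resolution directive; data structures are ours, not the CARE ones.
from typing import Dict, List

def merge_attribute(attr: str, attr_type: str,
                    directive: List[str],
                    profiles: Dict[str, Dict[str, object]]):
    # Values supplied for attr, in the order of the profile managers listed by
    # the directive; managers that do not supply the attribute are skipped.
    supplied = [profiles[pm][attr] for pm in directive
                if pm in profiles and attr in profiles[pm]]
    if not supplied:
        return None
    if attr_type == "simple":
        return supplied[0]                     # first entity in the list wins
    # rdf:Bag and rdf:Seq: values from all listed entities, duplicates removed
    # keeping the first occurrence; for rdf:Seq the directive also fixes the order.
    merged, seen = [], set()
    for values in supplied:
        for v in (values if isinstance(values, list) else [values]):
            if v not in seen:
                seen.add(v)
                merged.append(v)
    return merged

profiles = {
    "UPM": {"Coordinates": "45.47,9.23", "Interests": ["music", "sport"]},
    "OPM": {"Coordinates": "45.48,9.22"},
    "SPPM": {"Interests": ["sport", "news"]},
}
print(merge_attribute("Coordinates", "simple", ["UPM", "OPM"], profiles))
print(merge_attribute("Interests", "rdf:Bag", ["SPPM", "UPM", "OPM"], profiles))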


3.4. Policy formal semantics and conflict resolution

Even if acyclicity is guaranteed in each entity ruleset, cycles can be generated when policies of different entities are joined. In order to preserve the stratification of the joined logic program, we adopt a mechanism of cycle detection and resolution, which is presented in Section 3.4.1. Section 3.4.2 presents the conflict resolution transformations that are applied to the logic program obtained after cycle resolution. Even if stratification cannot be preserved by applying those transformations, we prove that the resulting logic program maintains a weaker form of stratification that is sufficient to guarantee the uniqueness of the logic program model.

3.4.1. Cycle detection and resolution

Even though we guarantee that the logic program corresponding to the set of policy rules defined by each entity is stratified (see Section 3.1), it is still possible that this property is lost when policies defined by different entities are joined. In order to preserve stratification (and consequently the uniqueness of the logic program model), we detect and resolve cycles possibly occurring in the joined logic program.

3.4.1.1. Motivation. Cycle handling is a well-known issue in logic programming. In general, the presence of cycles in a logic program may lead to non-termination, and to the derivation of multiple models. The motivation for avoiding multiple models in our framework has already been explained in Section 3.1.

Our choice is to guarantee the uniqueness of the logic program model by preserving stratification. As proved in [22], in order to ensure that a logic program is stratified, it is sufficient to show that its rule dependency graph (RDG) is acyclic. We recall that, given a logic program P, RDG(P) is a directed graph whose nodes are the rules forming P. The graph contains an edge from R to R′ if and only if the head predicate of rule R′ belongs to the set of body predicates of R.

Example 4. Consider the following joined ruleset P:

u.r1 (user): A ← B.
sp.r2 (service provider): B ← C, E.
sp.r3 (service provider): D ← A, B, G.
u.r4 (user): E ← F.
u.r5 (user): C ← D.
u.r6 (user): G ← B.
sp.r7 (service provider): F ← E.

Its rule dependency graph is shown in Fig. 3(a). RDG(P) contains four cycles, corresponding to paths C1 = (u.r1, sp.r2, u.r5, sp.r3, u.r1), C2 = (sp.r2, u.r5, sp.r3, sp.r2), C3 = (sp.r3, u.r6, sp.r2, u.r5, sp.r3), and C4 = (u.r4, sp.r7, u.r4).

3.4.1.2. Our resolution strategy. We check for the presence of cycles as a preprocessing step before rule evaluation. Since cycle detection is performed at the time of the service request, an automatic mechanism for resolving cycles is needed. A reasonable strategy to resolve cycles would be to remove from the set of rules involved in the cycle the node corresponding to the policy rule with lowest priority. Unfortunately, it is easily seen that each cycle is composed of rules having different head predicates. Thus, since the scope of priorities in our language is limited to each set of rules having the same head predicate, an ordering of priorities relative to rules having different head predicates cannot be applied.

However, nodes in a cycle can be categorized on the basis of the relative priority of the entity that declared their corresponding rule. As an example, consider the cycle between rules u.r4 and sp.r7 in Example 4. These rules were declared by the user and by the service provider, respectively. Suppose that the user has the highest priority – according to profile resolution directives – for the head predicates of both rules. Our resolution strategy consists in discarding one of the rules declared by an entity that does not have the highest priority for the corresponding head predicate (rule sp.r7 in this example). The rationale of this choice is that the evaluation of these rules is never guaranteed, since they can be overwritten by rules declared by higher-priority entities.

It is still possible that this strategy is not applicable, because every rule composing a cycle was declared by an entity with the highest relative priority. In this case, we exploit an interesting property of cycles in our framework: since each single entity ruleset is acyclic, every cycle occurring in the joined ruleset will contain at least one node corresponding to a service provider rule and at least one node corresponding to a user rule. Thus, cycles for which the above-mentioned strategy cannot be applied are resolved by deleting one of the user rules involved in the cycle. This choice is motivated by the fact that users have no real control over the actual evaluation of their policy rules. As a matter of fact, user policies can be overwritten by policies or values given by the service provider, depending on the profile resolution directives it applies. On the contrary, service providers must be guaranteed that their policies regarding a given service are thoroughly applied, provided that they kept the highest priority for the corresponding attributes.

When a user rule is deleted, the application logic can choose the most appropriate action to perform, depending on the type of service. We believe that, for the majority of Internet services, the requested service can be provided without informing the user. In the case of particular applications (e.g., services involving privacy issues) the user may be informed about the incompatibility of her preference with the general policies of the provider, and invited to define a new strategy consistent with the service provider rules.


Fig. 3. Cycle detection and resolution. (a) Ruleset dependency graph; bold nodes: rules candidate for deletion. (b) Cycle graph. (c) Application of the heuristic; hatched nodes: rules to be pruned. (d) The acyclic RDG.

3.4.1.3. The CDR algorithm for cycle detection and resolution. When multiple cycles are detected in the joined ruleset, it is possible that a single rule may be involved in multiple cycles. In this case, an obvious optimization consists in pruning a minimum cardinality set of rules that resolves all cycles – provided that the deletion of rules in this set is consistent with our overall strategy.

The problem of finding a minimum cardinality set of nodes whose deletion cuts every cycle in general graphs is called the feedback vertex set (FVS) problem (see [23] for a survey). The FVS problem is known to be NP-complete [24], even if an exact solution of the problem is achievable in polynomial time for particular classes of graphs (e.g., reducible flow graphs [25]). Unfortunately, the rule dependency graph of joined rulesets in our framework does not fall into any of these classes. Hence, since cycle resolution must be performed at the time of the user request, we chose to adopt a low-complexity heuristic algorithm.

The Cycle Detection and Resolution (CDR) algorithm is shown in Algorithm 1, and consists of the following steps:

• Step 1: Given the logic program P obtained by joining the user and service provider policies, we construct its rule dependency graph RDG(P) (see Fig. 3(a)). In order to detect every cycle Ci occurring in RDG(P) we apply the well-known depth-first search (DFS) algorithm [26] for directed graphs.

• Step 2: Every cycle Ci is transformed into C′i by discarding those rules in the cycle that are not candidates for deletion on the basis of our cycle resolution strategy (see Fig. 3(b)). We recall that our resolution strategy ensures that each cycle contains at least one rule candidate for deletion. The cycles C′i are used to build the cycle graph G, which is composed of the cycles in RDG(P) contracted by removing those nodes that correspond to rules that must be preserved according to the resolution strategy. More formally, graph G is obtained by applying the following transformation to RDG(P).

Transformation 1. At first, RDG(P) is transformed by removing every node r that does not appear in any cycle, together with its incoming and outgoing edges. Then, for each node r′ ∈ {Ci} − {C′i}, if a pair of edges (v → r′, r′ → v′) appears in the resulting graph, then a new edge v → v′ is added to RDG(P). Finally, the node r′ and its incoming and outgoing edges are removed.

Note that v and v′ may coincide. In this case the transformation determines the addition of a self-loop.

• Step 3: We apply to the resulting graph the heuristic algorithm for the unweighted FVS problem proposed by Levy and Low in [27]. This algorithm has time complexity O(|E| log |V|), and is the basis of many other heuristic algorithms for the FVS problem. Fig. 3(c) shows the candidate rules to be pruned in order to obtain an acyclic RDG (see Fig. 3(d)).



Algorithm 1 Cycle Detection and Resolution.

(i) Rsp, Ru, JR are, respectively, the service provider ruleset, the user ruleset, and the joined ruleset;
(ii) G0 = 〈V0, E0〉 is the RDG of the joined ruleset;
(iii) (EP,1, EP,2, EP,3) is the priority over entities for the attribute P;
(iv) Cycles[] is the array storing all cycles detected in the RDG;
(v) Ci = {Ei.r1, .., Ej.rj, .., Ei.r1} is a cycle path; Ci ∈ Cycles[G];
(vi) CandidateRi is the list of rules in the path Ci that are candidates for exclusion;
(vii) FeedSet is the feedback vertex set; it is empty at time 0.

Procedure Main()
  /* Step 1: RDG construction and cycle detection */
  JR := Rsp ∪ Ru;
  FeedSet := ∅;
  G0 := RuleDependencyGraph(JR);
  Cycles[G0] := CyclesDetection(Cycles[], G0);

  /* Step 2: candidate rules detection */
  for k = 1..|Cycles[G0]| do
    CandidateRk := ∅;
    for all Ei.rj ∈ Cycles[k] do
      H := head(Ei.rj);
      if Ei ≠ EH,1 then
        CandidateRk := CandidateRk ∪ {Ei.rj}
      end if
    end for
    if CandidateRk = ∅ then
      CandidateRk := {e.r | e.r ∈ Cycles[k] ∧ e = user}
    end if
  end for

  /* Step 3: cycle resolution; G = 〈V, E〉 is the cycle graph */
  G := CycleGraph(G0);
  FeedSet := apply the Levy & Low algorithm to G;
  JR′ := JR \ FeedSet;
  return JR′
end procedure

Function CyclesDetection(Cycle[], graph G)
  Cycle[] := DepthFirstSearch(G);
  return Cycle[]
end function
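For illustration, the following Python sketch runs a compact version of the CDR flow on the ruleset of Example 4. It follows the three steps above, but replaces the Levy & Low heuristic with a simple greedy choice and assumes an arbitrary entity-priority assignment (the user has top priority on E and F, the service provider elsewhere); all data structures and helper names are ours, not those of the CARE implementation.

from typing import Dict, List, Set

# Joined ruleset of Example 4: label -> (declaring entity, head, body predicates).
rules = {
    "u.r1": ("user", "A", ["B"]),  "sp.r2": ("sp", "B", ["C", "E"]),
    "sp.r3": ("sp", "D", ["A", "B", "G"]), "u.r4": ("user", "E", ["F"]),
    "u.r5": ("user", "C", ["D"]),  "u.r6": ("user", "G", ["B"]),
    "sp.r7": ("sp", "F", ["E"]),
}
# Assumed entity priorities per head attribute (user on E and F, provider elsewhere).
top_priority = {"E": "user", "F": "user"}

def rdg(rs) -> Dict[str, List[str]]:
    # Edge r -> r' iff the head predicate of r' occurs in the body of r.
    return {r: sorted(r2 for r2, (_, h2, _) in rs.items() if h2 in body)
            for r, (_, _, body) in rs.items()}

def cycles(graph) -> List[List[str]]:
    # Naive enumeration of cycles by exhaustive DFS; adequate for small rulesets.
    found: List[List[str]] = []
    def dfs(node, path):
        if node in path:
            found.append(path[path.index(node):])
            return
        for succ in graph[node]:
            dfs(succ, path + [node])
    for start in graph:
        dfs(start, [])
    return found

def cdr(rs) -> Set[str]:
    feed_set: Set[str] = set()
    for cyc in cycles(rdg(rs)):
        if feed_set & set(cyc):
            continue                    # this cycle is already cut
        # Candidates: rules whose entity lacks the top priority on their head;
        # if there are none, fall back to the user rules in the cycle.
        cand = [r for r in cyc if rs[r][0] != top_priority.get(rs[r][1], "sp")]
        cand = cand or [r for r in cyc if rs[r][0] == "user"]
        feed_set.add(cand[0])           # greedy pick (Levy & Low in the real CDR)
    return feed_set

print(sorted(cdr(rules)))   # rules pruned so that the joined ruleset becomes acyclic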

3.4.2. Policy conflict resolution

Since policies can dynamically change the value of an attribute that may have an explicit value in a profile, or that may be changed by some other policies, they introduce nontrivial conflicts. These conflicts can be determined by policies and/or by explicit attribute values given by the same entity or by different entities.

We now show how the conflict resolution strategies we have devised can be implemented. In the simple case when no policies are given for a certain attribute, conflicts are easily solved by the merge submodule as explained above, and the resulting attribute value is directly passed to the service provider application logic. However, when policies are present, the resolution strategies must be integrated in the evaluation of logical rules.

A set of policy rules can be encoded in a logic program where each rule has the following form:

A(X) ← A1(X1), . . . , Ak(Xk), not Ak+1(Xk+1), . . . , not An(Xn)    (1)

where A, A1, . . . , An are predicate symbols corresponding to profile attribute names, and X, X1, . . . , Xn are either variable or constant symbols and denote attribute values. Note that our language allows positive and negative premises, with negative ones denoting the absence of a value for a specific attribute, but constrains the head of a rule to be positive. Moreover, safety imposes that if X is a variable appearing in the rule head, the same variable must appear in the rule body.

In addition to logic rules that encode policy rules, we have to encode in the logic program the implicit and explicit priorities that will be necessary to solve conflicts. For this purpose, a second argument, which we call weight, is added to each predicate, and the logic program encoding the policy rules is transformed into a program P by modifying each rule of the form (1) into:

A(X, w) ← A1(X1, W1), . . . , Ak(Xk, Wk), not Ak+1(Xk+1, Wk+1), . . . , not An(Xn, Wn)    (2)

where W1, . . . , Wn are variables with values in the non-negative integers, and w is a non-negative integer determined by Algorithm 2. Note that rule labels as well as priorities over rules are used only in this pre-processing phase, and therefore are removed from the logic program.

The weight of a rule is defined as the weight assigned to the predicate in its head. Intuitively, rules on attributes for which a prevailing fact exists (see point 3 in Section 3.2) are not assigned any weight and are discarded, all facts are given weight 0, and the other rules are assigned increasing weights according to priorities over entities and priorities specified by each entity.


Algorithm 2 Setting the Weight parameter.

Let (E3, E2, E1) be the priority over entities for the attribute A (E3 having the highest priority); Er be the entity among E1, E2, E3 providing the value obtained by the merge module for A; REj,A be the set of rules declared by Ej for A; and REj,A,k be the kth rule ∈ REj,A in increasing order of priority, according to Ej.

/* Facts always have weight 0 */
Weight(FactA) := 0
w := 0
/* Repeat for every Ej, r ≤ j ≤ 3 */
for j = r to 3 do
  Kj := |REj,A|
  /* Repeat for each rule declared by Ej on A */
  for k = 1 to Kj do
    w := w + 1
    Weight(REj,A,k) := w
  end for
end for
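Under the reading of Algorithm 2 given above, the weight assignment can be sketched in a few lines of Python; the rule and priority representations are ours, not the preprocessor's internal ones, and the example reproduces the weights of Example 5 below.

# Sketch of the weight assignment of Algorithm 2 for one attribute A.
from typing import Dict, List, Optional

def assign_weights(entity_priority: List[str],
                   rules_by_entity: Dict[str, List[str]],
                   fact_entity: Optional[str]) -> Dict[str, int]:
    # entity_priority is listed from highest to lowest priority; fact_entity is
    # the entity whose explicit value prevailed in the merge step (None if absent).
    weights: Dict[str, int] = {}                # rule label -> weight; facts get weight 0
    ordered = list(reversed(entity_priority))   # lowest priority first
    # Rules of entities with lower priority than the prevailing fact are discarded.
    start = ordered.index(fact_entity) if fact_entity in ordered else 0
    w = 0
    for entity in ordered[start:]:
        # Within one entity, rules are taken in increasing order of declared priority.
        for rule_label in rules_by_entity.get(entity, []):
            w += 1
            weights[rule_label] = w
    return weights

# Example 5: priority (SPPM, UPM, OPM), the OPM provides the explicit value for A1,
# the user declares p1-user and p2-user (p2-user preferred), the provider declares p1-sp.
print(assign_weights(["SPPM", "UPM", "OPM"],
                     {"UPM": ["p1-user", "p2-user"], "SPPM": ["p1-sp"]},
                     "OPM"))
# -> {'p1-user': 1, 'p2-user': 2, 'p1-sp': 3}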

Algorithm 2 ensures that (i) no pair of rules exists having the same head predicate symbol and the same weight; (ii) rules having the same head predicate but higher weight have higher priority, according to our conflict resolution strategy, over those with lower weights. From a logic programming point of view we can also observe that if the predicate dependency graph of the starting set of rules is acyclic, then the above transformation preserves acyclicity. Indeed, it is easily seen that the addition of a second argument to each head predicate does not determine the introduction of new edges in the predicate dependency graph of the logic program.

In order to give standard formal semantics to our policies and to enforce the above evaluation strategy, we still need to encode in the logic program the fact that we do not allow two different values for the same attribute in the output. This means that the logic program should have a unique model, and this model should contain at most one single atom for each predicate. For this purpose, program P is once more modified as follows.

Transformation 2. Each rule (2) is modified by adding the subgoals:

not A(Y, Z), Z > w,    (3)

where Y is a variable with the same domain as X, X1, . . . , Xn, leading to:

A(X, w) ← A1(X1, W1), . . . , Ak(Xk, Wk), not Ak+1(Xk+1, Wk+1), . . . , not An(Xn, Wn), not A(Y, Z), Z > w.    (4)

We call P′ the resulting program.

Example 5. Consider conflicts between an explicit attribute value provided by the operator, two policies given by the same entity (e.g., the user), and a policy given by the service provider, possibly deriving different values for the same attribute A1; in this example, the user declared two policies over the same attribute, and she gave highest priority to the policy p2-user. Suppose that the priority over entities for the A1 attribute is (SPPM, UPM, OPM). The inference engine preprocessor receives in input from the SPPM the following logic program:

(op) A1(a) ←
(user) A3(b) ←
(p1-user) A1(X) ← A2(X)
(p2-user) A1(X) ← A3(X)
(p1-sp) A1(X) ← A4(X)

p2-user ≻ p1-user

The fact (op) represents the value provided by the OPM for A1, the fact (user) represents the value provided by the UPM for A3, the first policy (p1-user) and the second policy (p2-user) are declared by the user, and the last policy (p1-sp) is declared by the service provider. Applying Algorithm 2, the lowest weight (0) is assigned to the facts. The UPM has higher priority than the OPM, and so the preprocessor, following the priorities defined by the user over her rules, gives weight 1 to the head of the user policy with lowest priority (p1-user) and weight 2 to the head of the policy (p2-user). Finally, the highest weight (3) is assigned to the head of the policy (p1-sp), as it was declared by the entity with highest priority (the service provider in this case). Note that, if the OPM had higher priority than the UPM and SPPM, no rule would have been assigned any weight, and hence all rules would have been discarded.


Hence, the above logic program is modified as follows:

(op) A1(a, 0) ← not A1(Y, Z), Z > 0.
(user) A3(b, 0) ← not A3(Y, Z), Z > 0.
(p1-user) A1(X, 1) ← A2(X, W), not A1(Y, Z), Z > 1.
(p2-user) A1(X, 2) ← A3(X, W), not A1(Y, Z), Z > 2.
(p1-sp) A1(X, 3) ← A4(X, W), not A1(Y, Z), Z > 3.

In this case, the value of A1 is determined as b by the firing of rule (p2-user).

The addition of the subgoal (3) determines the introduction of negative loops (i.e., negative edges connecting one node to itself) in the predicate dependency graph of P′. Hence, since a logic program is stratified iff in its predicate dependency graph there are no cycles containing negative edges [19], we can conclude that Transformation 2 does not preserve stratification. However, as proved by Theorem 1 and Corollary 1, the transformed programs maintain a weaker form of stratification (known as local stratification [28]) that guarantees the uniqueness of the program model.

Theorem 1. Given a stratified program P with weights assigned by Algorithm 2, the logic program P′ obtained by Transformation 2 is acyclic [29].

Proof. We demonstrate that the atom dependency graph (ADG) of the ground program G(P′) can be obtained from the atom dependency graph of G(P) by inserting arcs that do not introduce cycles. Since the program P is stratified, its predicate dependency graph is acyclic. For this reason, the atom dependency graph D(G(P)) is acyclic too. By Transformation 2, D(G(P′)) can be obtained by adding to D(G(P)) the arcs from the ground instances of A(Y, w) to the ground instances of A(X, v), v > w, for each rule in P′. Suppose A(y, w) and A(x, v) are ground instances of A(Y, w) and A(X, v), respectively. Then, an arc from A(y, w) to A(x, v) is added to D(G(P)). This arc could introduce a cycle in the graph only if there is a path from A(x, v) back to A(y, w). By construction of P and P′, any arc starting from A(x, v) either goes to a node with a different predicate or to the node A(x1, v1) with v1 > v. The second case is actually similar to the first one, since any arc starting from A(x1, v1) either goes to a node with a different predicate or to the node A(x2, v2) with v2 > v1, and analogously we can repeat this consideration until the node A() with the highest weight, which by construction can only have an outgoing arc towards a node with a different predicate. Hence, in order to show that no cycle is created, it is sufficient to show that each path starting from A(x, v) with an arc towards a different predicate Aj does not lead to A(t, u), u ≤ v. Considering an arbitrary atom Aj(z, w′) such that a path to A(t, u) exists in D(G(P)), there can be no path from A(x, v) to Aj(z, w′). Indeed, if this path existed, the predicate dependency graph of P would contain a cycle, contradicting our hypothesis. Since the above holds for all the transformed rules, cycles cannot arise in D(G(P′)) if there are none in D(G(P)), and therefore ADG(G(P′)) is acyclic.

The acyclicity property ensures that a topological order ord can be applied to ADG(G(P′)) [26]. In this case, ord constitutes a level mapping from the elements of the Herbrand base of G(P′) to the natural numbers. Since we allow only constant values and bound variables to appear in rule atoms, ADG(G(P′)) is finite. Therefore, N = Max{ord(ADG(G(P′)))} exists, and we can define the function ord′(a) = N − ord(a), which is also a level mapping. By graph construction, for each ground clause

Ai(xj, wk) ← [not] Ai1(xj1, wk1), . . . , [not] Ain(xjn, wkn),

ord(Ai(xj, wk)) < ord([not] Aim(xjm, wkm)) holds. As a consequence,

ord′(Ai(xj, wk)) > ord′([not] Aim(xjm, wkm)).

This demonstrates that P′ is acyclic with respect to the level mapping ord′. □

Corollary 1. Since acyclic logic programs are a subclass of locally stratified programs [29], program P′ is locally stratified. Hence, it has a unique model [28].

3.5. Evaluation algorithm and complexity analysis

Although these formal properties guarantee the uniqueness of the intended model, and hence provide a clear semantics to our prioritized rulesets, they do not guarantee in general an efficient evaluation procedure. However, in our case a direct evaluation algorithm can be devised that is linear in the number of rules, since each rule has to be evaluated only once. The intuitive evaluation strategy is to proceed, for each attribute A, starting from the rule having A() in its head with the highest weight, and continuing considering rules on A() with decreasing weights until one of them fires. If none of them fires, the value of A is the one specified by the fact on A, or none if such a fact does not exist. The algorithm is shown in Algorithm 3.

Theorem 2. The complexity of Algorithm 3 is linear in the number of rules.

Proof. Function Evaluate(r) is called only if r belongs to New (see lines 6, 13, and 18). When executed, function Evaluate(r) removes rule r from the set New. Thus, the cardinality of New decreases each time Evaluate is called. Since at initialization time the cardinality of New is equal to the number of rules, Evaluate(r) is executed at most once for every r ∈ P. □


Algorithm 3 Logic program evaluation.
(i) P is the initial set of rules in the logic program.
(ii) New = {r ∈ P | r has never been in ES}.
(iii) M = {derived atoms}.
(iv) body(r) = {literals in the body of r}, r ∈ P.
(v) head(r) = head of r, r ∈ P.
(vi) wAi := Max{w | Ai(Xj, w) ∈ head(rk), rk ∈ P}.
(vii) rAi,w := rk such that head(rk) = Ai(Xj, w).
(viii) Procedure RuleEval(r) returns the rule head atom if all literals in the body evaluate to True; it returns NULL otherwise.

1:  Procedure Main()
2:    New := P
3:    M := ∅
4:    for all rAi,wAi ∈ P do
5:      if rAi,wAi ∈ New then
6:        Evaluate(rAi,wAi)
7:      end if
8:    end for
9:    return M
10: Procedure Evaluate(rAi,w)
11:   New := New \ {rAi,w}
12:   for all a = [not]Aj(X, W) such that a ∈ body(rAi,w) do
13:     if rAj,wAj ∈ New then Evaluate(rAj,wAj)
14:   end for
15:   atom := RuleEval(rAi,w)
16:   if atom ≠ NULL then
17:     M := M ∪ {atom}
18:   else if rAi,w−1 ∈ New then Evaluate(rAi,w−1)
19:   end if
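For concreteness, the following Python sketch mirrors the structure of Algorithm 3 under simplifying assumptions (ground rules only, acyclic joined ruleset, consecutive weights per attribute as produced by the rule transformation); the data structures and names are ours and do not reflect the actual CARE implementation.

def evaluate_program(rules, facts):
    """Sketch of Algorithm 3 for ground, acyclic prioritized rulesets.

    `rules` maps (attribute, weight) -> (head_atom, body), where head_atom is
    (attribute, value) and body is a list of (negated, attribute, value)
    literals; `facts` maps attribute -> value for the plain facts.
    Weights on one attribute are assumed to be consecutive (1..w_A).
    Returns the dictionary of derived atoms (the set M)."""
    new = set(rules)                      # rules never evaluated so far
    derived = {}                          # attribute -> derived value (M)
    top_weight = {}                       # highest weight per attribute
    for (attr, w) in rules:
        top_weight[attr] = max(w, top_weight.get(attr, w))

    def value_of(attr):
        return derived.get(attr, facts.get(attr))

    def holds(literal):
        negated, attr, value = literal
        key = (attr, top_weight.get(attr, 0))
        if key in new:                    # evaluate subgoal rules on demand
            evaluate(key)
        return (value_of(attr) == value) != negated

    def evaluate(key):
        new.discard(key)
        head_atom, body = rules[key]
        if all(holds(lit) for lit in body):       # RuleEval succeeded
            attr, value = head_atom
            derived[attr] = value
        else:
            attr, w = key
            lower = (attr, w - 1)
            if lower in new:                       # try the next priority level
                evaluate(lower)

    for attr, w in list(top_weight.items()):
        key = (attr, w)
        if key in new:
            evaluate(key)
    return derived

# Tiny usage example (hypothetical encoding of two chained rules):
rules = {
    ("NetSpeed", 1): (("NetSpeed", "low"), [(False, "Bearer", "gsm")]),
    ("MediaQuality", 1): (("MediaQuality", "low"), [(False, "NetSpeed", "low")]),
}
print(evaluate_program(rules, {"Bearer": "gsm"}))
# prints {'NetSpeed': 'low', 'MediaQuality': 'low'}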

Since, for most Internet services, adaptation will probably be performed considering a small subset of CC/PP attributes, we chose to adopt a form of goal-driven reasoning, implementing our Inference Engine using a backward-chaining approach. Hence, the cost of the evaluation will generally be less than linear in the size of policies, since a number of irrelevant rules will be ignored.

4. Experimental evaluation

Since efficiency is a fundamental requirement for the CARE middleware, we performed extensive experiments with the CDR algorithm for cycle detection and resolution (results are reported in Section 4.1), and with the ad-hoc inference engine we developed for evaluating policies expressed in the logic programming language we defined (results are reported in Section 4.2). Even if we expect that, for most Internet services, adaptation will be performed on the basis of a relatively small number of rules, in these experiments we used a greater number of rules in order to study the asymptotic behavior of our algorithms. Moreover, a different set of experiments (reported in Section 4.3) was performed in order to evaluate the efficiency of our solution with a prototype service for pervasive and mobile computing.

4.1. Experiments with the CDR algorithm

The first set of experiments was aimed at evaluating the execution time of the CDR algorithm for cycle detection and resolution. Experiments were performed on a two-processor Xeon 2.4 GHz workstation running a Linux operating system. Evaluation times are averages of ten runs, each using a different random ruleset. Experiments were executed on artificial rulesets of various cardinalities (from 200 to 1000 rules), containing a significant number of cycles (from around 80 to around 350). Rules had a 10% probability of occurring in a cycle. Rulesets were built in order to satisfy the assumptions of our cycle resolution approach. Hence, every cycle contained at least one rule that was a candidate for deletion, according to the strategies reported in Section 3.4.1. The average number k of antecedents per rule varied from 3 to 11, depending on the specific experiment. Since a high average number of rule preconditions can lead to a high connectivity degree of the rule dependency graph, the behavior of the algorithm strongly depends on k. As a matter of fact, it is well known that in a strongly connected graph the probability of cycles is high. For this reason, execution times are expected to grow with k. Experimental results are shown in Fig. 4. Execution times are dominated by the reduction of the rule dependency graph of the joined ruleset to the cycle graph (see Section 3.4.1). We are currently investigating an optimization for integrating the cycle graph construction with the application of contractions of the heuristic algorithm of


Fig. 4. Execution times of the CDR algorithm.

Fig. 5. Comparison between the number of detected cycles and the number of pruned rules.

Levy and Low. However, execution times of the CDR algorithm are acceptable for rulesets composed of a few hundred rules.

Although the heuristic algorithm for the FVS problem is well studied, we have performed experiments for evaluating the ratio between the number of cycles encountered in the joined logic programs and the number of rules to be pruned in order to remove all cycles. Fig. 5 shows a comparison between the number of rules that are pruned by the CDR algorithm and the number of cycles in the joined logic program. In this experiment, the average number of preconditions per rule is seven. Results show that the number of pruned rules grows less rapidly than the number of cycles in the rule dependency graph.

Even if cycles in the rule dependency graph can be generated when policies of multiple entities are joined, we expect that in real scenarios this would happen quite rarely. As a matter of fact, in our case rules are generally declared for inferring the values of complex context data on the basis of the values of simpler ones. As an example, a rule could determine the preferred media quality on the basis of raw data such as available bandwidth and device capabilities. Since it is quite unlikely that the value of simple data can be inferred on the basis of more complex ones, cycles in rulesets are quite unlikely to happen. For this reason, we have performed a series of experiments for assessing how the algorithm for cycle detection affects the total time of multi-entity policy evaluation in the case in which the rule dependency graph of the joined logic program is acyclic. In this case, the only tasks to be executed are rule parsing, graph construction, and depth-first search.
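The acyclicity test mentioned above amounts to a standard depth-first search for back edges on the rule dependency graph. The following is a minimal sketch of such a test (our own simplification, not the CDR code); the graph is a plain dictionary mapping each rule to the rules it depends on.

def has_cycle(graph):
    """Detect a cycle in a rule dependency graph given as a dict mapping each
    rule to the list of rules it depends on.
    Iterative DFS with three colours: 0 = unvisited, 1 = on stack, 2 = done."""
    colour = {node: 0 for node in graph}
    for start in graph:
        if colour[start]:
            continue
        stack = [(start, iter(graph.get(start, ())))]
        colour[start] = 1
        while stack:
            node, children = stack[-1]
            advanced = False
            for child in children:
                if colour.get(child, 0) == 1:
                    return True          # back edge found: cycle detected
                if colour.get(child, 0) == 0:
                    colour[child] = 1
                    stack.append((child, iter(graph.get(child, ()))))
                    advanced = True
                    break
            if not advanced:
                colour[node] = 2
                stack.pop()
    return False

# r5 depends on r1..r4 (e.g., MediaQuality needs NetSpeed); no cycle here.
print(has_cycle({"r5": ["r1", "r2", "r3", "r4"],
                 "r1": [], "r2": [], "r3": [], "r4": []}))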

For these experiments we used artificial rulesets of increasing cardinalities. Each rule in the rulesets had a variable number of random subgoals, varying from 3 to 15 depending on the specific experiment. Rulesets were properly built in order to avoid recursion. Experimental results are shown in Fig. 6.

Execution times grow linearly with the size of the ruleset, and are strongly correlated with the number of atoms in the rule preconditions. This behavior is motivated by the fact that rule preconditions generate edges in the rule dependency graph of the logic program, and execution time is dominated by the construction of the data structures encoding the rule


Fig. 6. Execution times of the CDR algorithm with acyclic rulesets.

Fig. 7. Rulesets evaluation time.

dependency graph. Since the execution time of cycle detection is of the same magnitude as the policy evaluation time (see Section 4.2), cycle detection can be performed without invalidating the feasibility of our logic programming approach.

4.2. Experiments with the ad-hoc inference engine

In order to estimate the feasibility of the evaluation of policies expressed in the logic programming language we defined, we performed experiments using the ad-hoc inference engine we developed, and artificial rulesets of various cardinalities. The experimental setup was the same as for the experiments reported in Section 4.1. In this set of experiments, each rule in the rulesets had a variable number of random subgoals (from 3 to 15, depending on the specific experiment). Every ruleset contained three conflicting rules for each head predicate. Rulesets were built in order to avoid recursion, and to allow a random rule to fire for each set of conflicting rules over an attribute. Since policies are parsed during cycle detection, execution times do not include parsing but just policy evaluation.

Results in Fig. 7 show that evaluation times exhibit a linear increase with the size of the ruleset. This behavior is consistent with the theoretical result of Theorem 2. Policy evaluation is particularly efficient. As an example, a ruleset of 400 rules is evaluated in around 10 ms, even if rules are quite complex (e.g., having 15 preconditions each).

In order to compare our inference engine with a widely adopted solver, we performed some experiments using the DLV reasoner [30]. The setup was the same as in the previous experiment. The syntax of rules was slightly modified in order to be accepted by the DLV parser; however, the semantics of the logic program was preserved. We compared the execution time of DLV with that of our inference engine. In order to obtain a fair comparison, in this experiment


Fig. 8. Comparison with the DLV reasoner.

the execution time of our inference engine includes the time of policy parsing. Results are shown in Fig. 8. As expected, evaluation times are considerably higher with DLV, since the class of logic programs it considers (i.e., Disjunctive Datalog programs) is more complex than ours.

In conclusion, experimental results showed that the algorithms we adopt for cycle detection and policy evaluation are particularly efficient. For instance, rulesets composed of 200 rules are checked for acyclicity and evaluated in around 10 ms. Moreover, execution times grow linearly with the size of the rulesets. The algorithm for cycle resolution has higher complexity; however, the CDR algorithm can execute cycle resolution on rulesets composed of 200 rules having around 90 cycles in around 10 ms. Moreover – as we pointed out before – we believe that in real applications the great majority of joined rulesets will be acyclic.

4.3. Experiments with a prototype service for mobile computing

The following set of experiments was aimed at evaluating the efficiency of our solution with a prototype service for pervasive and mobile computing. As an adaptive prototype we chose the POIsmart distributed system (described in Section 5) for context-aware sharing and retrieval of localized resources.

In this experiment, we evaluated the scalability of CARE with respect to the number of concurrent requests in a realistic scenario, in which adaptation is performed on the basis of a relatively small number of policy rules and context data. Policy rules were defined by domain experts in order to properly adapt the service on the basis of a wide set of context data, including location, activity, interests, preferences, and network characteristics. For each user request, the corresponding ruleset is composed of 12 rules defined by the service provider, and of a variable number of rules randomly chosen from a set of rules possibly defined by the users of the service. The average number of policy rules per request is 26. The set of context data retrieved from distributed sources is composed of 20 items, whose values are randomly chosen from sets of possible values. As a consequence, each single request simulates a different context situation, characterized by different context data and by different sets of policies declared by the different users. The sets of context data and rules are presented in the Appendix.

User requests are intercepted by CARE, which is in charge of forwarding them – together with the aggregated context data – to the POIsmart service. In particular, the tasks performed by CARE are: (i) parsing of the user's request; (ii) retrieval of context data and policies from the profile managers; (iii) merging of context data; (iv) rule transformations in order to apply conflict resolution; (v) execution of the CDR algorithm; (vi) policy evaluation; (vii) forwarding of the request and aggregated context data to the POIsmart service. In order to reduce network latency, in this experimental setup the profile managers belong to the same subnet as the rest of the CARE infrastructure. The modules of CARE are executed on the same machine used in the previous experiments, while the POIsmart server is executed on a different machine. In order to evaluate the scalability in the case of concurrent requests, we issued an increasing number of requests per second (rps), measuring the average execution time of the tasks described above. Results are shown in Fig. 9. Experimental results show that, with a relatively small number of rps (up to 30 rps), the average execution time is almost constant. For larger values of rps we can notice an essentially linear growth of execution time.1

The vertical line for each value of rps corresponds to the standard deviation. The increase of its value with the frequency of requests is due to the fact that requests which cannot be immediately serviced are stored in a buffer, and their execution time includes the time spent in the buffer; clearly, the number of requests stored in the buffer increases with rps.

1 The discontinuity points are due to technical details about the handling of the algorithms’ data structures.


Fig. 9. Execution times with concurrent requests.

(a) POIsmart network. (b) POIsmart clients.

Fig. 10. POIsmart prototype.

5. Prototype adaptive services

Various prototype services addressed to mobile users have been developed that take advantage of CARE for obtaining context data for adaptation and personalization.

The POIsmart system for navigational points of interest. There is a rapidly growing number of users of GPS-enabled mobile devices, and this or a similar technology for accurate localization will eventually be integrated into most mobile phones. As a consequence, a number of services providing location-based information are now available to mobile users. However, these services generally do not take into account data other than the current location of the user. The aim of the POIsmart prototype system is to provide a context-aware service for resource localization that considers not only location but a wider set of context data, including personal interests, device features, and user preferences. In particular, this prototype architecture is devoted to the management and sharing of an extended notion of points of interest (named POIsmarts), which includes semantic tagging (e.g., keywords and categories), references to Web resources (e.g., Web sites), multimedia attachments (e.g., pictures and videos), comments, and votes.

The POIsmart system (sketched in Fig. 10(a) and described in [31]) is coupled with the CARE middleware for the aggregation of context data retrieved from distributed sources and for the evaluation of dynamic adaptation policies, solving possible conflicts. Essentially, the system architecture is based on a peer-to-peer network of POIsmart Web services; the search mechanism is based on the distributed evaluation of a scoring function that provides a relevance value between POIsmarts and the current user context. The scoring function is transformed into a multi-feature query that is locally executed by the peer that receives the request, and propagated into the peer-to-peer network. The system includes client software for various platforms and devices (see Fig. 10(b)), and takes advantage of an external map server for providing maps and directions.
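The exact form of the scoring function is not reported here; the following sketch is only a plausible weighted-sum formulation built on the weight attributes (LocationWeight, InterestsWeight, KeywordsWeight) and the Radius attribute that appear in the Appendix, with hypothetical POI and context fields.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def score(poi, ctx):
    """Illustrative relevance score (not the actual POIsmart function):
    a weighted sum of a distance feature, an interest-match feature and a
    keyword-match feature."""
    dist_m = haversine_km(poi["lat"], poi["lon"], ctx["lat"], ctx["lon"]) * 1000
    f_location = max(0.0, 1.0 - dist_m / ctx["Radius"])     # 1 on the spot, 0 beyond the radius
    f_interest = 1.0 if poi["category"] in ctx["Interests"] else 0.0
    f_keywords = len(set(poi["tags"]) & set(ctx["Keywords"])) / max(len(ctx["Keywords"]), 1)
    return (ctx["LocationWeight"] * f_location
            + ctx["InterestsWeight"] * f_interest
            + ctx["KeywordsWeight"] * f_keywords)

# Hypothetical POI and user context:
ctx = {"lat": 45.46, "lon": 9.19, "Radius": 2500, "Interests": ["restaurants"],
       "Keywords": ["pizza"], "LocationWeight": 0.34, "InterestsWeight": 0.33,
       "KeywordsWeight": 0.33}
poi = {"lat": 45.47, "lon": 9.20, "category": "restaurants", "tags": ["pizza", "italian"]}
print(round(score(poi, ctx), 2))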


An architecture for context-aware adaptation of Web applications. Pervasive and mobile computing environments are characterized by the heterogeneity of users' devices and wireless connection networks, and by frequent changes in the usage context (e.g., as a result of changes in the current user's activity). Hence, in order to tailor the access to Web applications to the current environment and users' situation, techniques for context-aware adaptation of Web contents must be adopted. To this aim, the CARE middleware has been coupled with an intermediary-based architecture for Web content adaptation presented in [32].

The resulting framework (presented in [33]) allows programmers to modularly build complex adaptive services on the basis of simpler building blocks. With that solution, adaptation services can be easily developed to tailor Web access to different classes of devices (e.g., smartphones, PDAs, Tablet PCs), network technologies, modalities (e.g., voice, video), particular kinds of disabilities (e.g., color-blindness, physical disabilities), and geographic location.

Other applications of the CARE middleware. The CARE middleware has been adopted for the adaptation of other context-aware services that will not be illustrated in this paper; these include an adaptive streaming server [14], a proximity marketing Web application [34], and a location-based transcoding service [33].

6. Related work

A number of delivery platforms that take into account context data have been developed by both academic and industrial groups.

Today, most commercial application servers provide personalization and content adaptation solutions. However, they perform adaptation mainly on the basis of the characteristics of the user's device and possibly of her current location and interests, while more complex context data and user preferences are not taken into account. As an example, the personalization scheme of the IBM WebSphere Portal solution includes customization of Web pages based on user profiling by means of statistical models and matching techniques, and content adaptation based on the client device capabilities. The framework includes a repository of mobile device profiles describing the capabilities of a broad range of terminals. Similar solutions are provided by other well-known application servers like BEA WebLogic and OracleAS Wireless. With respect to these frameworks, our middleware makes it possible to take into account a much wider set of context data. As a matter of fact, context data in CARE can be gathered from distributed context sources, and aggregated while resolving possible conflicts.

With respect to the issue of integrating multi-source profile data, our approach is similar to the one adopted by DELI [6] and the Intel CC/PP SDK [35]. DELI is an open-source Java library developed by HP Labs that allows the resolution of HTTP requests containing references to the CC/PP profile of the client device. DELI adopts the profile integration approach of UAProf, which consists in associating a resolution rule with every attribute. Whenever a conflict arises (i.e., when partial profiles provide different values for the same attribute) the resolution rule determines the value to be assigned to the attribute by considering the order of evaluation of partial profiles. DELI is fully integrated with Cocoon, the well-known XML-based application server. The Intel CC/PP SDK [35] proposes an architecture composed of client- and server-side modules for the management of UAProf profiles. Client-side modules execute on Microsoft Pocket PC 2002 devices. The CC/PP profile of the device is kept up-to-date by a monitoring module that is in charge of retrieving static as well as dynamic information about the device status and capabilities. The communication of the CC/PP profile to server applications is obtained by means of a local proxy which intercepts HTTP requests (e.g., originated by the micro-browser of the device) and inserts profile information into the HTTP headers. The main server-side component of this architecture is a module of the Apache Web server that is in charge of retrieving partial profiles by analyzing the HTTP request headers, and of combining them in order to obtain the merged profile. This profile is used by the application logic for adapting the content and its presentation.
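As a toy illustration of this order-based resolution scheme (not the DELI or Intel SDK API; the rule names used here are only indicative), a single attribute can be merged as follows.

def resolve_attribute(partial_profiles, attribute, rule="override"):
    """Order-based conflict resolution for one profile attribute: partial
    profiles are examined in their order of evaluation and the resolution
    rule decides which value survives."""
    value = None
    for profile in partial_profiles:          # evaluation order matters
        if attribute not in profile:
            continue
        if value is None:
            value = profile[attribute]        # first value found
        elif rule == "override":
            value = profile[attribute]        # later partial profiles win
        elif rule == "locked":
            break                             # keep the first value, stop looking
    return value

device_defaults = {"ScreenSize": "240x320", "ColorCapable": "Yes"}
request_override = {"ScreenSize": "320x240"}
print(resolve_attribute([device_defaults, request_override], "ScreenSize"))  # 320x240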

In [36] a different approach to context management is proposed, which is based on the implementation of a distributed storage system on user devices. In that framework, the whole set of context data is retrieved by adaptive applications on a per-request basis. The proposed approach is useful for preserving the privacy of data. However, the intermittent connectivity of mobile devices, along with their limited CPU, storage and power resources, makes it difficult to guarantee the availability of context data, even if sophisticated techniques are provided. Moreover, that framework – like the above-mentioned ones – does not take into account dynamic user preferences. In CARE, user preferences can be modeled by means of policies declared by the user.

In the last few years, various other architectures for context awareness adopting rule-based inference techniques have been proposed.

A Context Modeling Language (CML) has been proposed in [37] as a graphical tool to assist designers in exploring and specifying the context requirements of a context-aware application. A translation process derives facts and relations among data described in the CML model in the form of database tuples. The fact abstraction of CML is used for defining rules implementing the situation abstraction, i.e., conditions on the context describing application behaviors and transitions among possible states. Reasoning on facts and situations dealing with uncertainty and ambiguity is supported through a form of closed world assumption extended to a three-valued logic. The situation abstraction is also the basis of the preference mechanism provided by that framework: alternative transitions of the application from a situation to the subsequent ones are ranked according to context data. These kinds of preferences allow the resolution of possible conflicts previously detected by application designers, but do not provide techniques to detect and resolve conflicts due to unpredictable context configurations.

A mechanism of rule evaluation against user context data is adopted in the Houdini framework [10]. The main goal of that framework is to efficiently support context-aware provision of telecommunication services, while preserving the privacy


of data. In Houdini, sharing of context information is controlled by policies declared by the user. The key component of the architecture is a module that evaluates the requests for context data issued by service providers against the user privacy policies. The proposed policy language, together with the conflict resolution strategy of its inference engine, is similar to ours. However – being primarily focused on adaptation – our policy mechanism is different, since policies are declared by both the user and the service provider in order to determine customization parameters. Hence, our framework includes sophisticated techniques for policy aggregation and conflict resolution.

CARMEN [11] is a middleware for supporting context-awareness in mobile computing. The adaptation technique adopted by CARMEN is different from ours, since it is performed by distributed proxies, while in our framework adaptation is completely delegated to the service provider. The intermediate proxies of CARMEN execute directives obtained by evaluating Ponder [2] policies that manage migration, binding, and access control. The Ponder language turns out to be a good choice for the class of policies used in this middleware, while our policy language is well suited for adaptation rules, being extremely efficient and providing mechanisms for solving conflicts. Moreover, CARMEN adopts a context management mechanism that is different from ours, since context data are stored by LDAP-compliant directory services, while in our framework we make use of CC/PP repositories.

The SweetDeal [38] project investigates rule-based business processes for e-commerce and is based on courteous logic programs [4], which are closely related to the ones in which we encode our policy rules. However, due to the complex application domain addressed in that project, their rule language is more expressive and evaluation cannot be achieved in linear time as in our case.

An architecture for user-side adaptation of applications in a mobile environment is described in [39]. This architecture contains a single profile manager, which is in charge of discovering context services, i.e., applications providing context data. Data retrieved from context services are stored in a central database, and kept up-to-date by them on the basis of triggers. A client-side module is in charge of evaluating adaptation policies against context data, accordingly modifying the behavior of local applications. Users can define policies by specifying priorities among applications as well as among resources of their devices. Consequently, the behavior of applications is adapted to obtain the optimal level of service with respect to the user's requirements. However, it is worth noting that this architecture does not support server-side adaptation.

The GAIA middleware for retrieval and derivation of context data is presented in [40]. GAIA is based on the metaphor of active spaces, i.e., geographical regions coordinated by a context-aware software layer that enables interactions among heterogeneous entities in the space. Context is represented by means of rules expressed in first-order logic, enriched with quantification operators limited to finite domains. With respect to the issue of conflict resolution, we should mention the techniques adopted by ConChat [41], a context-aware messaging application supported by GAIA. Conflicts in ConChat are essentially related to the semantics of words. The strategy for conflict detection and resolution is based on static rules previously defined by experts of the spoken languages. The active spaces introduced by GAIA, and applied to ConChat, provide a powerful mechanism for coordinating the adaptation of services in constrained mobile and pervasive environments. However, the main focus of our work is to support the adaptation of mobile Internet services, whose users are not confined to the same region.

In recent years considerable effort has been devoted to developing standard languages for policy representation. The Policy Core Information Model (PCIM) [1] is an object-oriented information model for the representation of policies developed jointly by the IETF Policy Framework WG and by the Distributed Management Task Force (DMTF). In this model, a policy is a rule that specifies actions to be taken when a set of conditions is met. Two hierarchies of object classes compose this model: structural classes for the representation and control of policies, and association classes for indicating relationships between instances of the structural classes. In the PCIM model, classes are sufficiently generic to allow the representation of policies related to different domains. A similar, yet more security-oriented language for policy representation is Ponder [2], a declarative, object-oriented language which allows actors to be grouped into domains, and rules to be grouped into roles relating to a position in an organization. Since in our architecture profile data are provided by only three actors, we do not need the definition of domains and roles for grouping objects and actors.

Both the above-mentioned languages adopt the ECA (event-condition-action) paradigm. This paradigm – which is extensively used for modeling the dynamic behavior of information systems – does not naturally support rule chaining, since the domain of actions is generally disjoint from that of conditions. On the contrary, we believe that rule chaining is a necessary feature for the kind of application addressed by our framework, since rules can be declared for inferring higher-level context data starting from simpler ones. Moreover, rule chaining is essential for enabling the composition of policies declared by multiple entities.

Considering work on policy conflict resolution, we should mention the PDL language and the monitor concept introduced in [3]. PDL, like many other policy languages, is based on the ECA paradigm. However, its semantics is given in terms of nonmonotonic logic programs, as in our approach. An interesting extension of PDL that allows the specification of preferences regarding the application of monitors is proposed in [42].

An ECA language specifically addressed to ubiquitous computing applications was proposed in [7]. This proposal considers the problem of the integration and conflict resolution of policies declared by different entities. Their solution consists in annotating actions with semantic information about the effects they produce. Detected conflicts are resolved by means of meta-rules that specify the preferred system states.

A possible approach for implementing a form of prioritized conflict resolution is to adopt PLP [5], a Prolog compiler for logic programs with preferences. Programs are compiled by PLP into regular logic programs, that is, the class of logic


programs in which we encode our policies. However, the need for a Prolog compilation phase poses major problems in terms of response time, especially considering that policies can dynamically change and thus the compilation should be performed at run-time.

An interesting class of engines is that of production rule systems (e.g., engines based on the RETE algorithm [43], such as OPS-5, CLIPS, and Jess). These engines encode a built-in conflict resolution strategy that in certain systems can be modified. For instance, in Jess rules can be prioritized, and default conflict resolution strategies can be overridden. However, the use of priorities is discouraged, since it can have a negative effect on performance. Furthermore, production rules adopt the forward-chaining approach, which does not seem to be optimal in our case, as explained in Section 3.4.2.

Datalog engines like DLV and Mandarax would be suitable for evaluating our rulesets after the preprocessing phase (Transformation 2 in Section 3.4.2). Nevertheless, the experimental results shown in Section 4 confirm the intuition that even an optimized Datalog engine is slower than an ad-hoc implementation, since the restrictions we impose on our rulesets can be profitably exploited to improve the evaluation time.

7. Conclusions

In this paper we addressed the problem of designing the reasoning core of a middleware for supporting adaptation in pervasive and mobile computing environments. Our solution is based on the representation of context data in an extended CC/PP formalism, and on efficient context reasoning in a restricted logic programming language. We have presented strategies to deal with conflicting rules, algorithms that implement the strategies, and algorithms to detect and solve possible rule cycles. Our solution is corroborated by both theoretical analyses of the adopted algorithms and extensive experimental evaluations. Experimental results show the applicability of the resulting middleware in large-scale applications.

Acknowledgments

This work has been partially supported by the Italian MIUR (FIRB "Web-Minds" project N. RBNE01WEJT_005 and InterLink II04C0EC1D).

Appendix. Context data and rulesets used in the experiments reported in Section 4.3

A.1. Context data

For each context data item C considered in the experiments, the corresponding value is chosen at random from a set of possible values (v1, v2, . . . , vn). For instance, given the context data AvailableBandwidth and the set {50, 500, 5000} of possible values, each value in the set has a 1/3 probability of being associated with the context data item representing the current available bandwidth. A minimal sampling sketch is given after the list below.

AvailableBandwidth = {50, 500, 5000}
NetSpeed = {high, medium, low}
Bearer = {umts, gsm, gprs}
MediaModality = {text}
SocialActivity = {on_a_date, out_with_friends, out_with_family, strolling_alone}
Radius = {2500}
Transport = {public_transportation, car, underground}
LocationWeight = {0.34}
Location = {milan, rome}
HomeTown = {milan, rome}
InterestsWeight = {0.33}
KeywordsWeight = {0.33}
TrafficConditions = {free, jammed}
CurrentDay = {anniversary, weekday, workingDay, sunday}
CurrentTime = {8, 12, 15, 20, 23}
FuelSensor = {low_on_petrol, medium, full}
Weather = {raining, sunny}
Temperature = {16, 23, 28}
PhysicalActivity = {driving, walking}
Activity = {traveling, relaxing, shopping}
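The sampling procedure can be sketched in Python as follows; the possible_values table is hypothetical and only covers a few of the attributes listed above.

import random

# Each attribute gets one value, uniformly at random, from its set of
# possible values; every simulated request draws a fresh context.
possible_values = {
    "AvailableBandwidth": [50, 500, 5000],
    "Bearer": ["umts", "gsm", "gprs"],
    "Weather": ["raining", "sunny"],
    "PhysicalActivity": ["driving", "walking"],
    # ... remaining attributes from the list above
}

def random_context():
    return {attr: random.choice(values) for attr, values in possible_values.items()}

print(random_context())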


A.2. Rulesets

Set of rules declared by the service provider (each rule is associated with every user request)

r1: NetSpeed(high) :- AvailableBandwidth(X), >=(X,1000).
r2: NetSpeed(medium) :- AvailableBandwidth(X), >=(X,256), <(X,1000).
r3: NetSpeed(low) :- AvailableBandwidth(X), <(X,256).
r4: NetSpeed(very_low) :- Bearer(gsm).
r5: MediaQuality(X) :- NetSpeed(X).
r6: MediaModality(audio) :- PhysicalActivity(driving).
r7: MediaModality(audio) :- PhysicalActivity(walking).
r8: MediaModality(video) :- NetSpeed(high).
r9: MediaModality(text) :- NetSpeed(low).
r10: Radius(1000) :- PhysicalActivity(walking).
r11: Radius(3000) :- Transport(public_transportation).
r12: Radius(5000) :- Transport(car).

Priorities over rules defined by the service provider:
r4 ≻ r3 ≻ r2 ≻ r1
r9 ≻ r8 ≻ r7 ≻ r6
r12 ≻ r11 ≻ r10
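As a small worked example of how these priorities resolve conflicts (an illustration of the intended semantics, not the middleware code), the sketch below encodes r1–r4 with weights reflecting r4 ≻ r3 ≻ r2 ≻ r1 and returns the value produced by the highest-priority rule whose body holds; the context is assumed to provide both AvailableBandwidth and Bearer.

# Hypothetical encoding of provider rules r1-r4; the highest-weight rule
# whose body holds determines NetSpeed.
rules = [
    (4, "very_low", lambda ctx: ctx.get("Bearer") == "gsm"),               # r4
    (3, "low",      lambda ctx: ctx["AvailableBandwidth"] < 256),          # r3
    (2, "medium",   lambda ctx: 256 <= ctx["AvailableBandwidth"] < 1000),  # r2
    (1, "high",     lambda ctx: ctx["AvailableBandwidth"] >= 1000),        # r1
]

def net_speed(ctx):
    for _, value, body in sorted(rules, reverse=True):   # decreasing weight
        if body(ctx):
            return value
    return None

print(net_speed({"AvailableBandwidth": 5000, "Bearer": "gsm"}))  # 'very_low': r4 overrides r1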

Set of rules declared by the users (each rule has a 0.5 probability of being associated with a particular user request)

r13: LocationWeight(0.5) :- Location(X), HomeTown(Y), !=(X,Y), PhysicalActivity(walking).
r14: InterestsWeight(0.2) :- Location(X), HomeTown(Y), !=(X,Y), PhysicalActivity(walking).
r15: KeywordsWeight(0.3) :- Location(X), HomeTown(Y), !=(X,Y), PhysicalActivity(walking).
r16: LocationWeight(0.2) :- Transport(car).
r17: InterestsWeight(0.5) :- Transport(car).
r18: KeywordsWeight(0.3) :- Transport(car).
r19: Radius(8000) :- Transport(car), CurrentDay(weekend), not TrafficConditions(jammed).
r20: Radius(5000) :- Transport(car), CurrentDay(workingDay).
r21: Radius(3000) :- Transport(underground).
r22: Radius(1000) :- PhysicalActivity(walking), CurrentDay(workingDay).
r23: Radius(500) :- PhysicalActivity(walking), CurrentTime(X), >=(X,21).
r24: Interests(restaurants) :- CurrentTime(X), >=(X,19), <=(X,21), Activity(traveling).
r25: Interests(restaurants) :- CurrentTime(X), >=(X,11), <=(X,13), Activity(traveling).
r26: Interests(gasStations) :- Transport(car), FuelSensor(low_on_petrol).
r27: Interests(pubs) :- SocialActivity(on_a_date).
r28: Interests(discos) :- SocialActivity(out_with_friends).
r29: RestaurantPreference(romantic) :- SocialActivity(on_a_date).
r30: RestaurantPreference(cheap) :- SocialActivity(X), !=(X,on_a_date).
r31: RestaurantPreference(outdoor) :- Temperature(X), >=(X,24), not Weather(raining).
r32: RestaurantPreference(indoor) :- Weather(raining).
r33: Interests(museums) :- CurrentDay(weekend), SocialActivity(out_with_family), Activity(traveling).
r34: Interests(cinemas) :- CurrentTime(X), >=(X,8), SocialActivity(on_a_date).
r35: Interests(stadium) :- CurrentDay(sunday), SocialActivity(out_with_friends).
r36: Interests(museums) :- CurrentDay(sunday), SocialActivity(strolling_alone).
r37: Interests(park) :- CurrentDay(weekend), Weather(sunny), Activity(relaxing).
r38: Interests(cinemas) :- CurrentDay(weekend), Weather(raining), Activity(relaxing).
r39: Interests(malls) :- CurrentDay(weekend), Weather(raining), Activity(shopping).
r40: Interests(malls) :- CurrentTime(X), >=(X,15), CurrentDay(anniversary).

Priorities over rules defined by users:
r13 ≻ r16
r14 ≻ r17
r15 ≻ r18
r23 ≻ r22 ≻ r21 ≻ r20 ≻ r19
r40 ≻ r39 ≻ r38 ≻ r37 ≻ r36 ≻ r35 ≻ r34 ≻ r33 ≻ r28 ≻ r27 ≻ r26 ≻ r25 ≻ r24
r32 ≻ r31 ≻ r30 ≻ r29

References

[1] B. Moore, E. Ellesson, J. Strassner, A. Westerinen, Policy core information model – Version 1 specification, Tech. Rep. RFC 3060, IETF Network Working Group, February 2001.
[2] N. Damianou, N. Dulay, E. Lupu, M. Sloman, The Ponder policy specification language, in: Proceedings of Policies for Distributed Systems and Networks, International Workshop, POLICY 2001, in: Lecture Notes in Computer Science, vol. 1995, Springer, 2001, pp. 18–38.
[3] J. Chomicki, J. Lobo, S.A. Naqvi, Conflict resolution using logic programming, IEEE Transactions on Knowledge and Data Engineering 15 (1) (2003) 244–249.
[4] B. Grosof, Prioritized conflict handling for logic programs, in: Proceedings of the International Logic Programming Symposium, ILPS, 1997, pp. 197–211.
[5] J. Delgrande, T. Schaub, H. Tompits, A framework for compiling preferences in logic programs, Theory and Practice of Logic Programming 3 (2) (2003) 129–187.
[6] M. Butler, F. Giannetti, R. Gimson, T. Wiley, Device independence and the web, IEEE Internet Computing 6 (5) (2002) 81–86.
[7] C.S. Shankar, A. Ranganathan, R.H. Campbell, An ECA-P policy-based framework for managing ubiquitous computing environments, in: Proceedings of the 2nd Annual International Conference on Mobile and Ubiquitous Systems, MobiQuitous 2005, IEEE Computer Society, 2005, pp. 33–44.
[8] A. Rakotonirainy, J. Indulska, S.W. Loke, A.B. Zaslavsky, Middleware for reactive components: An integrated use of context, roles, and event based coordination, in: Middleware 2001, IFIP/ACM International Conference on Distributed Systems Platforms, in: Lecture Notes in Computer Science, vol. 2218, Springer, 2001, pp. 77–98.
[9] H. Chen, T. Finin, A. Joshi, Semantic web in the context broker architecture, in: Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications, PerCom 2004, IEEE Computer Society, 2004, pp. 277–286.
[10] R. Hull, B. Kumar, D. Lieuwen, P. Patel-Schneider, A. Sahuguet, S. Varadarajan, A. Vyas, Enabling context-aware and privacy-conscious user data sharing, in: Proceedings of the 2004 IEEE International Conference on Mobile Data Management, IEEE, 2004, pp. 187–198.
[11] P. Bellavista, A. Corradi, R. Montanari, C. Stefanelli, Context-aware middleware for resource management in the wireless internet, IEEE Transactions on Software Engineering, Special Issue on Wireless Internet 29 (12) (2003) 1086–1099.
[12] G. Klyne, F. Reynolds, C. Woodrow, H. Ohto, J. Hjelm, M.H. Butler, L. Tran, Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies 1.0, W3C Recommendation, W3C, http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/, January 2004.
[13] C. Bettini, D. Riboni, Profile aggregation and policy evaluation for adaptive internet services, in: 1st Annual International Conference on Mobile and Ubiquitous Systems, MobiQuitous 2004, IEEE Computer Society, 2004, pp. 290–298.
[14] C. Bettini, D. Maggiorini, D. Riboni, Distributed context monitoring for the adaptation of continuous services, World Wide Web Journal (WWWJ), Special Issue on Multi-Channel Adaptive Information Systems on the World Wide Web 10 (4) (2007) 503–528.
[15] A. Agostini, C. Bettini, D. Riboni, Loosely coupling ontological reasoning with an efficient middleware for context-awareness, in: Proceedings of the Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services, MobiQuitous 2005, IEEE Computer Society, 2005, pp. 175–182.
[16] L. Pareschi, D. Riboni, C. Bettini, S. Mascetti, Towards privacy protection in a middleware for context-awareness (short paper), in: Proc. of the 1st International Workshop on Combining Context with Trust, Security and Privacy, Colocated with IFIPTM07, vol. 269, CEUR-WS, 2007, http://CEUR-WS.org/Vol-269/paper4.pdf.
[17] Open Mobile Alliance, User Agent Profile Specification, Tech. Rep. WAP-248-UAProf20011020-a, Wireless Application Protocol Forum, http://www.openmobilealliance.org/, October 2001.
[18] P. Norvig, S. Russell, Artificial Intelligence: A Modern Approach, Prentice Hall Series in Artificial Intelligence, 2003.
[19] K.R. Apt, H.A. Blair, A. Walker, Towards a theory of declarative knowledge, in: Foundations of Deductive Databases and Logic Programming, Morgan Kaufmann, 1988, pp. 89–148.
[20] H. Boley, S. Tabet, G. Wagner, Design rationale of RuleML: A markup language for semantic web rules, in: Proceedings of the International Semantic Web Working Symposium, SWWS, 2001, pp. 381–401.
[21] S. Ceri, G. Gottlob, L. Tanca, Logic Programming and Databases, Springer-Verlag, 1990.
[22] Y. Dimopoulos, A. Torres, Graph theoretical structures in logic programs and default theories, Theoretical Computer Science 170 (1–2) (1996) 209–244.
[23] P. Festa, P. Pardalos, M. Resende, Feedback set problems, in: D.Z. Du, P.M. Pardalos (Eds.), Handbook of Combinatorial Optimization, Supplement, vol. A, Kluwer Academic Publishers, 2000, pp. 209–259.
[24] R.M. Karp, Reducibility among combinatorial problems, in: R.E. Miller, J.W. Thatcher (Eds.), Complexity of Computer Computations, Plenum Press, New York, 1972, pp. 85–103.
[25] A. Shamir, A linear time algorithm for finding minimum cutsets in reducible graphs, SIAM Journal on Computing 8 (4) (1979) 645–655.
[26] T.H. Cormen, C.E. Leiserson, R.L. Rivest, Introduction to Algorithms, McGraw-Hill, 1990.
[27] H. Levy, D.W. Low, A contraction algorithm for finding small cycle cutsets, Journal of Algorithms 9 (4) (1988) 470–493.
[28] T.C. Przymusinski, On the declarative semantics of deductive databases and logic programs, in: Foundations of Deductive Databases and Logic Programming, Morgan Kaufmann, 1988, pp. 193–216.
[29] K.R. Apt, M. Bezem, Acyclic programs, New Generation Computing 9 (3–4) (1991) 335–365.
[30] N. Leone, G. Pfeifer, W. Faber, T. Eiter, G. Gottlob, S. Perri, F. Scarcello, The DLV system for knowledge representation and reasoning, Technical Report cs.AI/0211004, arXiv.org, November 2002.
[31] C. Bettini, D. Riboni, Context-aware web services for distributed retrieval of points of interest, in: Proceedings of the Second International Conference on Internet and Web Applications and Services, ICIW 2007, IEEE Computer Society, 2007.
[32] M. Colajanni, R. Grieco, D. Malandrino, F. Mazzoni, V. Scarano, A scalable framework for the support of advanced edge services, in: Proceedings of the 2005 International Conference on High Performance Computing and Communications, HPCC'05, 2005, pp. 1033–1042.
[33] R. Grieco, D. Malandrino, F. Mazzoni, D. Riboni, Context-aware provision of advanced internet services, in: The 4th Annual IEEE International Conference on Pervasive Computing and Communications (PerCom 2006), Workshops Proceedings, IEEE Computer Society, 2006, pp. 600–603.
[34] A. Agostini, C. Bettini, N. Cesa-Bianchi, D. Maggiorini, D. Riboni, M. Ruberl, C. Sala, D. Vitali, Towards highly adaptive services for mobile computing, in: Proceedings of IFIP TC8 Working Conference on Mobile Information Systems, MOBIS, 2004, pp. 121–134.
[35] M. Bowman, R.D. Chandler, D.V. Keskar, Delivering customized content to mobile devices using CC/PP and the Intel CC/PP SDK, Technical Report, Intel Corporation, 2002.
[36] S. Riché, G. Brebner, Storing and accessing user context, in: Proceedings of Mobile Data Management, MDM 2003, in: Lecture Notes in Computer Science, vol. 2574, Springer, 2003, pp. 1–12.
[37] K. Henricksen, J. Indulska, Developing context-aware pervasive computing applications: Models and approach, Journal of Pervasive and Mobile Computing 2 (1) (2006) 37–64.
[38] B.N. Grosof, T.C. Poon, SweetDeal: Representing agent contracts with exceptions using XML rules, ontologies, and process descriptions, in: Proceedings of the Twelfth International World Wide Web Conference, WWW 2003, 2003, pp. 340–349.
[39] C. Efstratiou, K. Cheverst, N. Davies, A. Friday, An architecture for the effective support of adaptive context-aware applications, in: Proceedings of Mobile Data Management, Second International Conference, MDM 2001, in: Lecture Notes in Computer Science, vol. 1987, Springer, 2001, pp. 15–26.
[40] M. Roman, C. Hess, R. Cerqueira, A. Ranganathan, R.H. Campbell, K. Nahrstedt, A middleware infrastructure for active spaces, IEEE Pervasive Computing 1 (4) (2002) 74–83.
[41] A. Ranganathan, R.H. Campbell, A. Ravi, A. Mahajan, ConChat: A context-aware chat program, IEEE Pervasive Computing 1 (3) (2002) 51–57.
[42] E. Bertino, A. Mileo, A. Provetti, Policy monitoring with user-preferences in PDL, in: Proceedings of Workshop on Nonmonotonic Reasoning, Action, and Change, NRAC'03, 2003, pp. 37–44.
[43] C.L. Forgy, RETE: A fast algorithm for the many pattern/many object pattern matching problem, Artificial Intelligence 19 (1) (1982) 17–37.

Claudio Bettini is a professor of computer science at the DICo Department of the University of Milan. He is also a research professor at the Center for Secure Information Systems of George Mason University, Fairfax, Virginia. He received his Ph.D. in computer science from the University of Milan in 1993. He is a member of ACM SIGMOD.

Linda Pareschi received her M.Sc. degree in computer science from the University of Milan in 2005 and is currently a Ph.D. student at the DICo Department of the same university.

Daniele Riboni received his Ph.D. in computer science from the University of Milan in 2007. He currently holds a postdoc position at the DICo Department of the same university.