Mobile Opportunistic Traffic Offloading
D3.1 – Initial results on offloading foundations and enablers
(public)
WP3 – Offloading foundations and enablers
© MOTO Consortium – 2013
Grant Agreement No.: 317959
Project acronym: MOTO
Project title: Mobile Opportunistic Traffic Offloading
Deliverable number: D3.1
Deliverable name: Initial results on offloading foundations and enablers
Version: V 1.0
Work package: WP 3 – Offloading foundations and enablers
Lead beneficiary: CNR
Authors: Vania Conan (TCS), Filippo Rebecchi (TCS), Raffaele Bruno (CNR), Chiara Boldrini (CNR), Gianni Mainetto (CNR), Andrea Passarella (CNR), Marcelo Dias De Amorim (UPMC), Filippo Rebecchi (UPMC), Engin Zeydan (AVEA)
Nature: R – Report
Dissemination level: PU – Public
Delivery date: 02/10/2013
Table of Contents
List of Figures
List of Tables
Acronyms
Executive Summary
1 Introduction
2 Investigations on the capacity limits of LTE
  2.1 LTE module in NS3
    2.1.1 Air interface
    2.1.2 CQI feedback
    2.1.3 Propagation model
    2.1.4 Fading model
    2.1.5 Data PHY Error Model
    2.1.6 Adaptive Modulation and Coding
    2.1.7 Resource Allocation model
      2.1.7.1 Round Robin (RR)
      2.1.7.2 Proportional Fair (PF)
      2.1.7.3 Maximum Throughput (MT)
      2.1.7.4 Throughput to Average (TTA)
      2.1.7.5 Blind Average Throughput (BAT)
      2.1.7.6 Priority Set (PS)
  2.2 Capacity limits in LTE networks
    2.2.1 Results in pedestrian environments
    2.2.2 Results in vehicular environments
3 The Push&Track system as a technique for opportunistic offloading
  3.1 High level operation of Push&Track
  3.2 Subset selection
  3.3 When to push
    3.3.1 Fixed Objective Function
    3.3.2 Derivative-based Re-injection (DROiD)
      3.3.2.1 Motivation
      3.3.2.2 Re-injection strategy
  3.4 Results
    3.4.1 Evaluation Setup
    3.4.2 Fixed Objective Function
    3.4.3 Derivative-based Re-injection (DROiD)
4 Throughput analysis of opportunistic network protocols
  4.1 Convergence of forwarding protocols in opportunistic networks
  4.2 Modelling the delay of opportunistic routing protocols
    4.2.1 General framework for modelling the delay
    4.2.2 Using the general framework: concrete examples
5 Next Steps
References
Disclaimer
List of Figures
Figure 1: Reference MOTO networking environment.
Figure 2: Example of a space-time path.
Figure 3: Total throughput of a single LTE cell as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
Figure 4: Throughput fairness of a single LTE cell as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
Figure 5: Throughput perceived by a tagged UE as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
Figure 7: LTE link capacity measured by a single mobile UE for different speeds.
Figure 8: Spatial distribution of per-UE throughput for a node density of 2 UEs per km.
Figure 9: Spatial distribution of per-UE throughput for a node density of 10 UEs per km.
Figure 10: Scatter plot of average values and coefficients of variation of the throughputs obtained by each mobile UE for two node densities.
Figure 11: High level operation of Push&Track.
Figure 12: Infection rate objective functions. x is the fraction of time elapsed between a message's creation and expiration dates. x = 1 is the deadline for achieving 100% infection.
Figure 13: Discrete time slope detection performed by Push&Track. For clarity we consider the content creation time t0 = 0.
Figure 14: 1-minute delay: average offload ratio for different combinations of whom and when strategies; three different participation rates are considered. The rows correspond, from top to bottom, to the following whom strategies: Random, Connected Components, Entry-Oldest, Entry-Average, Entry-Newest, GPS-Density, and GPS-Potential. The columns represent the following when strategies, from left to right: Single Copy, Ten Copies, Quadratic, Slow Linear, Linear, Fast Linear, and Square Root.
Figure 15: Offloading efficiency for different re-injection schemes. Different maximum reception delays for messages are considered.
Figure 16: Infrastructure vs. ad hoc load per message sent using the Infra, the Oracle, and the DROiD strategies. Different maximum reception delays for messages are considered.
Figure 17: Example of delays with different forwarding strategies.
Figure 18: Semi-Markov chain for the general delay modelling framework.
Figure 19: Scenario 1 (left) and 2 (right).
Figure 20: Distribution of the delay in Scenario 1 (exponential mobility).
Figure 21: Distribution of the delay in Scenario 2 (exponential mobility).
List of Tables
Table 1: Acronyms
Table 2: Main simulation parameters
Table 3: Summary of forwarding strategies
Table 4: Convergence conditions
Acronyms

Table 1: Acronyms
Acronym Meaning
AAA Authentication, Authorization and Accounting
AMC Adaptive Modulation and Coding
CQI Channel Quality Indicator
eNB Evolved Node B or eNodeB
HARQ Hybrid Automatic Retransmission Request
MAC Medium Access Control
MCS Modulation and Coding Scheme
MIMO Multiple input multiple output
OFDM Orthogonal Frequency Division Multiplexing
PDCP Packet Data Convergence Protocol
RB Resource Block
RBG Resource Block Group
RLC Radio Link Control
RRC Radio Resource Control
RRM Radio Resource Management
TTI Transmission Time Interval
UE User Equipment
Executive summary

This deliverable provides a first set of enabling concepts, techniques and models for improving the capacity of wireless infrastructures through mobile data traffic offloading. Specifically, we present initial results obtained in the first 7 months of WP3 activities (M4 to M11), along three main lines.

The first line is an investigation into the capacity limits of the LTE technology. It clearly shows that there are common cases where LTE users will experience a throughput unlikely to support modern data-oriented multimedia applications. Besides providing initial quantitative evidence about the capacity limitations of LTE, this also provides a clear case for the overall MOTO concept of offloading through opportunistic networking techniques.

In the second part we present an initial solution for exploiting the capacity available in opportunistic networks in the presence of an LTE infrastructure, namely the Push&Track system. Push&Track is a practical technique to improve capacity through offloading, and it offers a concrete example of the aspects that need to be analysed and modelled to correctly design an offloading system.

Modelling one of those aspects is the main objective of the third line reported in this document. Specifically, we describe a stochastic model of the expected delay and number of hops of a set of reference forwarding protocols used in opportunistic networks. As explained later in the deliverable, the expected delay is the main parameter determining the throughput perceived by users. The model thus allows us to characterise the capacity (in terms of throughput) available to users when data is disseminated through an opportunistic network.

None of these three lines has provided final results yet. This was anticipated, and is appropriate considering the time span of the activities described in this deliverable. However, all of them provide significant initial results that both (further) motivate the investigation of the MOTO offloading concept and provide initial tools for the design of effective offloading protocols.
1 Introduction

In this deliverable we start analysing key concepts to characterise foundational aspects of offloading in the reference MOTO networking environments. In particular, this deliverable reports activities related to characterising the capacity properties of the reference MOTO network. For the reader's convenience, Figure 1 shows a conceptual representation of the environment we consider.
Figure 1. Reference MOTO networking environment.
Among the various challenges of this environment, one of the most interesting is characterising the capacity gain that can be achieved when traffic is offloaded from a wireless infrastructure (and in particular from LTE) to an opportunistic network, i.e. a network where communication happens through direct encounters between user devices. Opportunistic networks [32] are mobile self-organizing networks where the existence of a continuous multihop path of simultaneously connected links is not taken for granted. To deliver a message from a source to a destination, opportunistic networks require that a space-time multihop path exists [21] (see Figure 2 for a graphical example). Due to users' mobility and network reconfigurations, different portions of a space-time path can become available at different points in time. For example, in Figure 2 node 2 moves close to node 3 at time t2, while node 5 moves close to the destination at time t3, thus establishing a space-time path between nodes S and D. Intermediate nodes in space-time paths exploit the store-carry-and-forward concept [17][28]: they temporarily store messages addressed to a currently unreachable destination (if "better" next hops are currently not available) until a new portion of the space-time path appears and the message can progress toward the final destination.
Figure 2. Example of a space-time path.
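The store-carry-and-forward idea above can be illustrated with a minimal sketch. The contact trace below is hypothetical, loosely mirroring Figure 2 (the contact between nodes 3 and 5 at t2 is our own assumption, since the figure description only mentions the S-2, 2-3 and 5-D contacts): no end-to-end path exists at any single instant, yet the message still reaches D over time.

```python
# Illustrative sketch of delivery over a space-time path via
# store-carry-and-forward. Contact trace is hypothetical.

def deliver(contacts, source, dest):
    """Replay time-ordered contacts; every current carrier hands the
    message over at each encounter (epidemic-style, for simplicity).

    contacts: list of (time, node_a, node_b) tuples.
    Returns the delivery time, or None if dest is never reached.
    """
    carriers = {source}
    for t, a, b in sorted(contacts):
        if a in carriers or b in carriers:
            carriers.update((a, b))  # store-carry-and-forward handover
            if dest in carriers:
                return t
    return None

contacts = [
    (1, "S", 2),   # t1: S meets node 2
    (2, 2, 3),     # t2: node 2 moves close to node 3
    (2, 3, 5),     # t2: node 3 also meets node 5 (assumed contact)
    (3, 5, "D"),   # t3: node 5 moves close to the destination
]

print(deliver(contacts, "S", "D"))  # delivered at t3, i.e. prints 3
```

Removing any single contact from the trace breaks the space-time path and the message is never delivered, which is exactly why opportunistic forwarding must tolerate disconnection.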
To characterise the capacity of the integrated network, several steps are needed. In this document we report the initial activities of the work package, aimed at three main goals. The first is to understand the capacity limits of LTE networks as perceived by individual users. Once achieved, this will give us a clear picture of the configurations of users' spatio-temporal distributions that require capacity enhancements through opportunistic networks. The second is identifying a reference solution for a complete MOTO system. This allows us to start investigating some aspects of the capacity gain that can be achieved through offloading, and to obtain practical indications on which aspects are most important for understanding and fully characterising these capacity gains. The third is investigating the capacity available in opportunistic networks in isolation. This deliverable reports the status of the MOTO WP3 activities along these three lines of research. Note that we consider capacity as the throughput perceived by individual users of the network, as well as by the entire cell, rather than as the capacity of the network in the information-theoretic sense. This, in our opinion, is more appropriate for deriving results of practical applicability, as throughput is one of the key elements of the network performance perceived by users, and thus of the resulting Quality of Experience. Also note that these results mainly come from the work of Task 3.2. Work has also been carried out in Tasks 3.1 and 3.3, which will be reported in the corresponding scheduled deliverables of the WP.
The following three sections are devoted to these lines. Specifically, in Section 2 we present the initial results we have obtained on the limits of LTE capacity. We have used the reference simulation platform of the project, NS3, to start an extensive simulation-based measurement campaign. We aim at highlighting the limits of LTE (in terms of the throughput experienced by a "tagged" user, as well as of the overall cell throughput) in some of the scenarios identified in WP2 of the project (e.g., crowds and vehicular environments). Specifically, up to now we have considered the performance of static users in a single cell, with respect to the number of users served by the same eNB and to the scheduling algorithm executed by the eNB. We have then started considering mobile vehicular environments, to understand the performance when users move across multiple eNBs populated with a number of other users. Our results confirm that enforcing throughput fairness among the users in a cell and maximizing the cell throughput are two contrasting objectives, and a trade-off is generally sought by the operator when implementing a radio resource allocation strategy at the eNB. Furthermore, our results already highlight some interesting properties and cases where the LTE network alone does not provide acceptable throughput to the user, considering the likely demands in terms of data traffic. Specifically, as expected, the throughput perceived by a tagged user is highly dependent on the quality of the wireless link between the tagged user and the eNB. When the tagged user is close to the eNB it generally obtains a stable throughput; beyond a critical distance, however, throughput performance falls steeply. In addition, the exact throughput behaviour of a tagged user depends in a complex manner on a variety of factors beyond channel conditions, including the history of perceived throughputs.
Section 3 deals with the second line of research. We have considered the Push&Track system (originally proposed by some of the MOTO partners in [43]) as a practical solution for integrating wireless infrastructures with opportunistic networks. In this context, we present the overall Push&Track system architecture. In addition, we discuss two adaptive re-injection strategies to finely control the pace at which contents are disseminated. The results presented in this document show that such a solution can be implemented efficiently. The integration between LTE and opportunistic networks gives users the benefits of both "worlds", e.g. the possibility of offloading part of the traffic from possibly congested LTE networks without losing the timeliness of delivery (when needed), which cannot be guaranteed by conventional opportunistic-only offloading strategies. We have used a simplified simulator that abstracts the LTE protocol stack, in order to focus on the important factors influencing message propagation. In particular, we show through simulation that Push&Track is able to save more than 50% of the LTE traffic, even in the case of tight delivery constraints (on the order of a few minutes or less).
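The core of a Push&Track "when" strategy can be sketched in a few lines: the infrastructure tracks the fraction of infected nodes and re-injects copies whenever it falls below a target objective function of the elapsed message lifetime. The linear and square-root shapes below are illustrative stand-ins for the objective functions of Section 3.3.1; the function names and parameter values are our own, not taken from the Push&Track paper.

```python
# Minimal sketch of a fixed-objective-function re-injection check, in
# the spirit of Push&Track. x is the fraction of the message lifetime
# already elapsed; x = 1 is the deadline for 100% infection.
import math

def linear_objective(x):
    return min(1.0, x)

def sqrt_objective(x):
    return min(1.0, math.sqrt(x))

def should_reinject(infected, total, elapsed, lifetime, objective):
    """Re-inject through the infrastructure when the observed infection
    ratio falls below the target set by the objective function."""
    x = elapsed / lifetime
    return infected / total < objective(x)

# Halfway through a 60 s lifetime, 40% of nodes hold the content:
print(should_reinject(40, 100, 30, 60, linear_objective))  # True: 0.40 < 0.50
# Earlier on (x = 0.3) the linear target is still below 40%:
print(should_reinject(40, 100, 18, 60, linear_objective))  # False: 0.40 >= 0.30
```

A more aggressive objective (e.g. the square root, whose target at x = 0.3 is about 0.55) triggers re-injection earlier, trading infrastructure load for delivery guarantees.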
Section 4 presents the initial results we have obtained to characterise the capacity of opportunistic networks. In this case we aim at deriving analytical models of the throughput in opportunistic networks, so as to obtain analytical tools for understanding the capacity gain in an integrated network (note that one of the
next steps planned for the analysis of LTE is deriving similar analytical models). We discuss the two main aspects that need to be considered from this standpoint. First, we consider the problem of convergence, i.e. the possibility that the expected delay of messages from source to destination is infinite. As discussed in Section 4 this case can occur, and it practically means that messages can be trapped at relays from which they cannot exit, depending on the forwarding policy used. Although we are working on this topic in the framework of MOTO, we have not yet obtained original results to present. Therefore, we summarise the main background results we obtained previously, to describe the starting point from which we proceed within MOTO. Then, we present original results that provide an analytical model of the delay achieved by messages in a number of mobility settings and with a range of forwarding protocols. As discussed in Section 4, characterising the delay is the most important step in deriving models for the throughput. We have derived a model providing closed-form expressions for the delay in heterogeneous mobility settings (i.e., when the characteristics of the contact patterns change across different pairs of nodes), and with different types of routing (representative of state-of-the-art solutions in the literature). Using this model, we have characterised the delay of the protocols in these settings, highlighting the reasons why some protocols behave better or worse than others. This analysis shows examples of how our model can be used in practice.
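The flavour of these delay models can be conveyed with a toy numerical check. Under the exponential inter-contact assumption used in the "exponential mobility" scenarios of Section 4.2, the delay of a forced two-hop path through a relay R is the sum of two exponential stages, so its expectation is simply 1/lam_sr + 1/lam_rd. The rates below are arbitrary example values, not taken from the report's scenarios.

```python
# Monte Carlo check of a closed-form delay expression under exponential
# inter-contact times: E[delay] = 1/lam_sr + 1/lam_rd for a two-hop
# path S -> R -> D (residual inter-contact times are again exponential
# by memorylessness, so the two stages add).
import random

random.seed(42)

def mc_two_hop_delay(lam_sr, lam_rd, runs=200_000):
    """Estimate the mean delay: time until S meets R, plus time until
    R subsequently meets D."""
    total = 0.0
    for _ in range(runs):
        total += random.expovariate(lam_sr) + random.expovariate(lam_rd)
    return total / runs

lam_sr, lam_rd = 0.5, 0.25            # contacts per time unit (example)
analytical = 1 / lam_sr + 1 / lam_rd  # closed form: 2.0 + 4.0 = 6.0
estimate = mc_two_hop_delay(lam_sr, lam_rd)
print(analytical, round(estimate, 2))
```

The same simulate-and-compare pattern extends to the heterogeneous-rate, multi-protocol settings analysed in Section 4.2, where the closed forms come from the semi-Markov framework instead of a simple sum.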
Finally, Section 5 discusses the main directions of future work in the work package related to these activities.
2 Investigations on the capacity limits of LTE

LTE technology promises to improve cell performance in terms of coverage, spectral efficiency and throughput by exploiting a mix of advanced Radio Resource Management (RRM) functions, including enhanced OFDMA-based and Multiple Input Multiple Output (MIMO) communications, Channel Quality Indicator (CQI) reporting, link adaptation through Adaptive Modulation and Coding (AMC), Hybrid Automatic Retransmission Request (HARQ), and advanced radio resource allocation strategies. However, the use of more sophisticated physical and MAC layer functions makes the capacity analysis of LTE networks more complex. This difficulty is further exacerbated by the fact that LTE capacity may be influenced by many factors, such as the radio environment, traffic profiles, mobility patterns, and so on. Thus, simulators are fundamental tools for assessing the performance of LTE networks, because they provide the flexibility to test large-scale networks and to modify environment attributes in a controlled manner.

As discussed in the Introduction, in our performance study we adopt a twofold perspective. On the one hand, we consider the operator's point of view and evaluate the cell capacity in terms of "average" and/or "aggregate" performance. For instance, one goal an operator may want to reach is to maximize the cell spectral efficiency by maximizing the volume of bits that a single cell base station (eNB) is able to deliver to the cell users (UEs). Alternatively, the operator might be more interested in ensuring long-term throughput fairness, which can be maintained within a cell by guaranteeing minimum data-rate requirements. On the other hand, we also consider the users' point of view by investigating the performance of an individual user with respect to the spatial distribution and traffic profiles of the other users in the same cell (or nearby cells).
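The tension between aggregate throughput and fairness can be quantified, for example, with Jain's fairness index, a standard metric that we use here purely for illustration (the report itself does not prescribe it): J = (sum x_i)^2 / (n * sum x_i^2), where J = 1 means perfectly equal per-UE throughputs and J = 1/n means one UE takes everything. The throughput values below are hypothetical.

```python
# Jain's fairness index over per-UE throughputs (illustrative metric).

def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return s * s / (n * sq)

# A max-throughput-style allocation may starve cell-edge UEs (high
# aggregate, low fairness), while an even round-robin-like split does
# the opposite. Values in Mbit/s, hypothetical:
mt_alloc = [9.0, 8.5, 0.3, 0.2]
rr_alloc = [4.5, 4.5, 4.5, 4.5]
print(round(jain_index(mt_alloc), 2))  # well below 1
print(jain_index(rr_alloc))            # exactly 1.0
```

Both allocations deliver the same 18 Mbit/s aggregate here, which makes the fairness gap between them easy to see in isolation.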
It is important to point out that most existing studies on individual users' performance in LTE networks have focused on edge users, in order to characterize cell coverage. In our study, instead, we are more concerned with capacity issues, and thus explore a wider range of possibilities. For instance, our initial results suggest that in some scenarios even users that are not far from the eNB experience reduced throughputs. Thus, the goal of our capacity analysis is to identify some of the common use cases in which an individual user receives performance that is unsatisfactory, at least for data-intensive applications. It is straightforward to recognize that a simulation-based study is affected by some intrinsic limitations. One of the most important is the use of simplified models to keep both the implementation complexity and the computational costs manageable. In the following performance study we have used the reference simulation platform of the project, NS3, a popular object-oriented, event-driven, packet-level open-source simulator that includes not only a complete IP stack, modules for common network elements, and packet tracking capabilities, but also simulation models for the complete LTE radio protocol stack (RRC, PDCP, RLC, MAC, PHY) [1]. Deliverable D.5.1.1 will provide a comprehensive description of the MOTO simulation tool environment and a detailed overview of the LTE-EPC simulation model in NS3. In this section, we give a brief overview of the features of the NS3 LTE module that most affect capacity performance, with particular focus on channel models, OFDMA radio resource management and QoS-aware packet scheduling. After this introduction we report our initial results on the assessment of the performance perceived by individual users, and we develop a first understanding of capacity limits and resource sharing problems in LTE networks, which might be addressed by exploiting offloading techniques based on opportunistic communications.
2.1 LTE module in NS3

The LTE simulation model in NS3 includes core network interfaces, protocols and entities, but the procedures of the LTE radio protocol stack reside entirely within the UE and eNB nodes. In the following we overview the implementation of the main physical and MAC layer functions, with special focus on channel models and packet schedulers.
2.1.1 Air interface

The LTE air interface is based on OFDM, which supports high data rates with low inter-symbol interference. In particular, LTE uses OFDMA on the downlink to simplify the UE's receiver, and SC-FDMA on the uplink to reduce the cost and power consumption associated with the UE's transmitter. The LTE air interface is built around a frame structure that is further divided into subframes, slots and Resource Blocks (RBs). A Resource Block Group (RBG) consists of multiple RBs in a single slot. Radio resource scheduling decisions in LTE are always made in units of RBs or RBGs. Specifically, each 10 ms frame is divided into ten 1 ms subframes, with each subframe further divided into two 0.5 ms slots (1 ms is also the Transmission Time Interval, or TTI). In principle, a slot may consist of a variable number of OFDM symbols in the time domain, depending on the cyclic prefix in use; however, the LTE module assigns fourteen OFDM symbols to each subframe. In the frequency domain, each RB consists of 12 sub-carriers that occupy a bandwidth of 180 kHz. The total number of RBs that can be allocated in a slot is variable and depends on the frequency band assigned to the eNB. According to the standard [5], the downlink control frame starts at the beginning of each subframe and lasts up to three symbols across the whole system bandwidth, where the actual duration is indicated by the Physical Control Format Indicator Channel (PCFICH). The allocation information is then mapped onto the remaining resources, up to the duration defined by the PCFICH, in the so-called Physical Downlink Control Channel (PDCCH). Since the scheduler does not estimate the size of the control region, the PCFICH and PDCCH are modelled as the transmission of a control frame of fixed duration (3/14 ms) spanning the whole available bandwidth. This implies that a single transmission block models the entire control frame, with a fixed power across all the available RBs.
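The resource-grid arithmetic above is easy to sanity-check. The bandwidth-to-RB mapping below uses the standard LTE channel configurations (a known fact about LTE, though not listed in the text); the per-RB figures come directly from the description above, together with the standard 15 kHz sub-carrier spacing.

```python
# Quick arithmetic check of the LTE frame and resource grid.
SUBCARRIERS_PER_RB = 12
SUBCARRIER_SPACING_KHZ = 15
RB_BANDWIDTH_KHZ = SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_KHZ  # 180 kHz

# Standard LTE channel bandwidths (MHz) and the RBs they carry:
RBS_PER_CHANNEL = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

frame_ms, subframes_per_frame, slots_per_subframe = 10, 10, 2
tti_ms = frame_ms / subframes_per_frame  # 1 ms, one subframe per TTI

for bw_mhz, n_rb in RBS_PER_CHANNEL.items():
    occupied_khz = n_rb * RB_BANDWIDTH_KHZ
    print(f"{bw_mhz:>4} MHz -> {n_rb:>3} RBs, {occupied_khz} kHz occupied")
```

Note that the occupied bandwidth (e.g. 100 x 180 kHz = 18 MHz for a 20 MHz channel) is smaller than the nominal channel bandwidth, the remainder being guard band.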
2.1.2 CQI feedback

Channel Quality Indicator (CQI) feedback is generated according to what is specified in [7]. However, among the seven CQI transmission modes specified in the standard for the downlink, the LTE simulation model implements only two: i) periodic wideband CQI, i.e., a single value of channel state deemed representative of all RBs in use; and ii) inband CQIs, i.e., a set of values representing the channel state of each RB. In the downlink, the CQI feedbacks are currently evaluated according to the SINR perceived on the control channels (i.e., PDCCH + PCFICH), in order to obtain an estimate of the interference when all the eNBs are transmitting simultaneously. In the uplink, two types of CQIs are implemented: i) SRS-based, periodically sent by the UEs, and ii) PUSCH-based, calculated from the actual transmitted data.
2.1.3 Propagation model Several propagation models are available in ns3, ranging from the simple Friis and Two-‐Ray propagation models to the more sophisticated and realistic Nakagami and Jakes propagation models. However, the propagation model most commonly adopted for LTE evaluation is an extension of the popular Okumura Hata model [2], known as the COST231 [3]. COST231 extends the Okumura Hata model for the frequency range from 1500 MHz to 2000 MHz, and to model more accurately urban, as well as suburban and open environments. In the following we detail the models adopted. The pathloss expression of the COST231 OH is:
L = 46.3 + 33.9 log f − 13.82 log hb + (44.9 − 6.55 log hb) log d − F(hM) + C,

where F(hM) = (1.1 log f − 0.7) hM − (1.56 log f − 0.8) for medium and small size cities, while F(hM) = 3.2 (log(11.75 hM))² for large cities; C = 0 dB for medium-size cities and suburban areas, while C = 3 dB for large cities. The parameters in the above formula are: the frequency f (MHz), the eNB height above the ground hb (m), the UE height above the ground hM (m), and the distance d (km). The extension of the standard OH model for suburban areas is

LSU = LU − 2 (log(f/28))² − 5.4,

where LU is the pathloss in urban areas. The extension of the standard OH model for open areas is

LO = LU − 4.78 (log f)² + 18.33 log f − 40.94
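As an illustration, the COST231 pathloss expression above can be coded directly (a minimal sketch; the example antenna heights and distance are arbitrary values, not taken from the deliverable):

```python
import math

def cost231_pathloss_db(f_mhz, hb_m, hm_m, d_km, large_city=False):
    """COST231 Okumura-Hata pathloss, as given in the text.

    f_mhz: carrier frequency (1500-2000 MHz), hb_m: eNB antenna height,
    hm_m: UE antenna height, d_km: eNB-UE distance.
    """
    log = math.log10
    if large_city:
        f_hm = 3.2 * (log(11.75 * hm_m)) ** 2
        c = 3.0
    else:
        f_hm = (1.1 * log(f_mhz) - 0.7) * hm_m - (1.56 * log(f_mhz) - 0.8)
        c = 0.0
    return (46.3 + 33.9 * log(f_mhz) - 13.82 * log(hb_m)
            + (44.9 - 6.55 * log(hb_m)) * log(d_km) - f_hm + c)

# Example: 2 GHz carrier, 30 m eNB, 1.5 m UE, 1 km distance.
print(round(cost231_pathloss_db(2000, 30, 1.5, 1.0), 1))
```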
D3.1 – Initial results on offloading foundations and enablers
WP3 – Offloading foundations and enablers
© MOTO Consortium – 2013 14
2.1.4 Fading model
The online generation of fading profiles may be computationally costly; therefore, the LTE simulation model allows evaluating fading values during the simulation run-time based on pre-calculated traces. Various parameters can be controlled to simulate different fading conditions, including the users' speed, the number of multiple paths (taps) considered, the time granularity and the number of nodes. Since the number of variables is quite high, generating traces for every combination would produce a huge number of very large traces. Therefore, one fading value is generated per TTI, i.e., every 1 ms (the time granularity of the NS3 LTE PHY model), and one fading value per RB (the frequency granularity of the spectrum model used by the NS3 LTE model). Furthermore, the LTE module provides traces for three different scenarios defined in Annex B.2 of [8]: pedestrian (with a node speed of 3 kmph), vehicular (with a node speed of 60 kmph) and urban (with a node speed of 3 kmph). All traces have a duration of ten seconds and are computed for a total bandwidth of 100 RBs.

2.1.5 Data PHY Error Model
The LTE module adopts the well-known Gaussian interference model to compute the received interference, according to which the powers of the interfering signals are summed up to determine the overall interference power. The interference, attenuation and fading models determine the received SINR value of each sub-channel (note that the signal quality received by each sub-carrier in the same sub-channel is usually different). From the SINR samples, an effective SINR value is computed using a link-to-system mapping (LSM) technique. The specific LSM method adopted in the LTE module is the one known as Mutual Information Based Effective SINR (MIESM), which maintains a good level of accuracy while limiting the computational complexity.
Finally, a separate link-level simulator has been used to derive the lookup tables that express the code block error rate (BLER) of each modulation and coding scheme as a function of the effective SINR.
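The effective-SINR computation can be sketched as follows. Note that this is only an illustration of the compress-average-invert idea behind LSM techniques: the real MIESM uses modulation-constrained mutual-information curves, whereas here the Shannon capacity log2(1+γ) is assumed as a stand-in:

```python
import math

def effective_sinr(sinr_linear):
    """Effective-SINR mapping in the spirit of MIESM: map each per-RB SINR
    to a mutual-information value, average, and invert the mapping.

    Assumption: Shannon capacity log2(1+g) stands in for the
    modulation-constrained mutual-information curves of the real MIESM.
    """
    mi = [math.log2(1 + g) for g in sinr_linear]
    avg = sum(mi) / len(mi)
    return 2 ** avg - 1  # inverse of log2(1+g)

flat = effective_sinr([10.0, 10.0, 10.0])       # flat channel: unchanged
selective = effective_sinr([1.0, 10.0, 100.0])  # frequency-selective channel
print(round(flat, 2), round(selective, 2))
```

On a flat channel the effective SINR equals the per-RB SINR; on a frequency-selective channel it falls below the arithmetic mean of the per-RB SINRs, which is the behaviour an LSM is meant to capture.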
2.1.6 Adaptive Modulation and Coding
The LTE module implements an Adaptive Modulation and Coding (AMC) model that is a modified version of the model described in [Piro2011]. Specifically, let i denote the generic user, and let γi be its SINR. We obtain the spectral efficiency ηi of user i using the following equations:
Γ = −ln(5 · BER) / 1.5, with a target BER = 0.00005;

ηi = log2(1 + γi / Γ)
The procedure described in [10] is used to derive the corresponding modulation-and-coding scheme (MCS) for the downlink. The spectral efficiency is quantized based on the CQI samples, rounding to the lowest value, and is mapped to the corresponding MCS. Specifically, the MAC scheduler receives CQI reports from all the UEs in the cell, based on their measurements of the downlink channel. The reported CQI is a number between 0 (worst) and 15 (best) indicating the most efficient MCS that would give a Block Error Rate (BLER) of 10% or less.
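The two AMC equations above translate directly into code (the SINR value in the example is arbitrary):

```python
import math

TARGET_BER = 0.00005
GAMMA = -math.log(5 * TARGET_BER) / 1.5  # SNR gap of the AMC model above

def spectral_efficiency(sinr_linear):
    """Spectral efficiency eta_i = log2(1 + gamma_i / GAMMA)."""
    return math.log2(1 + sinr_linear / GAMMA)

# Example: a UE with a 10 dB SINR.
sinr = 10 ** (10 / 10)
print(round(GAMMA, 2), round(spectral_efficiency(sinr), 2))
```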
2.1.7 Resource Allocation model
The packet scheduler implemented at the eNB is the crucial function of the resource allocation model, because it is in charge of assigning the portions of spectrum shared among the users within each frame, following specific policies. Specifically, the scheduler generates special control messages, called Downlink Control Information (DCI), which indicate the resource allocation for each user. The information in a DCI includes: i) an allocation bitmap which identifies the RBs that will contain the data transmitted by the eNB to each user; ii) the Modulation and Coding Scheme (MCS) to be used in each RB; and iii) the MAC transport block size. Note that LTE supports three different ways of allocating RBs or RBGs in downlink grants. At the
time of writing, the LTE simulation model implements only the Type 0 resource allocation, which uses a bitmap of RBGs, where the RBG size is a function of the channel bandwidth. RBGs may be allocated from across the full channel bandwidth, and allocated RBGs are not required to be contiguous. Many different schedulers have been proposed for LTE, but most of them cannot be deployed in real systems due to both the difficulty of implementing them in real devices and their high computational cost. For these reasons, only a subset of the existing schedulers has been included in the LTE simulation model [1][11]. In the following, we describe the features of the most relevant ones, which have been evaluated in the simulations. First of all, let us introduce some useful notation that will be used in the following sections. Let i, j denote generic users, t the subframe index, and k the resource block index; let Mi,k(t) be the MCS usable by user i on resource block k according to what is reported by the AMC model; finally, let S(M,B) be the TB size in bits, as defined in [6], for the case where B resource blocks are used. Then, the achievable rate Ri(k, t) in bit/s for user i on resource block group k at subframe t is defined as Ri(k, t) = S(Mi,k(t), 1)/TTI.
2.1.7.1 Round Robin (RR)
The RR scheduler is the simplest channel-unaware scheduler supported in the LTE module. It works by dividing the available resources among the active flows, i.e., the logical channels with a non-empty RLC queue. If the number of RBGs is greater than the number of active flows, all the flows can be allocated in the same subframe. Otherwise, if the number of active flows is greater than the number of RBGs, not all the flows can be scheduled in a given subframe; in that case, the allocation in the next subframe will start from the last flow that was not allocated. The MCS adopted for each user is selected according to the received wideband CQIs.
2.1.7.2 Proportional Fair (PF)
Thanks to the CQI feedback, periodically sent from the UEs to the eNB using ad hoc control messages, the scheduler can estimate the channel quality perceived by each UE and hence predict the maximum achievable throughput. As explained above, Ri(k, t) is the achievable rate expected for user i at the t-th TTI over the k-th resource block group. Let Ti(t) be the past throughput performance perceived by user i, which is determined at the end of subframe t using an exponential moving average approach (more details can be found in [1]). Finally, at the start of each subframe t, each RBG k is assigned to the user ik(t) by solving the following optimization problem:
ik(t) = argmax_{j=1,…,N} Rj(k, t) / Tj(t)
In other words, the PF scheduler uses the past average throughput as a weighting factor of the expected data rate, so that users in bad channel conditions will surely be served within a certain amount of time. The scaling factor used in the moving-average estimator of the past throughput determines the time window over which fairness is imposed.
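The PF allocation rule above can be sketched as follows (rates and past throughputs are illustrative numbers; the moving-average update of Tj(t) is omitted):

```python
# Sketch of a proportional-fair RBG allocation over one subframe, using the
# metric above. Rates and past throughputs are illustrative numbers.

def pf_allocate(rates, past_tput):
    """rates[j][k]: achievable rate of user j on RBG k; past_tput[j]: Tj(t).
    Returns, for each RBG k, the selected user i_k = argmax_j R_j(k)/T_j."""
    n_users = len(rates)
    n_rbgs = len(rates[0])
    alloc = []
    for k in range(n_rbgs):
        best = max(range(n_users), key=lambda j: rates[j][k] / past_tput[j])
        alloc.append(best)
    return alloc

# User 0 has the better channel but a high past throughput; user 1 the
# opposite, so PF favours user 1 on both RBGs.
rates = [[10.0, 8.0], [5.0, 6.0]]
past = [10.0, 2.0]
print(pf_allocate(rates, past))  # [1, 1]
```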
2.1.7.3 Maximum Throughput (MT)
The scheduling strategy known as MT aims at maximizing the overall throughput by assigning each RBG to the user that can achieve the maximum throughput in the current TTI. More formally, the user ik(t) to which RBG k is assigned at subframe t is determined as
ik(t) = argmax_{j=1,…,N} Rj(k, t)
Although MT can maximize cell throughput, it cannot provide fairness to UEs in poor channel conditions. Note that the LTE module implements two MT variants: frequency-domain (FDMT) and time-domain (TDMT). In FDMT, at every TTI the MAC scheduler allocates each RBG to the UE with the highest achievable rate
calculated from the sub-band CQIs. In TDMT, at every TTI the MAC scheduler selects the single UE with the highest achievable rate calculated from the wideband CQI.
2.1.7.4 Throughput to Average (TTA)
The TTA scheduler can be considered an intermediate solution between MT and PF. The user ik(t) to which RBG k is assigned at subframe t is determined as:
ik(t) = argmax_{j=1,…,N} Rj(k, t) / Rj(t),
where Rj(t) is the achievable rate for user j at subframe t. The difference between the achievable rates Ri(k,t) and Ri(t) lies in the selection of the MCS value: for Ri(k,t) the MCS is calculated from the sub-band CQI, while for Ri(t) it is calculated from the wideband CQI. In other words, the "average" achievable throughput in the current TTI is used as a normalization factor for the achievable throughput on the considered RBG. Thus, the TTA scheduler guarantees that the best RBs are allocated to each user, and as a consequence it should ensure a strong level of fairness within the temporal window of a single TTI. In fact, the higher a user's overall expected throughput, the lower its metric on a single resource block.
2.1.7.5 Blind Average Throughput (BAT)
The BAT scheduler aims to provide equal throughput to all the UEs served by the eNB. The metric used in BAT is calculated as follows:
ik(t) = argmax_{j=1,…,N} 1 / Tj(t)
Two BAT variants are implemented in the LTE module. In the time-domain BAT (TD-BET), the scheduler selects the UE with the largest priority metric and allocates all RBGs to it. In the frequency-domain BAT (FD-BET), at the start of each TTI the scheduler first selects the UE with the largest priority metric (i.e., the lowest past average throughput) and assigns one RBG to it; it then computes the expected throughput of this UE and compares it with the past average throughputs Tj(t) of the other UEs. The scheduler keeps allocating RBGs to this UE as long as its expected throughput remains the smallest among the past average throughputs Tj(t) of all the UEs. When this is no longer the case, the scheduler allocates RBGs in the same way to the UE that now has the lowest past average throughput Tj(t), and so on, until all RBGs are allocated. The principle behind this is that, in every TTI, the scheduler does its best to achieve equal throughput among all UEs.
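The FD-BET allocation loop described above can be sketched as follows (a simplified version in which the expected throughput of the selected UE grows by a fixed per-RBG rate; all numbers are illustrative):

```python
# Sketch of the FD-BET allocation loop described above: in each TTI, every
# RBG goes to whichever UE currently has the lowest (expected) throughput.

def fd_bet_allocate(past_tput, rate_per_rbg, n_rbgs):
    """past_tput[j]: past average throughput Tj(t); rate_per_rbg[j]: expected
    throughput gain of user j per allocated RBG. Returns RBGs per UE."""
    expected = list(past_tput)       # running expected throughput per UE
    rbgs = [0] * len(past_tput)
    for _ in range(n_rbgs):
        j = min(range(len(expected)), key=lambda u: expected[u])
        rbgs[j] += 1
        expected[j] += rate_per_rbg[j]
    return rbgs

# UE 1 starts far behind, so it receives most of the RBGs.
print(fd_bet_allocate([100.0, 10.0], [10.0, 10.0], 10))  # [1, 9]
```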
2.1.7.6 Priority Set (PS)
The PS scheduler controls the fairness among UEs through a specified Target Bit Rate (TBR), and uses a two-step technique to allocate radio resources. First, the PS scheduler operates in the time domain by selecting, among the UEs connected to the eNB, multiple subsets of users active in the current TTI. Then, RBs are physically allocated to each user based on frequency-selective metrics. The main advantage of this partitioning is that a different policy can be selected in each phase.
More precisely, the PS scheduler implemented in the LTE simulation model divides the UEs with a non-empty RLC buffer into two sets based on the TBR. Set A is composed of all the UEs whose past average throughput is smaller than the TBR; a priority metric is associated to each UE in set A using the same formula as in BET. Set B is composed of all the UEs whose past average throughput is larger than (or equal to) the TBR; a priority metric is associated to each UE in set B using the same formula as in PF. UEs belonging to set A have higher priority than those in set B. The PS scheduler then selects the Nmux UEs with the highest metrics across the two sets and forwards them to the packet scheduler, which allocates RBG k to UE i in a way similar to PF. The only difference is that the past throughput performance perceived by user i is updated only when the i-th user is actually served.
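The time-domain step of PS can be sketched as follows (the BET-style metric 1/T for set A, the PF-style metric R/T for set B, and the Nmux cut are as described above; the throughput values are illustrative):

```python
# Sketch of the PS time-domain selection described above: UEs below the
# Target Bit Rate (set A) are ranked with the BET metric 1/T, UEs at or
# above it (set B) with a PF-style metric R/T, and set A outranks set B.

def ps_select(past_tput, rates, tbr, n_mux):
    """Return the indices of the UEs forwarded to the packet scheduler."""
    set_a = [(1.0 / past_tput[j], j) for j in range(len(past_tput))
             if past_tput[j] < tbr]
    set_b = [(rates[j] / past_tput[j], j) for j in range(len(past_tput))
             if past_tput[j] >= tbr]
    ranked = sorted(set_a, reverse=True) + sorted(set_b, reverse=True)
    return [j for _, j in ranked[:n_mux]]

# UEs 1 and 3 (below a 10 kbps TBR) come first; UE 0 has the best PF
# metric within set B.
print(ps_select([50.0, 4.0, 20.0, 8.0], [100.0, 5.0, 30.0, 6.0], 10.0, 3))
```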
2.2 Capacity limits in LTE networks
In this section we evaluate the capacity limits achieved by the various resource sharing mechanisms described above, in both pedestrian and vehicular scenarios. Specifically, for each of the considered scheduling policies, we show the aggregate cell throughput, the average throughput of a tagged user, and the well-known Jain fairness index.
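For reference, the Jain fairness index used throughout this section is J = (Σ xi)² / (n Σ xi²), which equals 1 under perfect fairness and approaches 1/n when a single user monopolizes the resources (throughput values in the example are illustrative):

```python
# Jain fairness index over a set of per-user throughputs.

def jain_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(round(jain_index([5.0, 5.0, 5.0, 5.0]), 2))         # perfectly fair
print(round(jain_index([20.0, 0.001, 0.001, 0.001]), 2))  # one UE dominates
```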
2.2.1 Results in pedestrian environments
The main goal of this first set of tests is to evaluate the maximum throughput that a tagged UE can obtain in an LTE cell, depending on the cell congestion level, its channel conditions, and the scheduling policy. The main simulation parameters are summarized in Table II. In the considered scenario we investigate a single cell with a radius of 1.5 km, in which we deploy a number N of UEs uniformly at random. An additional tagged UE is positioned at distance D from the centre of the cell. By varying the distance D and the parameter N, we can study the impact of channel interference on the maximum throughput that the LTE technology can ensure to an individual user, as a function of the perceived channel quality, for different congestion levels. To ensure that our results are statistically valid, we replicate each test with 40 different topologies, and we plot both average values and confidence intervals at a 90% confidence level.
Table II: Main simulation parameters
Parameter                   Value
Simulation duration         10 seconds
Number of topologies        20
Number of UEs               [10, 50] + tagged UE
Carrier frequency           2 GHz
Downlink bandwidth          5 MHz
Symbols per TTI             14
Subframe length             1 ms
Sub-carriers per RB         12
Sub-carrier spacing         15 kHz
Fading scenario             pedestrian
eNB transmission power      43 dBm
MAC scheduler               RR, PF, MT, TTA, BAT, PS
TBR for PS scheduler        10 Kbps
UE mobility                 Static
Traffic model               Best effort: infinite buffer
Figure 3 and Figure 4 show, for each of the considered algorithms, the aggregate cell throughput and the fairness index. As expected, the results demonstrate that MT always performs better than the other strategies in terms of overall achieved throughput, but significantly worse in terms of the achieved fairness level. The reason is that MT is able to guarantee a high throughput only to a limited number of users, whereas the rest of the users experience very low throughputs. In addition, the fairness decreases as the number of users increases: as the number of users grows, so does the probability of finding a user close to the eNB that monopolizes the channel. On the other hand, BAT obtains the highest throughput fairness (see Figure 4) because it is designed to equalize the throughput of individual users. However, this approach is highly inefficient from the point of view of the aggregate throughput, because the users with bad channel quality (thus, low expected throughputs) drive the performance of the users with good channel conditions. Interestingly, we can observe that the cell throughput obtained with RR is similar to the one obtained with TTA, and the differences between the two schedulers decrease with the number of users in the cell. The schedulers that obtain the best trade-off between fairness and cell throughput are PF and PS (which is in part derived from PF). This can be explained by the fact that their scheduling decisions take into account the expected data rate that a user would obtain if a given RBG were assigned to it, as well as the past average throughput. Thus, even users with bad channel conditions will receive their share of radio resources in the long term. We can also observe that the impact of parameter D on the cell
throughput decreases with the number of users. This is quite intuitive, because average measurements tend to hide sample variations when the sample size is large. Finally, many studies have shown that cell capacity slightly increases with the number of users in the cell due to the multi-user diversity gain (i.e., the probability of finding a user experiencing good channel conditions at a given time and on a given frequency increases with the number of users in the cell). However, our results do not reveal this property, because the open-area pedestrian scenario is typically affected only by flat fading, which minimizes the multi-user diversity.
[Plots: Cell Throughput [Kbps] vs. D [m] for RR, PF, MT, TTA, BAT, PS; panels (a) N=10, (b) N=20, (c) N=50.]
Figure 3: Total throughput of a single LTE cell as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
[Plots: Fairness Index vs. D [m] for RR, PF, MT, TTA, BAT, PS; panels (a) N=10, (b) N=20, (c) N=50.]
Figure 4: Throughput fairness of a single LTE cell as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
Up to now, we have analysed the cell throughput, highlighting the intrinsic trade-off between fairness and aggregate throughput. The general conclusion is that, in most cases, the higher the fairness, the lower the aggregate throughput obtained in a cell. However, cell throughput measurements do not provide particularly useful insight into the performance perceived by an individual user. An obvious result is that the average user throughput decreases as the number of users increases, because the same amount of resources has to be shared among a larger number of competing UEs. However, this is generally not true when considering a tagged user. Therefore, in Figure 5 we plot the throughput perceived by a tagged user in the same scenarios used to obtain the results reported in Figure 3 and Figure 4. Our results indicate that when the tagged user is close to the eNB, it generally obtains a stable throughput. On the other hand, beyond a critical distance (around 200 metres in the considered fading environment), throughput performance typically falls steeply. In fact, the LTE standard changes the Modulation and Coding Scheme (MCS) assigned to a UE as a function of the reported CQI, and the larger the distance between the tagged user and the eNB, the lower the CQI. However, the exact throughput behaviour of a tagged user depends in a complex manner on a variety of factors beyond channel conditions, including the history of the past average throughput. For instance, with the TTA scheduler there is an intermediate range of distances where the throughput perceived by the tagged user may be even higher than the one obtained when the tagged user is close to the eNB. Another observation is that with the MT scheduler throughput performance is greatly influenced by the topology layout, which explains the huge confidence intervals obtained when the MT scheduler is used.
[Plots: Throughput of the tagged UE [Kbps] vs. D [m]; panels (a) N=10, (b) N=20, (c) N=50.]
Figure 5: Throughput perceived by the tagged UE as a function of the distance of the tagged UE from the eNB. A variable number N of UEs is uniformly distributed in the cell. Downlink traffic flows are saturated.
2.2.2 Results in vehicular environments
The goal of this second set of tests is to evaluate the LTE throughput performance in a typical vehicular environment. To this end, we have considered a straight road segment (e.g., a section of a highway) with four eNBs deployed along the road. The cell radius is set to 1.5 km; thus, each eNB covers a section of the road segment that is 3 km long. The mobile UEs (e.g., mobile phones or onboard wireless transceivers) are initially deployed according to a uniform distribution over the road. Then, the speed of each vehicle is selected uniformly in the range [80, 120] kmph. Thus, the number of UEs attached to the same eNB varies during the simulation, because vehicles can overtake slower vehicles in front of them. The physical layer parameters are the same as those reported in Table II. Regarding the packet scheduler, we have considered only the PF scheduler, because it provides the best trade-off between fairness and cell throughput. In Figure 6, we plot the throughput obtained by a single UE moving at constant speed as a function of the travelled distance, for different velocities. As expected, the throughput shows a bell-shaped trend, because the LTE capacity is orders of magnitude lower at the cell edge than close to the cell centre. Interestingly, the dependence of the throughput on the UE's speed is negligible, at least for the considered range of speeds. This can be explained by the robustness of the LTE technology against the Doppler effect.
[Plot: Throughput (Mbps) vs. X (Km), curves for speed = 80 kmph and speed = 120 kmph.]
Figure 6: LTE link capacity measured by a single mobile UE for different speeds
In Figure 7 and Figure 8 we plot the spatial distribution of the throughput obtained by a varying number of mobile UEs attached to the same roadside eNB. Specifically, we vary the density of mobile UEs in the road segment from 2 UE/km up to 10 UE/km. As pointed out above, each UE selects a speed uniformly in the range [80, 120] kmph. In the figures we show the average, maximum and minimum throughputs measured by a generic UE as a function of the travelled distance within the coverage area of the eNB. As expected, the higher the UE density, the lower the throughput. Furthermore, with a low UE density we can observe a higher relative difference between the minimum and maximum throughput performance that each UE obtains.
[Plot: average, minimum and maximum Throughput (Kbps) vs. X (meters).]
Figure 7: Spatial distribution of per-‐UE throughput for a node density of 2 UEs per km
[Plot: average, minimum and maximum Throughput (Kbps) vs. X (meters).]
Figure 8: Spatial distribution of per-‐UE throughput for a node density of 10 UEs per km
To investigate more in depth how throughput dynamics are affected by node density, in Figure 9 we show two scatter plots that illustrate the correlation between the average throughput obtained by each UE and the coefficient of variation (CV)¹ of the throughput samples, for the two node densities of Figure 7 and Figure 8. We recall that the coefficient of variation is defined as the ratio of the standard deviation to the mean, and it is a normalized measure of the dispersion of a probability distribution or a discrete data set; distributions with CV > 1 are considered high-variance. The plots indicate that all UEs experience a CV of the throughput measurements between 1 and 1.2. In other words, the performance of an average user cannot be considered representative of the performance of each individual user.
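The CV metric used in Figure 9 is straightforward to compute (sample values are illustrative):

```python
import math

# Coefficient of variation: CV = std / mean. CV > 1 marks a high-variance
# throughput process, as observed for the vehicular UEs above.

def coefficient_of_variation(samples):
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return math.sqrt(var) / mean

steady = [100.0, 102.0, 98.0, 100.0]   # stable throughput: CV << 1
bursty = [0.0, 0.0, 400.0, 0.0]        # cell-edge/cell-centre swings: CV > 1
print(round(coefficient_of_variation(steady), 3))
print(round(coefficient_of_variation(bursty), 3))
```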
[Scatter plots: CV vs. Average Throughput (Kbps); left panel 2 UE/km, right panel 10 UE/km.]
Figure 9: Scatter plot of average values and coefficients of variation of the throughputs obtained by each mobile UE for two node densities.
1 CV is a normalized measure of dispersion of a probability distribution or frequency distribution, and it is defined as the ratio of the standard deviation (i.e., the square root of the variance) to the mean.
3 The Push&Track system as a technique for opportunistic offloading
Solving the problem of excessive load on infrastructure networks will require paradigm-altering approaches. In particular, when many users are interested in the same content, it is possible to leverage the multiple ad hoc networking interfaces (e.g., WiFi or Bluetooth) ubiquitous on today's mobile devices to assist the infrastructure in disseminating the content. Subscribers may either form a significant subset of all users, comprising for example all those interested in the digital edition of a particular newspaper, or may include all users in a given area, for example vehicles receiving periodic traffic updates in a city.
The core mechanism behind Push&Track aims at alleviating the load on the operator's infrastructure by reducing redundant traffic. In our vision, mobile nodes may subscribe to various content feeds, of any size, that are distributed from a point inside the infrastructure's access network. Whenever the subscriber base is large enough that islands of ad hoc connectivity exist, Push&Track can leverage them to offload traffic from the infrastructure to the ad hoc radio. The idea is to exploit node mobility and the delay tolerance of a number of content types to help the infrastructure shift a portion of the traffic from the primary (cellular) channel to an alternative (terminal-to-terminal) channel. Recent studies have confirmed this as an alternative solution when many co-located users are interested in the same contents [13][26]. The main limitation of the existing solutions is that they require knowledge of the contact probability of nodes, or a training period; in addition, none of them takes into account nodes that enter or leave the system. Push&Track does not rely on any restrictive hypothesis on the contact statistics, and adapts the offloading process to the current evolution of the dissemination process, leading to much more responsive and efficient offloading levels.
3.1 High level operation of Push&Track
In Push&Track, a subset of mobile users initially receives content from the primary channel and propagates it opportunistically using the ad hoc interface. When a node receives content from a neighbor, it acknowledges the reception to an offloading coordinator through the infrastructure network, forming a feedback loop in the system. This mechanism allows the Push&Track coordinator to monitor in real time the evolution of the content dissemination process. The offloading coordinator continually estimates the actual infection status to decide whether or not to re-inject additional copies in order to boost the content diffusion in the network. Since the acknowledgements sent by mobile nodes on the infrastructure channel are relatively lightweight (compared to the size of the disseminated content), the proposed system allows a considerable reduction of the infrastructure load.
Note that the feedback loop also provides a fallback method to overcome various issues that may appear in the network, such as node failures or mobile users behaving selfishly; the occurrence of these events could heavily impact the opportunistic diffusion [29]. Since opportunistic communications depend heavily on the particular mobility of nodes, only probabilistic guarantees of successful content delivery and reception times can be given. To solve this issue, when we approach the maximum delivery delay D (i.e., the validity of the content) and the time left is equal to the time required to send the message through the infrastructure, denoted as P, the offloading coordination agent enters a panic zone and pushes the content to all uninfected nodes through the infrastructure, guaranteeing a 100% delivery ratio while minimizing the load on the infrastructure. The high-level operation of Push&Track is illustrated in Figure 10.

Figure 10: High level operation of Push&Track
Note that every re-injection decision is expected to bring a benefit to the system, but this benefit depends on the re-injection time and on the target node (to which copies will be sent through the infrastructure). In fact, there is a difficult trade-off to consider. On the one hand, if too many copies are injected in the beginning (in general, earlier injections have more time to diffuse), the system may overestimate the dissemination, as we do not know in advance how nodes will encounter each other. On the other hand, if the system injects too few copies in the beginning and waits for the panic zone to compensate for the lag, many opportunistic encounters might be wasted because of the lack of enough copies in the network. A re-injection is beneficial when the subsequent opportunistic transmissions save additional infrastructure pushes. Of course, the benefit can be null if the offloading coordination agent selects a node that would have received the message later from another node.
3.2 Subset Selection
The following subset-selection strategies are considered by Push&Track when content has to be pushed:
• Random: Push to a random node chosen uniformly among those that have not yet acknowledged reception.
• Entry time: If content subscription is localized, then each node's entry time (i.e., subscription time) is correlated with its position in the interest area. For example, selecting nodes that have the most recent (Entry-Newest) or oldest (Entry-Oldest) entry times should target nodes near the edge of the area, whereas pushing to those that are closest to the average entry time (Entry-Average) should target the middle of the area.
• GPS-based: On top of the existing acknowledgement messages, each node may also periodically inform the control system of its current location. From this information we derive two GPS-based strategies: to ensure rapid replication, the GPS-Density strategy pushes the content to an uninfected node within the highest-density area, while GPS-Potential pushes the content to the node that is furthest away from the other infected nodes.
• Connectivity-based: Nodes can periodically communicate to the Push&Track coordinator a list of their current neighbors. Even though each node still performs opportunistic store-and-forwarding, the control system will have a good, though slightly out-of-sync, picture of the global connectivity graph. The CC (Connected Components) strategy uses this information to push content to a randomly chosen node within the largest uninfected connected component. The idea is to push only one copy per connected component, thereby getting close to the optimal number of pushed copies.
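The CC strategy can be sketched as follows (the connectivity graph and the infected set are illustrative; components are found with a plain breadth-first search):

```python
import random
from collections import deque

# Sketch of the CC (Connected Components) strategy: push one copy to a
# random node inside the largest connected component that contains no
# infected node. Neighbor lists and the infected set are illustrative.

def largest_uninfected_component(neighbors, infected):
    """neighbors: dict node -> list of neighbor nodes; returns the target."""
    seen, components = set(), []
    for start in neighbors:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in neighbors[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if not any(u in infected for u in comp):
            components.append(comp)
    best = max(components, key=len)
    return random.choice(best)

graph = {0: [1], 1: [0], 2: [3, 4], 3: [2], 4: [2], 5: []}
target = largest_uninfected_component(graph, infected={0})
print(target)  # one of the nodes 2, 3, 4
```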
3.3 When to Push
3.3.1 Fixed Objective Function
A simple re-injection strategy is to bind the current infection ratio to a fixed objective function. Let x be the fraction of time elapsed between a message's creation and expiration dates. Each strategy is defined by an objective function (see Figure 11), which indicates, for every 0 ≤ x ≤ 1, what the current infection ratio should be (i.e., the fraction of subscribing nodes that have the content).
D3.1 – Initial results on offloading foundations and enablers
WP3 – Offloading foundations and enablers
© MOTO Consortium – 2013 25
The infection ratio can go down if infected nodes unsubscribe, or up if non-infected nodes unsubscribe. If, at any time, the measured infection ratio, obtained from user-sent acknowledgements, is below the current target infection ratio, then the strategy returns the minimal number of additional copies that need to be re-injected in order to meet that target. Furthermore, when the time left before the deadline equals the time required to push the message directly, the Push&Track coordinator enters a panic zone and pushes the content to all uninfected nodes through the infrastructure, guaranteeing full dissemination.
Fixed Objective Functions may broadly be divided into three categories:
• Slow start: This includes two very simple strategies that push an initial number of copies and then do nothing until the panic zone: the Single Copy and Ten Copies strategies. The objective function for the Quadratic strategy is x². The Slow Linear strategy starts with an x/2 linear objective for the first half of the message's lifetime, and finishes with a (3/2)x − 1/2 objective.
• Fast start: The objective function for the Square Root strategy is √x. The Fast Linear strategy starts with a (3/2)x linear objective for the first half of the message's lifetime, and finishes with an x/2 + 1/2 objective.
• Steady: This is the Linear strategy which ensures an infection ratio strictly proportional to x.
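The fixed objective functions listed above, together with the minimal re-injection rule they imply, can be sketched as follows (a hedged Python sketch; the dictionary and function names are ours):

```python
import math

# Target infection ratio as a function of x, the elapsed fraction of the
# message's lifetime (0 <= x <= 1).
OBJECTIVES = {
    "quadratic":   lambda x: x * x,                                 # slow start
    "slow_linear": lambda x: x / 2 if x <= 0.5 else 1.5 * x - 0.5,  # slow start
    "linear":      lambda x: x,                                     # steady
    "fast_linear": lambda x: 1.5 * x if x <= 0.5 else x / 2 + 0.5,  # fast start
    "square_root": lambda x: math.sqrt(x),                          # fast start
}

def copies_to_inject(objective, x, infected, subscribers):
    """Minimal number of extra copies needed to meet the current target ratio."""
    target = OBJECTIVES[objective](x)
    deficit = target * subscribers - infected
    return max(0, math.ceil(deficit))
```

For example, with the Linear objective at mid-lifetime (x = 0.5), 40 infected nodes out of 100 subscribers yield a deficit of 10 copies to re-inject.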
3.3.2 Derivative-based Re-injection (DROiD)
The general principle behind Push&Track is to adapt to the heterogeneous individual mobility patterns of nodes. This heterogeneity typically produces a stepwise pattern in the epidemic diffusion, alternating plateaux and periods of infection, as in Figure 12. For this reason, a better re-injection decision is taken by analyzing the outlook of the diffusion rather than comparing the actual infection ratio to a fixed objective function. Exploiting this evidence, Push&Track detects plateaux in the evolution of the content diffusion and, if needed, adaptively re-injects additional copies into the system to finely control the pace at which contents are disseminated. Thanks to this adaptive re-injection strategy, Push&Track achieves much better performance than fixed objective functions.
Figure 12: Epidemic diffusion of 6 initial copies in the Rollernet dataset: the diffusion behavior presents three steep zones and three flat zones, resulting from the heterogeneity of encounter probabilities.
Figure 11: Infection rate objective functions. x is the fraction of time elapsed between a message's creation and expiration dates. x = 1 is the deadline for achieving 100% infection.
3.3.2.1 Motivation
Let us now dig into the relationship between mobility patterns and the stepwise pattern in the epidemic diffusion. This phenomenon is intrinsically related to the heterogeneity of contact patterns, i.e., the fact that two different nodes do not, on average, meet the same number of other nodes.
To capture this heterogeneity, we adopt a Marked Poisson Process model of node contacts [19][33]. In this model, the meeting times of any two nodes (i, j) follow a Poisson process with rate λij = λ·pij. The inter-contact times Tij are thus independent exponentials with parameter λij, and the contact matrix C = (pij) captures the patterns of interactions between nodes. In the homogeneous case, C is the identity matrix, i.e., all nodes can see each other with the same probability. At any given instant of the dissemination process, a set S of nodes is infected. We are interested in the random plateau duration TS during which the dissemination does not progress. This corresponds to the random time during which this set of nodes does not meet any other node. Looking at the set of links between nodes in S and its complement, one can see that TS = inf{ Tij : i ∈ S, j ∉ S }. By Poisson calculus, and denoting the cut value ∂S = Σ_{i∈S, j∉S} pij, TS is an exponential random variable with parameter λ∂S [16]. The expected plateau duration, once set S has been reached, is thus 1/(λ∂S).
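The expected plateau duration 1/(λ∂S) can be read off numerically from a contact matrix (a small Python sketch with an illustrative matrix; the names are ours):

```python
def expected_plateau(C, S, lam):
    """Expected plateau duration E[T_S] = 1 / (lam * cut(S)), where cut(S)
    sums p_ij over the links between the infected set S and its complement."""
    n = len(C)
    cut = sum(C[i][j] for i in S for j in range(n) if j not in S)
    return float("inf") if cut == 0 else 1.0 / (lam * cut)

# Two tightly connected nodes {0, 1} weakly linked to node 2: the low cut
# value (0.1 + 0.1) yields a long expected plateau once {0, 1} is infected.
C = [[0.0, 1.0, 0.1],
     [1.0, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
print(expected_plateau(C, {0, 1}, 1.0))  # 1 / (1.0 * 0.2) = 5.0
```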
This simple argument shows that TS is directly related to the structural properties of the contact patterns C. This provides a natural connection between the community structure of the contact graph and the progression (or lack of progression) of the opportunistic dissemination process. Applying these ideas to the graph of contacts C (which represents the probability that two nodes meet) means that a community S of users will spread the message quickly within the group (high conductance), but will reach a plateau once all the nodes in the group have the message, because the weight of inter-cluster edges, and thus the cut value ∂S, is low. This observation motivates our further investigation of adaptive offloading strategies that are able to track the individual mobility of nodes, re-injecting copies when the diffusion runs into a plateau.
Figure 13: Discrete time slope detection performed by DROiD. For clarity we consider the content creation time t0 = 0.
3.3.2.2 Re-injection strategy
We achieve higher offloading efficiency by making the re-injection decision depend not only on the actual dissemination level, but also on the trend of the infection ratio. For instance, using only fixed objective functions, the offloading coordinator reacts too late when the infection ratio is above the objective function but no longer evolving, or overreacts when the infection
evolves well but its instantaneous value still lies under the objective function. Late or overly aggressive re-injections result in a waste of messages pushed through the infrastructure. Another limitation of fixed objective functions is that different objective functions behave differently depending on the content lifetime and network status. With the derivative re-injection strategy, the offloading coordinator keeps in memory a short history of past infection ratio values: each content has an associated tracker that stores the evolution of the infection ratio over a temporal sliding window of size W. As illustrated in Figure 13, at evaluation time step t the offloading coordinator computes a forward difference quotient on the instantaneous infection ratio I(t), which approximates a discrete derivative:
∆I(t) = ( I(t) − I(t − W) ) / W
∆I(·) approximates the slope of the infection ratio and is one of the parameters that influence the re-injection decision. In this case, Push&Track re-injects additional copies of the content whenever the discrete derivative ∆I(·) falls below a threshold ∆lim computed online. The threshold ∆lim varies according to the actual distance from the panic zone and the infection ratio: it is computed as the ratio between the fraction of uninfected nodes and the time remaining before the panic zone. A steeper slope is required when time gets closer to the panic zone or when the infection ratio is lagging (as opposed to the beginning of the infection process). Formally:
∆lim(t) = ( 1 − I(t) ) / ( D − P − t )
where D is the content deadline and P the duration of the panic zone.
As a final step, the injection rate rinj(t) is computed as a piecewise function, depending on the ratio of the current ∆I(t) value and the ∆lim threshold:
rinj(t) =
  c,                               if ∆I(t) ≤ 0
  c · [ 1 − ∆I(t) / ∆lim(t) ],     if 0 < ∆I(t) ≤ ∆lim(t)
  0,                               if ∆I(t) > ∆lim(t)
where c ∈ [0, 1] is a clipping value used to limit the overall amount of re-injected copies in the case of negative values of ∆I. Finally, rinj(t) is multiplied by the number of uninfected nodes to obtain R(t), the number of copies to re-inject at time t:
R(t) = ( 1 − I(t) ) · |N(t)| · rinj(t)
where |N(t)| is the total number of nodes in the network.
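Putting the pieces together, the DROiD decision (difference quotient, online threshold, clipped injection rate, and final copy count) can be sketched in a few lines of Python. This is our reading of the quantities defined above, not the project's code; the argument names (`panic_start`, etc.) are illustrative:

```python
def droid_copies(history, t, W, panic_start, c, n_nodes):
    """Number of copies R(t) re-injected at time t.
    history: dict time -> measured infection ratio I(t)
    W: sliding-window size; panic_start: when the panic zone begins;
    c in [0, 1]: clipping value; n_nodes: total nodes |N(t)|."""
    I_t = history[t]
    dI = (I_t - history[t - W]) / W          # forward difference quotient
    remaining = panic_start - t
    if remaining <= 0:                       # panic zone: push to everyone
        return round((1 - I_t) * n_nodes)
    d_lim = (1 - I_t) / remaining            # minimum acceptable slope
    if dI <= 0:
        r = c
    elif dI <= d_lim:
        r = c * (1 - dI / d_lim)
    else:
        r = 0.0                              # diffusion fast enough, do nothing
    return round((1 - I_t) * n_nodes * r)
```

For example, a flat infection ratio (∆I = 0) at 50% infection with c = 1 and 100 nodes triggers the re-injection of 50 copies, while a slope above ∆lim triggers none.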
3.4 Results
3.4.1 Evaluation Setup
We evaluate Push&Track re-injection strategies using a large-scale vehicular mobility trace of Bologna (Italy) with 10,333 vehicles. This dataset, initially exploited to evaluate cooperative road traffic management strategies within the earlier FP7 iTetris project, covers 20.6 km² comprising 191 km of roads. The dataset is derived from real traffic measurements and turned into a micro-mobility model through the SUMO simulator. From the extracted mobility data, we derive a contact trace using a 100-meter contact threshold. The final trace lasts about one hour; on average, 3,500 vehicles are present at the same time (because of their mobility, some nodes leave while others join during the observation period).
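The derivation of the contact trace from positions, a pair of vehicles being "in contact" whenever their distance is below the 100-meter threshold, can be sketched as follows (illustrative code; the data layout is our own assumption, not the project's tooling):

```python
import math

def contacts(snapshot, radius=100.0):
    """Pairs of vehicles within `radius` metres of each other in one
    position snapshot. snapshot: dict vehicle_id -> (x, y) in metres."""
    ids = sorted(snapshot)
    pairs = []
    for a in range(len(ids)):
        xa, ya = snapshot[ids[a]]
        for b in range(a + 1, len(ids)):
            xb, yb = snapshot[ids[b]]
            if math.hypot(xa - xb, ya - yb) <= radius:
                pairs.append((ids[a], ids[b]))
    return pairs
```

Repeating this over every time step of the mobility output, and recording when pairs enter and leave contact, yields a contact trace of the kind used in the evaluation.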
We built a streamlined event-based simulator with a simple contact-based ad hoc MAC model, in which a node may transmit to only a single neighbor at a time. Transmission times are deterministic, since we do not take into account complex wireless-channel phenomena such as fading and shadowing. The ad hoc routing protocol employed by nodes to disseminate the content is epidemic forwarding.
We investigate how our system performs under tight delivery constraints, with the maximum delivery delay D in the range [30, 180] seconds. We are mostly interested in very short maximum reception delays, on the order of minutes, as otherwise users would not realistically accept to trade off reception delay for cellular capacity. All the results presented in this section are averages over 10 simulation runs. Contents are issued periodically, with the previous one expiring when a new one is created (for now, a single content is active in the system at a time).
3.4.2 Fixed Objective Function
We focus primarily on the aggregate load that flows through the infrastructure and across the ad hoc links. Load measurements take into account acknowledgement messages as well as failed and aborted transfers. We use two reference strategies for evaluation purposes: "infrastructure only" (Infra) and "connected component oracle" (Oracle). In the Infra strategy there is no offloading at all, and the infrastructure is the only means of distributing content. In the Oracle strategy, the offloading coordinator has a real-time picture of the ad hoc connectivity of the entire network and pushes the content to a random node within each existing connected component. We are mainly interested in the offloading efficiency, computed by comparing the infrastructure load of a specific run with the load of the reference Infra strategy, i.e., in the absence of any ad hoc radio.
One of the most interesting results is that the Random re-injection strategy consistently does better than most of the more sophisticated strategies described in Section 3.2, as shown in Figure 14.
Random selection combines the best of all the more complex strategies: it statistically has a high chance of hitting the large connected components and also tends to spread the copies uniformly over the considered area. If one is not willing to deal with the added complexity of a more sophisticated control channel, let alone the privacy concerns raised by localization and/or proximity information, then the simple Random whom-strategy consistently performs very well. As we can see from Figure 14, in the absence of a feedback loop, the choice of the initial number of copies to inject has a huge impact on the offload ratio. Consider the Single Copy and Ten Copies strategies: due to the epidemic propagation, a difference of only 9 initial copies translates into a fourfold difference in final offloading efficiency.
Figure 14: 1-minute delay: average offloading efficiency for different combinations of whom and when strategies; three different participation rates are considered. The rows correspond, from top to bottom, to the following whom strategies: Random, Connected Components, Entry-Oldest, Entry-Average, Entry-Newest, GPS-Density, and GPS-Potential. The columns represent the following when strategies, from left to right: Single Copy, Ten Copies, Quadratic, Slow Linear, Linear, Fast Linear, and Square Root.
On the other hand, the presence of the control loop makes it possible to react and adapt quickly to changing conditions. This allows Push&Track to avoid massive last-minute re-injections upon arriving in the panic
zone, and to achieve excellent offloading efficiency (73% for Slow Start and 72% for Linear at a 100% participation rate). A drawback of this scheme is that it does not propose a single solution but a multitude of objective functions, and different objective functions behave differently depending on the content lifetime, network status, and number of users. For instance, Figure 14 clearly shows that the best-performing objective function is not the same for the 25% and 100% participation rates.
3.4.3 Derivative-based Re-injection (DROiD)
Figure 15: Offloading efficiency for different re-injection schemes. Different maximum reception delays for messages are considered.
For evaluation, we compare the derivative strategy with the Linear and Slow-start strategies, since these give the best results in the 100% participation scenario.
All the Push&Track (PnT in the figures) strategies perform very well in terms of reduced infrastructure load, delivering the majority of traffic through device-to-device communications even in the case of tight delays. As we can see from Figure 15, compared to the Linear and Slow-start strategies, the derivative strategy always leads to better results. The gap between the derivative strategy and the two fixed objective functions increases as the tolerance to delay increases, suggesting a better adaptation to the diffusion evolution. The curve also shows a well-known phenomenon: an increase in the reception delay corresponds to an increase in the offloading efficiency.
Figure 16: Infrastructure vs. ad hoc load per message sent using the Infra, the Oracle, and the DROiD strategies. Different maximum reception delays for messages are considered.
The simulation results plotted in Figure 16 show that DROiD requires roughly the same infrastructure load as the Oracle to guarantee a 100% delivery ratio. Sudden variations in the infection ratio, due to nodes that dynamically enter and leave, are well handled by the feedback mechanism. Although DROiD and Oracle show more or less the same trend in the offloading efficiency curve, this result is achieved through two completely different strategies. On the one hand, Oracle exploits its perfect knowledge of the connectivity status of the network, pushing the content to specific high-potential nodes. On the other hand, the derivative strategy has a much less complete, and slightly out-of-sync, view of the system, and employs the algorithm explained in Section 3.3.2.2 to guess when additional copies of the content are required.
4 Throughput analysis of opportunistic network protocols
The main goal of this part of the analysis is to provide analytical models describing the throughput attainable in the opportunistic part of the MOTO reference networking scenario. In opportunistic networks, the main performance figure that impacts the throughput is the delay experienced by messages sent from a sender to a destination (and forwarded through a given forwarding algorithm). The other key component needed to derive the throughput is the bandwidth available during direct communications between nodes, a topic far less investigated in the framework of opportunistic networks, as it has already been studied in the more general framework of mobile ad hoc networks.
In order to correctly model the delay, we need to separately investigate two aspects. The first one deals with convergence of forwarding algorithms2, i.e. whether a given forwarding algorithm yields finite or infinite expected delay. The second one deals with providing closed form expressions for the delay, in the cases where routing algorithms converge. While the second aspect is intuitive to understand, the first one needs some additional explanation.
As messages follow multi-hop paths across the nodes of the network, their delay is the result of the delay accumulated at each hop along the forwarding path. Therefore, the time between consecutive encounters of a pair of nodes (the intermeeting time) is the elementary component of the overall delay. Thus, knowing the distribution of intermeeting times, one could, in principle, model the distribution of the delay experienced by messages. Unfortunately, there is no agreement on the actual shape of pairwise intermeeting times in real networks. Of the many hypotheses that have been made [22][24][34][36], the most challenging from the forwarding standpoint is the one proposed by Chaintreau et al. [20], who found that intermeeting times extracted from real mobility traces follow a Pareto distribution, i.e., one whose Complementary Cumulative Distribution Function (CCDF) is of the form
P(X > x) = ( b / (b + x) )^α,   b, x, α > 0,
where b is the scale and α the shape parameter. The problem with
Pareto distributions is that their expectation is finite only for certain values of the exponent α. More specifically, the expectation is finite if α > 1, while for α ≤ 1 it diverges to infinity. Since the delay results from the composition of the time intervals between node encounters, depending on the exponent values featured by intermeeting times, the expectation of the delay might diverge. In practical terms, when this happens, messages may be trapped on nodes from which they are not forwarded further (according to the rules of the specific forwarding protocol), thus never reaching the final destination. Therefore, given a specific pattern of node mobility (and, thus, a specific pattern of intermeeting times) and a given forwarding protocol, it is important to know whether that forwarding protocol may yield infinite delay, in order to know whether it can "safely" be used in the network.
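The effect of the shape parameter can be made concrete by sampling the Pareto law above via inverse-transform sampling (an illustrative stdlib-only sketch; the helper name is ours):

```python
import random

def pareto_sample(b, alpha, rng):
    """Inverse-transform sample from the Pareto law with CCDF
    P(X > x) = (b / (b + x))**alpha."""
    u = rng.random()
    return b * (u ** (-1.0 / alpha) - 1.0)

rng = random.Random(42)
# With alpha = 3 > 1 the expectation b / (alpha - 1) = 0.5 exists, and the
# sample mean settles near it; with alpha < 1 it would keep drifting upward
# as ever larger samples dominate the running average.
xs = [pareto_sample(1.0, 3.0, rng) for _ in range(200_000)]
print(sum(xs) / len(xs))  # close to 0.5
```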
In the remainder of this section, we therefore first present the main results obtained by MOTO partners on the convergence problem, and then present an initial model for the delay of forwarding protocols in the case of convergence. The first aspect is background information, as it was obtained by MOTO partners before the start of the project. It is nevertheless briefly presented hereafter, as it is one of the starting points of the activities in the work package. Specifically, we are currently extending these results to more general settings, and we expect to report the new results in the following deliverables of the work package.
2 As the routing and forwarding processes are typically done at the same time in opportunistic networks, the two terms, although conceptually different, are typically used interchangeably in the literature, and in the following of this document.
4.1 Convergence of forwarding protocols in opportunistic networks
We hereafter provide a brief summary of the work presented in [14] (from the CNR team working in MOTO). We assume a network of N mobile nodes. As is typical and accepted in the opportunistic networking literature, we assume that a contact between two nodes lasts long enough to allow the nodes to exchange the messages they have to, according to the rules of the forwarding protocol. Therefore, in the evaluation of the delay, the main role is played by the intermeeting times. Specifically, we assume a general heterogeneous mobility setting, where intermeeting times follow Pareto distributions, with parameters possibly different from pair to pair. Therefore, the CCDF of the intermeeting time between nodes i and j is denoted as follows:
Fij(t) = ( tmin(ij) / ( tmin(ij) + t ) )^αij     (1)
where αij is the shape parameter and tmin(ij) the scale parameter. Note that considering such a heterogeneous
environment (instead of a homogeneous one where all nodes meet with exactly the same distribution) is one of the main contributions of [14] with respect to the previous literature.
From the standard properties of Pareto distributions it follows that the average intermeeting time between i and j is finite if and only if (iff) αij is larger than 1. Another important statistic for this study is the residual of intermeeting times, i.e., the time until the next contact between the two nodes, starting from a random point in time. It is known that, if intermeeting times follow a Pareto distribution, residuals are also Pareto with the same scale parameter and shape parameter reduced by one (i.e., αij − 1 for nodes i and j). It thus follows that residuals have finite expectation iff αij is greater than 2.
In terms of forwarding strategies, the results presented in [14] hold for social-oblivious protocols, one of the two large families that can be identified in the literature. Social-oblivious protocols do not exploit any information about the users' context and social behaviour, but simply hand over the message to the first node encountered (avoiding at most those nodes that have already forwarded the message). The main advantage of these strategies is that they are intrinsically simple and lightweight (practically no information to collect, store, or mine). Despite their simplicity, they are a reference point in the literature, as a number of foundational works on the properties of opportunistic networks have been obtained considering this class of protocols. To accurately represent the different variants in this class, we identify three main groups, differing in the number of hops allowed between source and destination, the number of copies generated, and whether the source and relay nodes keep track of the evolution of the forwarding process. First, forwarding strategies can be single-copy or multi-copy. In the former case, at any point in time there can be at most one copy of each message circulating in the network. In the latter, multiple copies can travel in parallel, thus in principle multiplying the opportunities to reach the destination (we assume that all copies are generated by the source node). Second, forwarding protocols can be classified based on the number of hops that they allow messages to traverse, or, in other words, based on a TTL computed on the number of hops. When the number of allowed hops is finite, the last relay can only deliver the message to the destination directly. Third, the amount of knowledge that each agent in the forwarding process can rely on (or is willing to collect and store) is an additional element for classifying forwarding strategies.
Focusing on the source node, there can be social-oblivious strategies in which the source node does not keep track at all of how the forwarding process progresses. In this case, in the configuration where the source node can generate up to m copies of the message, the m copies might end up all being handed to the exact same relay, thus eliminating the potential benefits of multi-copy forwarding. A memoryful source, instead, is able to guarantee the use of distinct relays. A similar problem holds for intermediate relays. Memoryless relays can forward the message to the same next hop more than once, because they are not aware of what happened in the past. Memoryful relays, on the other hand, possess this knowledge and are able to refuse the custody of messages that they have already relayed. Please note that we assume that the source node can never be handed messages that it has generated. This assumption simply takes into account the fact that the source identity is always enclosed in the message header, so this does
not require any additional knowledge besides what is already present in the system. Table 3 summarizes the feasible combinations (the ones marked with checkmarks) of the forwarding characteristics described above when social-oblivious schemes are considered. These combinations can be found in well-known routing strategies. For example, 1-hop 1-copy memoryless forwarding corresponds to the Direct Transmission strategy [7], in which the source node can only deliver the messages to the destination. The 2-hop 1-copy memoryless forwarding is equivalent to the Two Hop forwarding introduced in [25]. The 2-hop m-copy memoryful forwarding is equivalent to the multi-copy version of the Two Hop protocol studied in [20]. Please note that relays can be memoryful only when they have multiple forwarding choices. This is not the case when the number of hops is limited to either one (there is no relay in this case) or two (relays can only deliver the message to the destination).
Table 3. Summary of forwarding strategies.
In [14] we derive sufficient and necessary conditions on the shapes of the intermeeting time distributions for convergence of the various families of protocols highlighted in Table 3. We hereafter exemplify one of these cases, and then provide the final results for all classes, together with examples of practical applications of these results.
Let us consider the 2-hop 1-copy memoryless scheme. We can prove that the protocol converges iff both the following conditions hold:
(C1)  Σ_{j ∈ Ps} αsj > 1        (C2)  αjd > 2 for every j ∈ Ps
where s and d denote the source and destination nodes, respectively, and Ps denotes the set of candidate relays of s. The physical meaning of the conditions is quite intuitive. Recall that in the 2-hop 1-copy scheme the source hands over the only copy of the message to the first encountered node, which then has to relay it directly to the destination. Condition C1 guarantees that the first phase occurs within a finite expected time. Specifically, the source node encounters the first possible relay within a time that is distributed according to a Pareto law with shape Σ_{j ∈ Ps} αsj. Therefore, the first phase "converges" if the average value of this time is finite, which leads to condition C1. Condition C2 guarantees that whatever relay is chosen by s, it encounters the destination within a finite expected time (note that the time for such a relay to meet the destination is the residual of their intermeeting time, as the encounter process is asynchronous: node s meets the relay at a random point in time with respect to the meetings between the relay and the destination).
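Conditions of this kind lend themselves to a simple programmatic check. The sketch below tests C1 and C2 for the 2-hop 1-copy memoryless scheme as described in the text (function and variable names are ours; Ps is taken to be all nodes other than s and d):

```python
def two_hop_single_copy_converges(alpha, s, d):
    """C1: the sum of alpha_sj over candidate relays exceeds 1 (the source
    meets its first relay in finite expected time).
    C2: alpha_jd > 2 for every candidate relay (the residual intermeeting
    time towards d has finite expectation).
    alpha: dict (i, j) -> Pareto shape of the intermeeting time of i and j."""
    nodes = {i for i, _ in alpha} | {j for _, j in alpha}
    relays = nodes - {s, d}                      # P_s: candidate relays
    c1 = sum(alpha[(s, j)] for j in relays) > 1
    c2 = all(alpha[(j, d)] > 2 for j in relays)
    return c1 and c2
```

For a three-node network with shapes αs,relay = 1.5 and αrelay,d = 2.5, both conditions hold; lowering αrelay,d to 1.5 violates C2 and the expected delay diverges.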
Replicating the same methodology also for the other schemes, we obtain the conditions listed in Table 4.
Table 4. Convergence conditions.
Specifically, conditions C3 and C4 are as follows:
where m denotes the number of copies generated by the source, m* is a threshold on the number of copies (formally defined in [14]), and αi* denotes the i-th largest αsj with j ∈ Ps. C3 and C4 are needed only in the case of multi-copy forwarding. The threshold m* is such that if the source generates up to m* copies, all of them are handed over to m* distinct relays with finite expected delay, while if m exceeds m*, the additional copies cannot be handed over with finite expected delay. Condition C3 thus imposes that the source can actually relay m distinct copies of the message, while condition C4 guarantees that the destination meets at least one of the used relays with finite expected delay. These theoretical conditions can be used to decide which protocols to use given a configuration of intermeeting times. For example, let us consider the case of a network of N = 10 nodes, and define the following set of exponents
whose components are denoted α1, …, αN−1. We assume that a generic node i meets all the other nodes in such a way that αi,1 = α1, …, αi,N = αN−1. We also consider the case where the source node is node 1 and the destination node 10. According to the above results, the expected delay of Direct Transmission is not defined, because α1,10 = 1.3, whereas it should be greater than 2 for convergence. Analogously, the convergence condition for the 1-copy 2-hop scheme is not satisfied because of condition C2. The only scheme able to achieve a convergent expected delay is the m-copy 2-hop scheme, with m = 4. For the three forwarding strategies discussed above, we plot the empirical cumulative distribution function of the delay in Figure 17. As expected, in the case of the 4-copy 2-hop scheme, the great majority of messages (~99.9%) are delivered within a short time (100 s) of their generation. For both the 1-hop 1-copy and the 2-hop 1-copy schemes, instead, after 10,000 seconds there is still a large fraction (around 10%) of messages to be delivered. These long delays, predicted by our model, are those that cause the expected delay to diverge.
Figure 17. Example of delays with different forwarding strategies.
Starting from these results, we are currently extending them to take into consideration not only social-oblivious protocols, but also social-aware protocols, which use context information about the relays and the destination to make forwarding decisions. For example, they take into consideration the rate of encounters with the destination as a measure of fitness to relay towards it. We expect to report these results in the next deliverables of the work package.
4.2 Modelling the delay of opportunistic routing protocols
In this section, we describe the initial work carried out in WP3 to model the delay of opportunistic networking protocols. The results are currently under submission to an international journal (the paper has been accepted subject to major revisions), and are also available as a CNR Technical Report.
As introduced in the previous section, forwarding protocols in opportunistic networks can be classified as social-oblivious or social-aware. The simplest protocol in the first class is Epidemic forwarding [42], which generates and hands over a new copy of the message at each new encounter. The rationale behind this approach is to leverage as many routes to the destination as possible. Unfortunately, this greedy approach suffers from severe resource consumption and tends to overload the network [37]. Smarter, social-aware strategies, concerning whom to forward to and how many copies to generate, have therefore been devised. According to the type of information used when making forwarding decisions, these strategies can be further classified as partially social-aware [31][38] or fully social-aware [15][27][23]. They leverage information about the users, their contact dynamics, the environment they operate in, and the social relationships they share, in order to select one (or a set of) best next hop(s). As discussed in the previous section, depending on the number of copies generated for the same message, forwarding protocols can also be classified into single-copy or multi-copy schemes.
Despite the variety of practical forwarding solutions based on different heuristics to define social-aware policies (such as encounter frequency and sociality metrics), no general framework has so far been introduced for the structured analysis of opportunistic forwarding protocols. Some models exist in the literature (e.g., [44][1][38][39][30]), but they are specific to the protocols being studied and can hardly be re-used when the protocols change. The situation is even worse for social-aware schemes, which, despite their popularity, are typically difficult to model analytically. Moreover, the absence of a general consensus on some fundamental properties of user movement patterns (e.g., the distribution of inter-meeting times) makes it even harder to build a model on a solid basis.
The contribution of the work we report in this section is twofold. First, we introduce a general framework for the analysis of single-copy forwarding schemes. This model, based on Markov chains, allows us to compute significant quantities, such as the first and second moments of the number of hops and of the delay, which characterise the forwarding performance. These moments can then be used to approximate, e.g., the full distribution of the delay and of the number of hops. Clearly, the full distribution of the delay is more informative than its expectation alone, as it allows us to analyse, for example, the dependency of the delay on the TTL. The framework also accommodates social-awareness, which can be incorporated seamlessly into the model. In addition, it is independent of specific mobility assumptions, so it remains usable even as new insights on the way users move become available.
The second contribution is the instantiation of the framework in three different mobility scenarios. More specifically, we solve the framework exactly in the cases of exponential and power-law inter-meeting times, the two assumptions that have emerged as most popular in the literature [24][20][18]. In addition, we provide a complete solution for hyper-exponentially distributed inter-meeting times. The latter result is particularly significant, since the hyper-exponential distribution can approximate the behaviour of a large class of distributions, namely those with a coefficient of variation greater than 1. The coefficient of variation [40] is defined as the ratio between the standard deviation and the mean, and measures the dispersion of a probability distribution: the higher it is, the farther a sampled value can lie from the mean. High-variance distributions are extremely important in opportunistic networks for two reasons. First, they have often emerged as a plausible hypothesis for inter-meeting times (besides the power-law hypothesis, the LogNormal one has recently gained a lot of popularity [41]). Second, high-variance distributions can drastically affect the delay experienced by messages, causing the expectation of the delay to diverge in extreme cases, as discussed in Section 4.1.
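To make the approximation idea concrete, a two-phase hyper-exponential can be fitted to any target mean and coefficient of variation greater than 1 using the classical balanced-means recipe (see, e.g., Tijms [40]). The sketch below is illustrative; function names are ours.

```python
import math
import random

def fit_hyperexp(mean, cv):
    """Balanced-means fit of a 2-phase hyper-exponential to a target
    mean and coefficient of variation cv > 1 (cf. Tijms [40])."""
    assert cv > 1.0, "hyper-exponential fits require cv > 1"
    # Phase probability chosen so that each phase contributes mean/2.
    p = 0.5 * (1.0 + math.sqrt((cv ** 2 - 1.0) / (cv ** 2 + 1.0)))
    lam1 = 2.0 * p / mean          # fast phase, chosen with probability p
    lam2 = 2.0 * (1.0 - p) / mean  # slow phase, chosen with probability 1-p
    return p, lam1, lam2

def sample_intermeeting(p, lam1, lam2, rng=random):
    """Draw one inter-meeting time from the fitted distribution."""
    return rng.expovariate(lam1 if rng.random() < p else lam2)
```

For example, `fit_hyperexp(1.0, 2.0)` returns phase parameters whose analytical mean is exactly 1 and coefficient of variation exactly 2, so a high-variance inter-meeting process can be plugged into the framework without changing its structure.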
The characteristics of single-copy schemes have been analytically studied in the literature for social-oblivious strategies [38][20], but, to the best of our knowledge, the one we have proposed
is the first general framework that takes into account the social-awareness of the forwarding process. Moreover, results obtained for single-copy schemes are relevant to multi-copy schemes as well. Consider, for example, multi-copy schemes in which replication can occur only at the source node. Each copy travels along its path independently of the others. While the delivery from the source node to the first relays differs significantly from a single-copy delivery because of the multiple copies generated, from the first relay to the destination the delay can be approximated using single-copy results. The extension of the framework to the multi-copy case is currently under study.
4.2.1 General framework for modelling the delay

Due to its flexibility, we use a semi-Markov process with N states (N being the number of nodes in the network) to model the opportunistic forwarding process. A semi-Markov process is one that changes state in accordance with a Markov chain (called the embedded or jump chain) but where transitions between states can take a random amount of time with an arbitrary distribution [35]. As such, it is fully described by the transition matrix associated with its embedded chain and by T_i, i = 1, …, N, where T_i denotes the distribution of the time that the semi-Markov process spends in state i before making a transition. We express the semi-Markov process associated with single-copy message forwarding in terms of the embedded Markov chain in Figure 18.
Figure 18. Semi-‐Markov chain for the general delay modelling framework.
Assuming that node i is currently holding a message whose destination is d, the probability p_ij that node i will delegate the forwarding of the message to another node j (we omit the dependence on d for readability) is a function of both the likelihood of meeting node j and the probability that node i will hand over the message to node j according to the forwarding policy in use. The delay from node i to the destination can then be written as follows
D_id = T_ij + D_jd, with probability p_ij    (2)
where Tij denotes the time before node i hands over the message to node j conditioned on the fact that j is the first encountered suitable next hop for node i (corresponding to the time before the chain moves from state i to state j), and pij is the probability that node j is actually the first encountered suitable next hop for node i (a similar equation can be found for the number of hops). The first two moments of the delay can then be written as follows.
E[D_id] = E[T_i] + Σ_j p_ij E[D_jd]    (3)

E[D_id²] = E[T_i²] + Σ_j p_ij ( 2 E[T_ij] E[D_jd] + E[D_jd²] )    (4)
Equations (3) and (4) are extremely powerful, as they allow us to completely characterise the first two moments of the single-copy delay and number of hops. Knowing the first two moments, we can use, for example, the moment-matching approximation technique [40] to compute an approximate distribution of the delay, thus characterising it completely. Note that Equation (3) has an intuitive explanation: the expected value of the delay from node i is the expected time to exit from node i (because of an encounter
generating a forwarding event), plus the average delay from any possible relay j to node d, weighted by the probability p_ij that node i encounters relay j and forwards the message to it.
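In matrix form, Equation (3) is a linear system (I − P)·E[D] = E[T] over the non-destination states, so the expected delays can be obtained with a single linear solve. A minimal pure-Python sketch (the toy chain used in the usage note is hypothetical):

```python
def expected_delays(P, ET):
    """Solve E[D_i] = E[T_i] + sum_j P[i][j] * E[D_j] (Equation (3)).
    P is the embedded-chain transition matrix restricted to the
    non-destination states (transitions to the destination simply
    leave the system); ET[i] is the expected sojourn time E[T_i].
    Solves (I - P) d = ET by Gaussian elimination with pivoting."""
    n = len(ET)
    # Augmented matrix [I - P | ET].
    A = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(n)] + [ET[i]]
         for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    d = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        d[i] = (A[i][n] - sum(A[i][j] * d[j] for j in range(i + 1, n))) / A[i][i]
    return d
```

For a two-relay toy chain in which node 0 forwards to node 1 or to the destination with equal probability (E[T_0] = 1, E[T_1] = 2, node 1 always delivering directly), the solve yields E[D_0] = E[D_1] = 2.0.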
The first and second moments can be computed once p_ij, T_ij and T_i are characterised. This can still be done in general, i.e., irrespective of the specific forwarding protocol used. These expressions can then be customised into closed-form expressions for each specific protocol. To give a general idea, let us focus on the derivation of p_ij. Denoting by R_ij the residual inter-meeting time between nodes i and j, and by R_i the set of possible relays that i may consider for destination d according to the specific forwarding protocol, we obtain
p_ij = P( R_ij = min_{k ∈ R_i} R_ik )    (5)
Basically, Equation (5) states that the probability that node i uses j as a forwarder is the probability that j is the first node encountered by i among those that i would use as forwarders towards destination d.
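In the exponential case, Equation (5) has a well-known closed form: by memorylessness the residual inter-meeting times are again exponential, and the minimum of independent exponentials is attained by j with probability proportional to its rate. A sketch (the rate dictionary is a hypothetical input):

```python
def next_hop_probs(rates, relay_set):
    """p_ij for exponentially distributed inter-meeting times.
    rates[k] is the meeting rate lambda_ik between the current message
    holder i and candidate relay k. By memorylessness, R_ik is again
    Exp(lambda_ik), so the first relay encountered within relay_set is
    j with probability lambda_ij / sum_k lambda_ik (Equation (5))."""
    total = sum(rates[k] for k in relay_set)
    return {j: rates[j] / total for j in relay_set}
```

For instance, with rates {'a': 1.0, 'b': 3.0} the next hop is 'b' three times out of four, regardless of how long i has already waited.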
4.2.2 Using the general framework: concrete examples

In this section we exemplify how the proposed framework can be used to assess the performance of the Direct Transmission, Always Forward, Two Hop, Direct Acquaintance, and Social Forwarding schemes. Direct Transmission and Two Hop have already been introduced in Section 4.1. Always Forward is basically Epidemic Routing. Direct Acquaintance and Social Forwarding are representative of social-aware policies. Both forward along a gradient of fitness with respect to the given destination. In the former case, fitness is computed as the rate of direct encounters with the destination, while in the latter indirect contacts (i.e., contacts mediated by other nodes) are also considered. We consider two mobility scenarios, falling into the category of social-oriented mobility models (the reference class for opportunistic networks). The two scenarios are represented in Figure 19 (left) and (right). In both cases, nodes are divided into three communities. Most of the nodes move only inside their reference community, while a few nodes (travellers) move across different communities, thus acting as bridges among them (travellers are the only way for messages to travel across communities). In Scenario 1, each community has one traveller towards the other communities, while in Scenario 2 there is only one community with two travellers, one for each of the other communities. Clearly, Scenario 2 is much more challenging from a forwarding standpoint. In both scenarios we considered both exponential and Pareto inter-meeting times, fixing the average inter-meeting times of regular nodes and travellers appropriately.
Figure 19. Scenario 1 (left) and 2 (right).
Figure 20 shows the forwarding performance in terms of delay for Scenario 1 with exponential mobility. Specifically, we compute from the model the expected delay E[D_ij] for all pairs (i, j), and we plot in Figure 20 the distribution of the expected delay across all pairs. The Direct Transmission scheme suffers when the source and the destination of the message never get in touch directly, thus producing infinite delays. This is because, with Direct Transmission, nodes can only deliver their messages directly to the destination, thus missing all the opportunities offered by relaying: when the destination is never met, the message cannot be delivered. However, relaying does not always guarantee better performance in terms of expected delay, as the Two Hop case in Figure 20 shows. Recall that the expected
delay is a weighted average of the expected delay of each possible path. Thus, if there exists even a single path with infinite expected delay, the overall expected delay diverges. This is exactly what happens with the Two Hop strategy: due to the blind selection of the next hop, messages can take a wrong path at the first hop and then get stuck there, because the intermediate relay node never meets the destination. In this scenario, such a sequence of events is possible for all source-destination pairs (i, j) such that either (a) neither source node i nor destination node j is a traveler, and they do not belong to the same community, or (b) source node i is a traveler. In both cases some paths achieve a finite expected delay, but there are also paths with infinite expected delay, and the latter drag the overall expected delay to infinity. Comparing the Two Hop scheme with the Direct Transmission strategy, in case (a) the fraction of node pairs that experience an infinite expected delay is the same under both protocols. In case (b), instead, i.e., when source node i is a traveler, among the paths that the Two Hop scheme adds with respect to Direct Transmission there are some with infinite delay, and those paths drag the expected delay of the Two Hop scheme to infinity, even though the direct encounter between the traveler and the destination would have a finite expectation. As an example of the first case, consider a message whose source node is in community C1 and whose destination node is in community C2, with neither being a traveler. If the first encounter of the source node is with the traveler connecting C1 and C3, the message will be handed over to this node. However, this traveler never gets in touch directly with the destination in community C2, and the message will never be delivered.
As for the second case, when the traveler is the source of the message (with destination in community C1, for example), there is always a non-negligible probability that, at the time the message is generated, the traveler is roaming in a community (say, C3) different from the one in which the destination resides. In this case, the message will be handed over to the first encountered node, which, in our example, belongs to C3 and will never meet the destination.
Direct Acquaintance, Social Forwarding, and Always Forward are able to exploit the social bridges between communities and to hand over the message to the right node. The Always Forward approach, however, forwards completely at random, and many hops may be required before the message eventually finds, by chance, its destination. Social strategies, instead, are able to choose the relays providing the best trade-off between low delay and efficient use of resources. Note also that in this scenario Direct Acquaintance and Social Forwarding show the same performance. In fact, they differ only when the transitivity of contacts must be exploited for successful delivery, which is the case in the scenario discussed next.
Figure 20. Distribution of the delay in Scenario 1 (exponential mobility).
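The divergence argument for the Two Hop scheme can be quantified with a back-of-the-envelope computation. Under exponential mobility, the first node the source meets is decided by competing rates, and the message is stuck whenever that relay never meets the destination. The rates and layout below are hypothetical, chosen only to mirror a Scenario-1-like case (source in C1, destination in C2, neither a traveler):

```python
def two_hop_stuck_prob(n_regular, lam_local, lam_traveler):
    """Probability that a Two Hop message gets stuck forever.
    The source (a regular node in C1) hands the message to the first
    node it meets: one of n_regular other C1 regulars (rate lam_local
    each), the C1-C2 traveler, or the C1-C3 traveler (rate lam_traveler
    each). Only the C1-C2 traveler ever meets the destination in C2,
    so every other first encounter yields an infinite delay."""
    total = n_regular * lam_local + 2 * lam_traveler
    return 1.0 - lam_traveler / total
```

For example, with three other regular nodes and equal rates, `two_hop_stuck_prob(3, 1.0, 1.0)` gives 0.8: four out of five messages take a dead-end first hop, and that alone is enough to drag the expected delay to infinity.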
Figure 21 shows the same results for Scenario 2. The Direct Transmission, Two Hop, and Direct Acquaintance schemes are unable to deliver a subset of the messages. In the case of Direct Transmission, the reason lies in the absence of direct contacts between the source of a message and its destination. The Two Hop scheme again suffers from messages that move away from their source node and get stuck at intermediate relays. In the case of the Direct Acquaintance policy, losses are due to the fact that a node hands over a message to another node that has a higher probability of meeting
the destination, measured in terms of direct encounters only. The traveler that visits C2 never meets any node of C3 directly, so the Direct Acquaintance scheme does not consider it a good relay for destinations in C3. However, that traveler meets in C1 the other traveler that visits C3, and can thus be considered, indirectly, a good forwarder for C3 by nodes that roam only in C2. For this reason, a more efficient strategy should also consider the transitivity of opportunities (e.g., node a meets b, which in turn meets c, thus a can be considered a good relay for destination c). This transitivity of encounters is detected by the Social Forwarding strategy, which is therefore able to deliver all messages to their destinations. The Always Forward strategy is, as before, able to deliver all messages, but it uses many relays, even more than in the previous scenario. The reason is that, forwarding opportunities being so limited, with the Always Forward strategy the destination is typically found by chance only after many (bad) relays have been used.
Figure 21. Distribution of the delay in Scenario 2 (exponential mobility).
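The difference between the two social policies can be sketched as a difference in fitness functions. The rule below is illustrative only (it is not the exact metric used by the protocols above): direct fitness counts only encounters with the destination, whereas a transitive variant also credits the best two-hop opportunity, scored by its bottleneck rate with an arbitrary 0.5 discount.

```python
def direct_fitness(rates, node, dest):
    """Direct Acquaintance-style fitness: rate of direct encounters
    between `node` and `dest` (0 if they never meet)."""
    return rates.get((node, dest), 0.0)

def transitive_fitness(rates, node, dest, nodes):
    """Illustrative Social Forwarding-style fitness: also credit the
    best two-hop chain node -> k -> dest, scored by its bottleneck
    meeting rate and discounted (the 0.5 factor is an arbitrary choice)."""
    best_indirect = max(
        (min(rates.get((node, k), 0.0), rates.get((k, dest), 0.0))
         for k in nodes if k not in (node, dest)),
        default=0.0)
    return direct_fitness(rates, node, dest) + 0.5 * best_indirect
```

In a Scenario-2-like setting, the traveler that visits C2 has zero direct fitness for a destination in C3, but positive transitive fitness via the traveler that visits C3: this is precisely the transitivity that lets Social Forwarding deliver where Direct Acquaintance fails.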
5 Next steps

Although far from complete, we believe the results presented in this deliverable are relevant and interesting for the MOTO objectives, and for those of WP3 in particular. The three lines of work we have pursued and reported in this document have produced interesting results towards (i) characterising the limitations of LTE networks in providing sufficient throughput to individual users; (ii) defining a reference system for integrating LTE and opportunistic networks, which also highlights the key aspects to focus on in terms of capacity enhancements; and (iii) characterising the capacity of opportunistic networks.
Starting from these results, there are two main directions to pursue (recall that, as planned, this document does not cover the entire spectrum of WP3 activities, but is mainly related to Task 3.2). On the one hand, we need to complete the investigations in these three lines of research. Many aspects of LTE performance still require deeper investigation, such as multi-cell configurations and unsaturated traffic conditions, and we need analytical tools to predict LTE performance for a given scenario. We also need to complete the activities on modelling the capacity of opportunistic networks, and to refine the characterisation of the opportunities and performance limits of solutions like Push&Track.
On the other hand, we need to "put the individual pieces together". This will mainly be achieved by deriving models of the capacity of an integrated network (including both wireless infrastructures and opportunistic communications) and characterising the resulting capacity gain, taking systems like Push&Track as reference. These models will put tools in the hands of operators to decide how to configure the offloading process when additional capacity is needed and the infrastructure alone cannot cope with user demand.
These activities will be synergistic with the rest of the work package. In particular, as will be described in D3.2, work is already ongoing in T3.1 to characterise the impact of different contact patterns on the capacity of the network, with particular attention to duty cycling and energy-saving policies. Results from T3.1 will feed into the final model of the capacity of the opportunistic network and of the integrated network. In addition, the scheduling policies studied in T3.3 will benefit from these results, as scheduling decisions may also be based on the expected capacity available in the different parts of the network.
Finally, the results presented in this deliverable will start feeding the work in WP4 (which has just started) on the design of the control aspects of the offloading process and of the detailed protocols for data dissemination through device-to-device communication in the integrated infrastructure and opportunistic network.
References

[1] "ns-3 Model Library – Release ns-3.17", May 14, 2013.
[2] M. Hata, "Empirical formula for propagation loss in land mobile radio services", IEEE Trans. on Vehicular Technology, vol. 29, pp. 317-325, 1980.
[3] "Digital Mobile Radio: COST 231 View on the Evolution Towards 3rd Generation Systems", Commission of the European Communities, L-2920, Luxembourg, 1989.
[4] 3GPP TR 36.814, "E-UTRA Further advancements for E-UTRA physical layer aspects".
[5] 3GPP TS 36.211, "E-UTRA Physical Channels and Modulation".
[6] 3GPP TS 36.213, "E-UTRA Physical layer procedures".
[7] FemtoForum, LTE MAC Scheduler Interface Specification v1.11.
[8] 3GPP TS 36.104, "E-UTRA Base Station (BS) radio transmission and reception".
[9] S. Sesia, I. Toufik and M. Baker, "LTE - The UMTS Long Term Evolution - from theory to practice", Wiley, 2009.
[10] 3GPP R1-081483, "Conveying MCS and TB size via PDCCH".
[11] Francesco Capozzi, Giuseppe Piro, Luigi Alfredo Grieco, Gennaro Boggia, Pietro Camarda, "Downlink Packet Scheduling in LTE Cellular Networks: Key Design Issues and a Survey", IEEE Communications Surveys and Tutorials 15(2): 678-700, 2013.
[12] A. Al Hanbali, A. Kherani, P. Nain, "Simple models for the performance evaluation of a class of two-hop relay protocols", in: IFIP NETWORKING'07, 2007, pp. 191-202.
[13] M. Barbera, J. Stefa, A. Viana, M. de Amorim and M. Boc, "VIP delegation: Enabling VIPs to offload data in wireless social mobile networks", in ACM International Workshop on Challenged Networks, Las Vegas, NV, USA, 2011.
[14] Chiara Boldrini, Marco Conti and Andrea Passarella, "Less is More: Long Paths Do Not Help the Convergence of Social-Oblivious Forwarding in Opportunistic Networks", Third International Workshop on Mobile Opportunistic Networks (ACM MobiOpp 2012), Zurich, Switzerland, 15-16 March 2012.
[15] C. Boldrini, M. Conti, A. Passarella, "Exploiting users' social relations to forward data in opportunistic networks: The HiBOp solution", Pervasive and Mobile Computing 4 (5) (2008) 633-657.
[16] P. Bremaud, Gibbs fields, Monte Carlo simulation, and queues, Springer, 1999.
[17] S. Burleigh, A. Hooke, L. Torgerson, K. Fall, V. Cerf, B. Durst, K. Scott, and H. Weiss, "Delay-Tolerant Networking: An Approach to Interplanetary Internet", IEEE Comm. Magazine, vol. 41, no. 6, pp. 128-136, June 2003.
[18] H. Cai, D. Eun, "Crossing over the bounded domain: From exponential to power-law intermeeting time in mobile ad hoc networks", IEEE/ACM Transactions on Networking (TON) 17 (5) (2009) 1578-1591.
[19] I. Carreras, D. Miorandi and I. Chlamtac, "A framework for opportunistic forwarding in disconnected networks", in ICST International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), San Jose, CA, USA, 2006.
[20] A. Chaintreau, P. Hui, J. Crowcroft, C. Diot, R. Gass, and J. Scott, "Impact of human mobility on opportunistic forwarding algorithms", IEEE Trans. Mobile Comput., pp. 606-620, 2007.
[21] Y. Chen, V. Borrel, M. Ammar, and E. Zegura, "A Framework for Characterizing the Wireless and Mobile Network Continuum", ACM SIGCOMM Computer Comm. Rev., vol. 41, pp. 5-13, 2011.
[22] V. Conan, J. Leguay, and T. Friedman, "Characterizing pairwise inter-contact patterns in delay tolerant networks", in Autonomics'07, 2007.
[23] E. Daly, M. Haahr, "Social network analysis for information flow in disconnected Delay-Tolerant MANETs", IEEE Trans. Mobile Comput. (2008) 606-621.
[24] W. Gao, Q. Li, B. Zhao, and G. Cao, "Multicasting in delay tolerant networks: a social network perspective", in ACM MobiHoc'09, pp. 299-308, ACM, 2009.
[25] M. Grossglauser and D. Tse, "Mobility increases the capacity of ad hoc wireless networks", IEEE/ACM Trans. on Netw., 10(4):477-486, 2002.
[26] B. Han, P. Hui, V. Kumar, M. Marathe, J. Shao and A. Srinivasan, "Mobile data offloading through opportunistic communications", IEEE Trans. Mobile Comput., vol. 11, no. 15, pp. 821-834, 2011.
[27] P. Hui, J. Crowcroft, E. Yoneki, "Bubble rap: Social-based forwarding in delay tolerant networks", IEEE Trans. Mobile Comput. 10 (11) (2011) 1576-1589.
[28] S. Jain, K. Fall, and R. Patra, "Routing in a Delay Tolerant Network", Proc. ACM SIGCOMM, 2004.
[29] M. Karaliopoulos, "Assessing the vulnerability of DTN data relaying", IEEE Communications Letters, vol. 13, no. 12, pp. 923-925, 2009.
[30] C. Lee, D. Eun, "Exploiting Heterogeneity in Mobile Opportunistic Networks: An Analytic Approach", in: IEEE SECON'10, IEEE, 2010, pp. 1-9.
[31] A. Lindgren, A. Doria, O. Schelen, "Probabilistic routing in intermittently connected networks", LNCS (2004) 239-254.
[32] L. Pelusi, A. Passarella, and M. Conti, "Opportunistic Networking: Data Forwarding in Disconnected Mobile Ad Hoc Networks", IEEE Comm. Magazine, vol. 44, no. 11, pp. 134-141, Nov. 2006.
[33] A. Picu, T. Spyropoulos and T. Hossmann, "An analysis of the information spreading delay in heterogeneous mobility DTNs", in IEEE WoWMoM, San Francisco, CA, USA, 2012.
[34] I. Rhee, M. Shin, S. Hong, K. Lee, S. Kim, and S. Chong, "On the Levy-walk nature of human mobility", IEEE/ACM Trans. on Netw., 19(3):630-643, 2011.
[35] S. Ross, Introduction to probability models, Academic Press, 2007.
[36] M. Seshadri, S. Machiraju, A. Sridharan, J. Bolot, C. Faloutsos, and J. Leskovec, "Mobile call graphs: beyond power-law and lognormal distributions", in ACM SIGKDD'08, pp. 596-604, ACM, 2008.
[37] T. Spyropoulos, K. Psounis, C. Raghavendra, "Efficient routing in intermittently connected mobile networks: The multiple-copy case", IEEE/ACM Trans. on Netw. 16 (1) (2008) 77-90.
[38] T. Spyropoulos, K. Psounis, C. Raghavendra, "Efficient routing in intermittently connected mobile networks: The single copy case", IEEE/ACM Trans. on Netw. 16 (1) (2008) 63-76.
[39] T. Spyropoulos, T. Turletti, K. Obraczka, "Routing in Delay-Tolerant Networks Comprising Heterogeneous Node Populations", IEEE Trans. Mobile Comput. (2009) 1132-1147.
[40] H. Tijms, A first course in stochastic models, Wiley, 2003.
[41] P. Tournoux, J. Leguay, F. Benbadis, V. Conan, M. De Amorim, J. Whitbeck, "The accordion phenomenon: Analysis, characterization, and impact on DTN routing", in: INFOCOM 2009, IEEE, 2009, pp. 1116-1124.
[42] A. Vahdat, D. Becker, "Epidemic routing for partially connected ad hoc networks", Tech. rep., 2000.
[43] John Whitbeck, Marcelo Dias de Amorim, Yoann Lopez, Jeremie Leguay, Vania Conan, "Relieving the wireless infrastructure: When opportunistic networks meet guaranteed delays", WoWMoM 2011: 1-10.
[44] X. Zhang, G. Neglia, J. Kurose, D. Towsley, "Performance modeling of epidemic routing", Computer Networks 51 (10) (2007) 2867-2891.
DISCLAIMER

The information in this document is provided "as is", and no guarantee or warranty is given that the information is fit for any particular purpose. The above referenced consortium members shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials subject to any liability which is mandatory due to applicable law. Copyright 2013 by Thales Communications & Security SA, Consiglio Nazionale delle Ricerche, Asociación de Empresas Tecnológicas Innovalia, Université Pierre et Marie Curie - Paris 6, FON wireless ltd, Avea Iletisim Hizmetleri As, Centro Ricerche Fiat Scpa, Intecs Informatica e Tecnologia del Software s.p.a. All rights reserved.