Caching Using Software-Defined Networking in LTE Networks

Mael Kimmerlin, Jose Costa-Requena, Jukka Manner, Damien Saucez, Yago Sanchez

HAL Id: hal-01117447, https://hal.inria.fr/hal-01117447v2. Submitted on 24 Feb 2015.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

To cite this version: Mael Kimmerlin, Jose Costa-Requena, Jukka Manner, Damien Saucez, Yago Sanchez. Caching Using Software-Defined Networking in LTE Networks. 2015. hal-01117447v2



Caching Using Software-Defined Networking in LTE Networks

Mael Kimmerlin, Aalto University, School of Electrical Engineering, Otakaari 5, 02150, [email protected]
Jose Costa-Requena, Aalto University, School of Electrical Engineering, Otakaari 5, 02150, [email protected]
Jukka Manner, Aalto University, School of Electrical Engineering, Otakaari 5, 02150, [email protected]
Damien Saucez, Inria, Sophia Antipolis, France, [email protected]
Yago Sanchez, Fraunhofer HHI, Einsteinufer 37, 10587 Berlin, Germany, [email protected]

ABSTRACT
Data consumption is increasing rapidly in mobile networks. The cost of network infrastructure is increasing as well, which will lead to an "end of profit" within the next few years. Mobile operators therefore require technology that increases network capacity at low cost, and caching in their backhaul networks is the most evident solution. However, the current LTE network architecture does not provide sufficient flexibility to place caches in the most suitable locations. Current work on Software-Defined Networking in Evolved Packet Core virtualization has enabled us to dynamically integrate caching functionality into an LTE network and improve the caching system performance. In this paper, we present the solution we designed for this aim, based on Software-Defined Networking technology. Moreover, we built a testbed as a proof of concept and present a performance analysis of this solution.

Categories and Subject Descriptors
H.3.4 [Information Systems]: Systems and Software: distributed systems, information networks; D.2.8 [Software Engineering]: Metrics: performance measures

General Terms
Caching, Software-Defined Networks, LTE networks

Keywords
caching, Software-Defined Networking, LTE networks, content relocation

1. INTRODUCTION
The explosion of data traffic in mobile networks, due to the massive adoption of smartphones and tablets, creates new challenges for network operators. Customers keep asking for more bandwidth, but they are not willing to pay more for their mobile data plans. Unfortunately, to increase bandwidth, operators must invest (e.g., re-provision their backhaul, deploy new technology) but cannot echo the investment in the data plan price, progressively leading to an "end of profit" for network operators in 2015 according to a recent study [9]. In this new context, it is essential to optimise network capacity usage in order to reduce operational expenditure (OPEX). The best solution to improve the capacity usage of the network is to place caches at strategic locations within the network. Such caches are commonly used by Internet Service Providers (ISPs) in wired networks but are largely ineffective in mobile LTE networks. The reason is that LTE networks rely on tunnels to allow seamless mobility (i.e., without communication interruption) [?]. The usage of tunnels between the base stations, where users connect their devices to the ISP network, and the gateways, which connect the ISP network to the Internet, makes it difficult to place caches in the most appropriate locations in the LTE network. The presence of tunnels indeed implies that caches are located at the ends of the tunnels, which in practice means either in the base stations or in the ISP core network. Such centralisation of caches is not adequate because of the distributed and random nature of content consumption. The general research trend today for wired networks is to propose in-network caching, where each network component disposes of storage and opportunistically caches contents passing through it [?]. In-network caching leverages traffic locality so that contents are likely to be delivered from caches located close to the consumers. Applying such a model to LTE networks would result in an overall reduction of backhaul link usage.

To overcome the limitations of mobile networks, and more particularly the operational constraints imposed by LTE, we propose to use a Software Defined Networking (SDN)


approach to remove the need for establishing tunnels. With SDN, packets are forwarded according to rules installed on the switches by a centralised controller [15]. The solution we propose enables in-network caching in the core and backhaul of the LTE provider network. As packets are not encapsulated, routers have direct access to the IP packets as sent by end-hosts and can thus apply caching. Moreover, the fact that forwarding rules are installed by a controller instead of a distributed routing algorithm makes the system scale, as signaling messages are not flooded in the network but sent directly to the appropriate switches.

The contribution of this paper is three-fold:

• we propose an architecture to enable in-network caching in LTE networks by replacing the tunnels that are established to support mobility with a programmable data plane controlled by an SDN controller;

• we propose an optimisation mechanism, formalised as an Integer Linear Program, to determine the best caching decision, taking into account the various constraints of the LTE network;

• we propose a closed-loop mechanism to dynamically and autonomously adjust caching decisions according to the traffic dynamics inherent to mobile networks.

The paper is organised as follows. Sec. 2 analyses the related work. Sec. 3 precisely defines our SDN architecture, which removes the need for tunnelling packets in the backhaul, and the technical choices that have to be respected to keep the system compliant with the LTE protocol suite. Sec. 4 formalises as an Integer Linear Program the optimal caching decision scheme for SDN-based LTE networks. Sec. 4.1 proposes a closed-loop algorithm to adjust caching decisions to the traffic dynamics induced by mobility. Our architecture and caching decision optimisation are evaluated with extensive simulations in Sec. ??, which use the Gauss-Markov model [?] to simulate mobility, and Sec. 5 concludes our work.

2. RELATED WORK
Previous works, such as [6], have investigated which cache locations could be the most beneficial for operators. It was observed that the further from the edge, the more cache hits occur and the easier the cache is to manage, but also the more resources are consumed. It was also found that caching at the base station decreases the number of hits because of the small number of users per base station. However, distributed caching systems, which would increase the number of cache hits, were not considered.

Such distributed solutions are used in Information-Centric Networks (ICN). Previous works [7, 12] have focused on ICN and Software Defined Networks (SDN), aiming at merging both technologies for higher caching performance. As such, a transparent content management layer [7] has been designed to integrate SDN-based ICN in mobile networks. This layer analyzes the clients' requests through a proxy, and the SDN controller selects the locations where the contents are stored. The controller then takes charge of establishing the rules to forward the requests and responses.

This proposal requires a lot of computational resources to implement the SDN logic that manages the transactions to access the cached content. With this approach, the SDN functionality might become a bottleneck, since the mobile network control plane already supports a high signaling load for connection setup, i.e., user attachment and handover procedures. Therefore, we propose a caching system with a similar approach based on a content management layer, but with an optimised usage of the SDN functionality that overcomes the complexity issue and reduces the required computational resources.

3. LTE IN-NETWORK CACHING ARCHITECTURE

The purpose of our LTE in-network caching architecture is to direct content retrieval requests to the most appropriate cache instead of sending them directly to the server providing the requested content. Three classes of solutions exist to achieve this goal. First, the network can ask the content requester to change the destination of its request so that it is directed to the cache instead of the server. This solution is widely adopted by Content Delivery Networks to balance the load between their servers. Its drawbacks are that it is specific to the application-layer protocol used to request a content and that it requires additional communication between the mobile device that requests the content and the entity that suggests the redirection, whereas our objective is to reduce as much as possible the traffic at the edge and in the backhaul. Another option is to ask the base station, the eNodeB, to tunnel the traffic directly to the cache adequate for the requested content. Alas, tunnelling prevents in-network caching, as intermediate nodes between the eNodeB and the cache only see opaque packets passing through them and cannot determine to which content they are related. The possibility we explore is to ensure directly at the forwarding level that all packets of the flow corresponding to a particular request are forwarded to the appropriate cache. In this situation, packets are moved in the network on a flow basis instead of a destination basis. The common approach used in SDN to achieve this goal is to dynamically set up a forwarding path for each flow. This approach is ideal as it gives the highest possible flexibility. In this work, we adopt this last solution.

We identify three components to implement a Software Defined Networking caching system for an LTE infrastructure: (i) the Caching component, composed of the storage itself and a logic unit deciding where and when content must be cached; (ii) the Forwarding component, which ensures that packets are forwarded to the most appropriate cache; and (iii) the Mapping component, which binds forwarding and caching together.

The role of the mapping component is essential. Indeed, for efficiency reasons the forwarding plane cannot operate at the application layer. The reason is that forwarding must be accomplished at link rate, which is only possible when lookups are performed on fixed-length fields found at well-defined locations in packets. We thus have to limit the granularity of the forwarding plane to the lowest layers of the network stack. However, the network layer of a packet only provides information about the server from which the content can be retrieved, and nothing about the content itself,


and several contents can be provided by the same server. The mapping component thus has to analyse the application-layer information contained in the packet to determine the content that is requested, and then translate this information into information that can be processed at the networking layer. To do so, the mapping component uses information from both the caching and forwarding components. The caching component indicates which cache must be used to retrieve the content, while the forwarding component tells how to reach that cache. With this information, the mapping component annotates packets so that they can be forwarded to the appropriate cache without requiring routers to be aware of the content.
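The binding between the three components can be sketched as follows. This is an illustrative sketch, not the authors' implementation: all class and attribute names are assumptions, and the "annotation" is an opaque value standing for whatever network-layer tag the forwarding plane matches on.

```python
# Hypothetical sketch of the three components described above. The mapping
# component binds the caching component (which cache holds a content) to the
# forwarding component (how to reach that cache), and annotates packets so
# that switches never need to inspect application-layer data.

class CachingComponent:
    """Decides which cache stores each content."""
    def __init__(self):
        self.content_to_cache = {}          # content id -> cache id

    def cache_for(self, content_id):
        return self.content_to_cache.get(content_id)

class ForwardingComponent:
    """Knows how to reach each cache (here: an opaque annotation/prefix)."""
    def __init__(self):
        self.cache_to_annotation = {}       # cache id -> annotation

    def annotation_for(self, cache_id):
        return self.cache_to_annotation[cache_id]

class MappingComponent:
    """Translates application-layer content info into a network annotation."""
    def __init__(self, caching, forwarding):
        self.caching = caching
        self.forwarding = forwarding

    def annotate(self, content_id):
        cache = self.caching.cache_for(content_id)
        if cache is None:
            return None                     # not cached: forward to the origin
        return self.forwarding.annotation_for(cache)
```

For example, once a content "video42" is bound to "cache-2" and that cache to a prefix, `annotate("video42")` returns the prefix, and an unknown content returns `None` so the request goes to the origin server.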

3.1 System implementation
To implement our LTE in-network caching architecture, we rely on OpenFlow [15]. OpenFlow implements the SDN concept with a communication protocol that permits a centralized controller to install specific forwarding rules on the different switches composing the network. A rule associates a predicate with an action. If a packet traversing a switch satisfies the predicate of a rule, the switch executes the action associated with the matching rule, such as switching the packet to a particular port or modifying the packet. While it is possible to set up a different forwarding path for every flow with OpenFlow, it would come at the expense of a high signaling overhead, and our objective is to minimize the traffic at the edge and backhaul of LTE networks. We therefore relax this by forwarding packets based on annotations: packets are annotated based on the cache the request should be directed to. The advantage is that forwarding entries are independent of the flow dynamics and can be pre-installed, hence minimizing the signaling overhead. The annotation is performed by the mapping component, which is functionally equivalent to a deep-packet inspector. The packet is inspected by the mapping element when it enters the network to determine the content it is related to, and it is tagged with the annotation corresponding to the cache it has to be transmitted to. After this first inspection step, the packet is natively forwarded in the network.

Different packet annotation solutions are supported by OpenFlow, such as VLAN tags or MPLS labels. However, they all require every node in the network to support the mechanism. To reduce the costs, we hence decide to rely on native IP forwarding. For that, we map a specific object to an IPv6 address. This IPv6 address becomes the temporary identifier of the object. Since we use the IP fields in a different way, the internal cache network is isolated from the usual traffic. We select an initial IPv6 prefix covering all objects, which is divided into sub-prefixes, one per cache. Those sub-prefixes are assigned to the caches. As a result, the IP address prefix is used for forwarding decisions in the network, while the host part of the address is used by the various caching components to identify the content related to the packet without having to inspect the packet. The IPv6 address used as identifier is then unique per cache and per content type. The mapping component is the entry point of the caching infrastructure, through which every packet from mobile clients passes. The mapping component is located directly at the eNodeB and inspects each packet to determine the content it is related to. It then queries the caching component to determine the cache to which the packet must be forwarded and the forwarding component to know the IPv6 prefix attached to that cache. The packet is then forwarded in the network using the IPv6 address.

Fig. 1 summarizes the operations: the client sends a request to server S; the eNodeB (mapping component) extracts the content information (e.g., the URL) and asks the caching controller which identifier and cache to use; the caching controller identifies the content I and the cache C, obtains the prefix P of C from the OpenFlow controller (which sets the forwarding rules), and constructs the address P::I; the mapping component translates the destination to P::I and forwards the request, which the network routes to the cache; the cache serves the content and the reply (source = P::I) has its source translated back to S before reaching the client. If the content is not cacheable, the client request is forwarded unchanged to the server (destination = S).

Figure 1: LTE caching architecture workflow

4. CACHING ALLOCATION
Sec. 3 presented an SDN-based architecture to enable in-network caching in LTE networks. In this section, we provide an allocation scheme to optimally decide where to cache contents in the network. We follow an offline approach where the caching logic calculates an optimal location for each object based on its expected demand in the near future.

To that aim, we use an Integer Linear Program formulation.

Let G = (V, E) be a finite directed graph where every edge (i.e., link) e ∈ E has capacity $c_e \ge 0$ and each vertex (i.e., network node) v ∈ V has storage memory $m_v \ge 0$. Let O be the set of objects, of size $s_o$. The objective is to find a $|O| \times |V|$ binary allocation matrix A where $A_{o,v} \in \{0, 1\}$ indicates whether or not object o ∈ O must be stored on the network node v ∈ V.

The storage limitation is accounted for with the following constraint:

$$\forall v \in V:\ \sum_{o \in O} A_{o,v}\, s_o \le m_v \quad (1)$$

Link capacity is accounted for with

$$\forall l \in E:\ \sum_{i \in V} \sum_{o \in O} \left[ \sum_{e \in V} A_{o,e}\, B_{o,i,e,l} + \left(1 - \sum_{e \in V} A_{o,e}\right) B_{o,i,d_o,l} \right] \le c_l \quad (2)$$

with $B_{o,i,e,l} = up_{o,i}\, P_{i,e,l} + down_{o,i}\, P_{e,i,l}$, where $P_{s,d,l} \in \{0, 1\}$ indicates whether edge l ∈ E is on the shortest path between s ∈ V and d ∈ V, and $down_{o,n} \ge 0$ (resp. $up_{o,n} \ge 0$) is the expected download (resp. upload) transfer rate observed for object o ∈ O issued from node n ∈ V. $d_o$ is the location from which object o ∈ O must be retrieved if it is not stored in the network.

The first term of Eq. 2 accounts for the bandwidth consumed on the links on the path to the storage position in the network, while the second term accounts for the traffic of objects that have to be retrieved from their original server. It is worth noticing that we neglect the bandwidth consumed to copy objects to their local storage node.

To ensure that an object is stored in at most one location in the network, we add the following constraint:

$$\forall o \in O:\ \sum_{n \in V} A_{o,n} \le 1 \quad (3)$$

If the objective of using caches in the network is to minimize the network load, then the goal is to find an allocation matrix that minimizes the following function while respecting the constraints given above:

$$\min \sum_{o \in O} \left[ \sum_{e \in V} A_{o,e} \left( p_{a_o,e} + \sum_{i \in V} \lambda_{o,i}\, p_{i,e} \right) + \left( \prod_{e \in V} (1 - A_{o,e}) \right) \sum_{i \in V} \lambda_{o,i}\, p_{i,d_o} \right] \quad (4)$$

which, thanks to constraint (3), can be written in the linear form

$$\min \sum_{o \in O} \left[ \sum_{e \in V} A_{o,e} \left( p_{a_o,e} + \sum_{i \in V} \lambda_{o,i}\, p_{i,e} \right) + \left( 1 - \sum_{e \in V} A_{o,e} \right) \sum_{i \in V} \lambda_{o,i}\, p_{i,d_o} \right] \quad (5)$$

where $\lambda_{o,i}$ is the demand for object o ∈ O issued from node i ∈ V, $p_{s,d}$ is the cost associated with the path between nodes s ∈ V and d ∈ V⁺, with V⁺ being the union of V and the set of servers, and $a_o$ is the location from which object o ∈ O was retrieved before the relocation.
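On toy instances, the allocation problem can be solved by exhaustive search, which is useful for sanity-checking an ILP solver. The sketch below enumerates all binary allocation matrices, keeps those satisfying the storage constraint (1) and the single-copy constraint (3), and minimizes the linearized objective (5). It simplifies the model by taking the previous location $a_o$ equal to the origin $d_o$ and ignoring the link-capacity constraint (2); all names are illustrative.

```python
# Brute-force check of the caching allocation problem on a toy instance,
# assuming given path costs p[s][d] and demands lam[o][i]. Simplification:
# a_o = d_o = origin[o], and constraint (2) is omitted. Real instances
# would use an ILP solver instead of enumeration.
from itertools import product

def best_allocation(objects, nodes, size, mem, lam, p, origin):
    best, best_cost = None, float("inf")
    for flat in product((0, 1), repeat=len(objects) * len(nodes)):
        A = {o: {v: flat[i * len(nodes) + j] for j, v in enumerate(nodes)}
             for i, o in enumerate(objects)}
        # constraint (3): at most one copy of each object
        if any(sum(A[o].values()) > 1 for o in objects):
            continue
        # constraint (1): storage capacity of each node
        if any(sum(A[o][v] * size[o] for o in objects) > mem[v]
               for v in nodes):
            continue
        cost = 0.0
        for o in objects:
            stored = sum(A[o].values())
            for e in nodes:
                if A[o][e]:
                    cost += p[origin[o]][e]   # copy object from a_o to cache e
                    cost += sum(lam[o][i] * p[i][e] for i in nodes)
            if not stored:                    # object served from origin d_o
                cost += sum(lam[o][i] * p[i][origin[o]] for i in nodes)
        if cost < best_cost:
            best, best_cost = A, cost
    return best, best_cost
```

For instance, with one object demanded three times from node v1, cheap paths inside the network, and an expensive path to the origin server, the search places the object on v1.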

4.1 Dynamic caching allocation adjustment
The offline allocation model we propose must be recomputed periodically to take into account the traffic dynamics caused by user mobility and the variation of object demand over time. We propose to automatically adjust the recomputation period to the traffic dynamics with a feedback control algorithm inspired by the additive-increase/multiplicative-decrease (AIMD) congestion avoidance algorithm of TCP [16]. The period to wait until the next relocation is determined when the new allocation is computed. This period is derived from the relative difference between the value of the objective function computed with the optimal allocation matrix for the current period and its value under the previous allocation matrix. If the difference is minor (i.e., below a threshold fixed by configuration), the relocation was not necessary and the period is linearly increased (i.e., additive increase with a factor α > 0) to reduce the relocation frequency. On the contrary, if the values significantly differ, the period between two relocations was too long and it is abruptly reduced (i.e., multiplicative decrease with a factor 0 < β < 1) to increase the relocation frequency. The operator can fix the lower and upper bounds.

Data: period: relocation period of the previous run
Data: A: allocation matrix of the previous run
Data: A': allocation matrix for the current run
Data: obj(·): objective function for the current run
Data: τ: relative difference threshold
Data: min_period: minimum time between two relocations
Data: max_period: maximum time between two relocations
Result: period of time between two relocations

if |obj(A) - obj(A')| / obj(A') < τ then
    period ← period + α
else
    period ← period × β
end
return min(max_period, max(min_period, period))

Algorithm 1: Relocation period computation
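Algorithm 1 translates directly into a few lines of code. The sketch below follows the text; the parameter names mirror the algorithm (τ, α, β), and the concrete values used in the comment are examples, not values from the paper.

```python
# Sketch of the AIMD relocation-period adjustment of Algorithm 1.
def next_relocation_period(period, obj_prev, obj_cur, tau,
                           alpha, beta, min_period, max_period):
    """obj_prev / obj_cur: objective values under the previous and current
    allocation matrices; tau: relative-difference threshold; alpha > 0;
    0 < beta < 1; the result is clamped to [min_period, max_period]."""
    if abs(obj_prev - obj_cur) / obj_cur < tau:
        period = period + alpha      # allocation barely changed: relocate less often
    else:
        period = period * beta       # allocation was stale: relocate sooner
    return min(max_period, max(min_period, period))
```

For example, with α = 10, β = 0.5 and τ = 0.05, a 1% change of the objective lengthens a 60 s period to 70 s, while a 50% change halves it to 30 s.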

5. CONCLUSIONS
Mobile operators are currently interested in deploying caching solutions to bring contents closer to the users, both to improve their quality of experience and to reduce OPEX. To help the adoption of caches in LTE networks, we propose a caching system architecture leveraging the Software Defined Networking principle. The role of SDN is primarily to enable the removal of GTP tunnelling, thus removing all limitations on where caches can be placed in the network. We also propose an optimal caching allocation scheme. As traffic and devices are dynamic within LTE networks, we propose a caching relocation system based on a simple, hence implementable, feedback loop algorithm.


Figure 2: The Operation Flow in the Caching System.

5.1 The Infrastructure of the System
The design of our system includes SDN nodes, which consist of the SDN controller and SDN switches, plus three additional elements: a web proxy performing traffic analysis, the caches, and the caching controllers. The system design can be improved by integrating the traffic analyzers with the caches. The caching controller includes a module interfacing with the SDN controller, which sets the rules on the SDN switches. Figure 2 presents the proposed system design and the above-mentioned elements.

All the SDN switches are configured with a default rule (e.g., all requests with destination port 80) that redirects to the proxy the first request of a client to a website that was not already fetched. The proxy analyzes the request, gets the name of the destination host to which the original web request was sent, and sends it to the cache controller. The cache controller resolves the host name received from the proxy. The same host name might be associated with multiple IP addresses because the destination web server uses several IP addresses for load balancing. The cache controller associates the resolved IP addresses with a cache location. Next, the cache controller interacts with the SDN controller to set new rules in the switches that redirect to the caches all the HTTP traffic whose destination address matches one of the IP addresses resolved from the host name. Thus, the system maintains a correspondence between destination IP addresses and the cache locations where the objects are stored. Subsequent requests are directly redirected to the cache storing the content for the destination website.
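The cache-controller bookkeeping just described can be sketched as follows. This is an illustrative sketch, not the authors' code: DNS resolution is stubbed out so the example is self-contained, and the rule format (match/action dictionaries) is an assumption standing for whatever the SDN controller actually installs.

```python
# Hypothetical sketch of the cache controller: a hostname reported by the
# proxy is resolved to its IP addresses (one host may have several, for
# load balancing), each address is bound to a cache location, and redirect
# rules are produced for the SDN controller to install on the switches.

class CacheController:
    def __init__(self, resolver, pick_cache):
        self.resolver = resolver        # hostname -> list of IPs (stubbed)
        self.pick_cache = pick_cache    # hostname -> cache id
        self.ip_to_cache = {}           # installed redirections

    def handle_new_host(self, hostname):
        """Return the (match, action) rules to push to the SDN controller."""
        cache = self.pick_cache(hostname)
        rules = []
        for ip in self.resolver(hostname):
            self.ip_to_cache[ip] = cache
            rules.append(({"dst_ip": ip, "dst_port": 80},
                          {"redirect_to": cache}))
        return rules
```

With a stub resolver returning two addresses for a host, `handle_new_host` produces one redirect rule per address, all pointing to the same cache.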

The advantage of this design is that it makes it easy to relocate an object from one cache to another. Apart from transferring the data from one cache to the other, changing the rules on the SDN switches to change the redirection is a simple operation.

5.2 System Implementation
In order to prove the efficiency of the proposed design, Figure 3 shows the testbed we implemented, including all the required elements. We performed different test cases using HLS (HTTP Live Streaming) video downloaded by the mobile clients. The test cases consist of several users distributed between different base stations and consuming a

Figure 3: Testbed.

streaming service at a 2 Mb/s bitrate. In the test cases, we consider two base stations with seven users (5 on the first and 2 on the second). The SDN switches are Open vSwitch [13] bridges and the caches are instances of Traffic Server [2] with a dedicated module. The analyzer is included in the Traffic Server module in order to make the process more efficient, by reducing the number of intermediaries in case of locally cached content. We run different test cases which consist of adding artificial congestion (i.e., we use the netem and qdisc commands to increase the delay on selected Open vSwitch links) in some segments of the network, to the effect of having different numbers of users on different base stations. The first test case fetches the objects of the website directly from the public Internet outside the mobile network; this case is used as a reference to compare the load generated by the other test cases when content is cached in different locations, as shown in Figure 3.

6. RESULTS ANALYSIS
6.1 Performance of the System
In order to quantify the impact of our caching system on the network, we consider the sum of the loads of all the links as the network load metric. Thus, a request of 1 MB going through 7 links generates a network load of 7 MB. This network load metric is used to quantify the reduction of the load in the whole network when introducing the caches. The metric also includes the resource consumption required to move contents. We quantify this network load metric as

$$\sum_{i=0}^{N} \sum_{j=0}^{n} x_i\, y_{i,j} + \sum_{i=0}^{N} \sum_{j=0}^{n} x_i\, z_{i,j}$$

with N being the number of requests issued from all the base stations, n the number of links, $x_i$ the size of the response, $y_{i,j}$ being 1 if the response goes through the link and 0 otherwise, and $z_{i,j}$ being 1 if the content has been moved between caches through the link and 0 otherwise.
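The metric above is a straightforward double sum; a minimal sketch (input names are illustrative, not from the paper):

```python
# Computing the network-load metric defined above: each response of size
# x_i contributes its size on every link it traverses for delivery (y),
# plus on every link used to move the content between caches (z).
def network_load(sizes, traversed, moved):
    """sizes[i]: size of response i (e.g., in MB); traversed[i][j] and
    moved[i][j]: 1 if response i crossed link j for delivery / relocation."""
    n_requests = len(sizes)
    n_links = len(traversed[0]) if traversed else 0
    return sum(sizes[i] * (traversed[i][j] + moved[i][j])
               for i in range(n_requests) for j in range(n_links))
```

The 1 MB response crossing 7 links from the text yields a load of 7 MB when no relocation traffic is involved.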

Figure 4 shows the results for the network load metric. We can see the percentage of load reduction achieved for the different test cases where we change the location of the cached content, as depicted in Figure 3. The results show that, unsurprisingly, locating the content in the cache closest to the larger number of users provides the best load reduction.

Next, we consider the variation of the network load reduction in relation to the distribution of the users. In this case the number of users remains constant (7 users), but their location changes from one base station to another. Figure 5 presents the network load reduction achieved for each cache location as the users move. The results show that the same cache can achieve both the worst and the best load reduction: a badly selected location that is not up to date with the users' locations and requests increases the load and degrades the performance of the whole system. This demonstrates the need for short relocation periods, since outdated cache locations can damage the performance.

Figure 4: Load reduction depending on caching location (network load reduction in %, for caches 2 to 5).

Figure 5: Load reduction depending on the repartition of the users (load reduction in % versus percentage of users on eNodeB1 vs. eNodeB2, for caches 2 to 5).

6.2 Proposed System Optimizations
6.2.1 Main Process

After extensive testing, we noticed that the proposed design has some flaws. The main drawback is the usage of the IP address to redirect the request to the cache. This IP address is the one obtained when the cache controller resolves the host name of the target website. Using the IP address does not allow storing different contents from the same website in separate locations depending on their demand: the IP address allows redirecting the whole website to a cache, but not a single specific object. Next, we propose an improved version of the system design, where we added a feature to map a specific object to an IPv6 address. This IPv6 address becomes the temporary identifier of the object. Since we use the IP fields in a different way, the internal cache network is isolated from the usual traffic. We select an initial pool of IPv6 addresses, which is divided into several smaller pools, one per cache. The smaller pools of IPv6 addresses assigned to each cache can also be subdivided to separate the content types. For example, the n first bytes identify the cache, the y next bytes indicate the type of the object, and the remaining bytes identify the object itself. The IPv6 address used as identifier is then unique per cache and per content type. Thus, if we duplicated the content, the same object would be identified differently on different caches. With these IPv6 subnets, the SDN rules

Figure 6: The Operation Flow in the Optimized Caching System.

on the switches can be aggregated per cache (and content type), which allows applying QoS policies to different content objects.
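The identifier layout just described can be sketched with the standard `ipaddress` module. The base prefix (`2001:db8::/32`, the documentation prefix) and the field widths (16 bits for the cache, 8 for the content type) are illustrative assumptions, not values from the paper.

```python
# Sketch of the IPv6 identifier layout described above: the leading bits of
# the host part select the cache, the next bits the content type, and the
# remainder identifies the object. Base prefix and field widths are assumed.
import ipaddress

BASE = ipaddress.IPv6Network("2001:db8::/32")   # pool covering all objects

def object_identifier(cache_id, content_type, object_id,
                      cache_bits=16, type_bits=8):
    """Build the temporary IPv6 identifier of one cached object."""
    host_bits = 128 - BASE.prefixlen
    value = int(BASE.network_address)
    value |= cache_id << (host_bits - cache_bits)
    value |= content_type << (host_bits - cache_bits - type_bits)
    value |= object_id
    return ipaddress.IPv6Address(value)

def cache_prefix(cache_id, cache_bits=16):
    """Aggregated prefix matched by the static SDN rules for one cache."""
    host_bits = 128 - BASE.prefixlen
    net = int(BASE.network_address) | (cache_id << (host_bits - cache_bits))
    return ipaddress.IPv6Network((net, BASE.prefixlen + cache_bits))
```

Every identifier built for a cache falls inside that cache's aggregated sub-prefix, so a single static rule per cache (matching only the cache bits) suffices to route all its objects.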

Thus, all the requests from the mobile clients go through the proxy to the analyzer, which performs a mapping between the URL and an IPv6 address. Proxies are entry points that, based on the results of the analysis of the requests, query the cache controller to get the content identifiers. Then, the proxies forward the requests using those identifiers over the SDN network, which routes them to the right cache. In order to reduce the number of operations needed, the IPv6 addresses assigned as identifiers are based on the chosen location in the address pool. This allows the requests to be routed by static rules, set on all the SDN switches, that use masks to match only the bytes identifying the cache and forward each pool to the right cache. When retrieving a new object, the analyzer is given its identifier and is thus able to fetch it without the need for new rules.


It is worth noticing that the time spanned between two relocations must be selected carefully; we leave this study for future work.

We propose three methods for relocating an object. In the first one, the relocation is done by updating the analyzers from the cache controllers, so that subsequent requests are forwarded directly to the new cache location with a new content identifier. This method is referred to as the non-SDN relocation method in the remainder of this paper. The second method uses the SDN capabilities of the network. It consists of inserting on the switches a new, higher-priority rule that matches the exact content identifier and routes it to its new location. This rule matches instead of the rules routing the identifier pool to the previous cache, at least until the analyzers' entries for the relocated object become outdated. The rule carries a timer so that the identifier can be reused later; once their entries become outdated, the analyzers fetch the new identifier from the cache controller. Both methods have advantages and drawbacks, so we propose a third, hybrid method that selects, for every relocation, the method that is most efficient in terms of operations.
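The hybrid choice can be sketched as a simple per-relocation cost comparison. The cost model below, counting one operation per analyzer update or per switch rule, is our reading of the text; the function names are hypothetical.

```python
def relocation_cost_non_sdn(n_analyzers: int) -> int:
    # Non-SDN method: push the new content identifier to every analyzer.
    return n_analyzers

def relocation_cost_sdn(n_switch_rules: int) -> int:
    # SDN method: install one higher-priority exact-match rule per
    # switch that must divert the identifier (with an idle timeout).
    return n_switch_rules

def hybrid_relocate(n_analyzers: int, n_switch_rules: int) -> str:
    """Pick whichever method needs fewer update operations."""
    if relocation_cost_non_sdn(n_analyzers) <= relocation_cost_sdn(n_switch_rules):
        return "non-sdn"
    return "sdn"

assert hybrid_relocate(n_analyzers=2, n_switch_rules=5) == "non-sdn"
assert hybrid_relocate(n_analyzers=10, n_switch_rules=4) == "sdn"
```

A weighting coefficient (e.g. reflecting current SDN controller load) could be folded into either cost function to bias the choice, as discussed with the simulation results.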

7. IMPACT OF SYSTEM OPTIMIZATION

7.1 Simulation Setup

In order to determine the impact of those improvements, we have set up a simulation. We simulate a network of 16 base stations, on which we randomly distribute users (up to 10 per base station), and 31 switches, all of them SDN nodes with caching capabilities. The network is pyramidal and we assign increasing costs to the links as they get closer to the edge. Figure 7 presents the simulated network.
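The cost structure of such a pyramidal topology can be sketched as follows. This is a sketch under stated assumptions: we model the pyramid as a complete binary tree (a tree with 16 leaves has 31 nodes, matching the counts above) with link costs growing linearly toward the edge; Figure 7 and the paper's actual cost values are authoritative.

```python
# Assumption: complete binary tree, root cache at depth 0,
# base stations at depth 4; the link entering depth d costs d.
DEPTH = 4

def link_cost(depth: int) -> int:
    # Links closer to the edge cost more (linear growth, an assumption).
    return depth

def cost_from_cache(cache_depth: int, leaf_depth: int = DEPTH) -> int:
    """Cost of serving a base station from a cache on its path at `cache_depth`."""
    return sum(link_cost(d) for d in range(cache_depth + 1, leaf_depth + 1))

# Serving from the root traverses all four links; an edge cache, none.
assert cost_from_cache(0) == 1 + 2 + 3 + 4
assert cost_from_cache(4) == 0
```

Under this model, moving a popular object one level closer to its requesters saves the cost of the deepest remaining link on every request, which is what the relocation algorithm trades against the one-off cost of moving the object.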

Let time be discrete, let the requests be independent of each other, and let N be the number of objects on the Internet, ranked by popularity: the first object is the most popular and the last one the least popular. All users send a request at every time slot. Let a be the identifier of a base station, x_a the number of users on base station a, and t the interval between relocations.

We assume the requests of the users follow a Zipf-like distribution: the probability that a user fetches object i (its position in the popularity list) is

P_{N,α}(i) = (1 / i^α) / ( Σ_{j=1}^{N} 1 / j^α )

The value of α might be different for each network. We arbitrarily set this variable to 0.9 [5]. Besides, we limit the number of existing contents that can be fetched by the users to 1500 items, such as web pages or images, and we consider that the average size of the responses is 1 MB, based on the distribution of response sizes on the web [14].
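The request model can be sketched as follows. N = 1500 and α = 0.9 come from the text; the sampling helper itself is just one straightforward way to draw from this distribution.

```python
import random

N = 1500      # number of distinct objects (from the text)
ALPHA = 0.9   # Zipf exponent, set following Breslau et al. [5]

# P_{N,alpha}(i) = (1 / i^alpha) / sum_{j=1..N} (1 / j^alpha)
harmonic = sum(1.0 / j ** ALPHA for j in range(1, N + 1))
prob = [(1.0 / i ** ALPHA) / harmonic for i in range(1, N + 1)]

def draw_request(rng: random.Random) -> int:
    """Draw one object rank (1 = most popular) from the Zipf-like law."""
    return rng.choices(range(1, N + 1), weights=prob, k=1)[0]

rng = random.Random(0)
samples = [draw_request(rng) for _ in range(10_000)]
# Popular objects dominate: rank 1 appears far more often than rank 1000.
assert samples.count(1) > samples.count(1000)
assert abs(sum(prob) - 1.0) < 1e-9
```

One request per user per time slot then amounts to one such draw per user, which is what makes longer relocation intervals accumulate a more representative sample of the popularity distribution.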

7.2 Limitations of the Simulation

The main limitation of our model is the number of objects, which for simplicity is restricted to a total of 1500 items. Our simulation draws requests from a Zipf-like law, and laws of this kind depend directly on the number of contents. The arbitrary choice of α does not reflect reality either, since this value varies from one network to another. Finally, we do not focus

Figure 7: Simulated Network.


Figure 8: Network Load Reduction.

on physical constraints such as the processing and storage capacity of the nodes. In case of storage limitations, previous work has focused on replacement models to ensure the highest cache hit rates [3].

7.3 Simulation Results

Using the network load metric discussed earlier, Figure 8 presents the load reduction as a function of the relocation interval.

The load reduction exhibits three distinct phases. In the first phase, efficiency increases, mainly because the number of relocations decreases, which reduces the bandwidth consumed by relocations. The number of requests observed between relocations also increases and is thus more representative of the overall behavior of the clients for the most requested contents, which leads to better placement. In the second phase, efficiency decreases due to the lack of reactivity of the system: in practice, when the period between relocations is long, users request content that is not properly placed. This phase is a transition between a reactive system that detects and adapts to small changes in the users' requests and a system that relies more on long-term trends, as in the third phase. In the third phase, the network load reduction increases again; we consider the reason to be that users fetch the same content, which has been placed properly and no longer needs to be moved.

We then focus on the different relocation methods permitted by our system. Considering that updating a proxy or a switch is a single operation, we compare the number of operations needed for each method. Figure 9 displays the number of relocation operations per relocation period and shows that the hybrid method is the most efficient. The relocation method not using SDN performs slightly better with small relocation periods, but far worse with larger relocation periods, than the method using



Figure 9: Comparison of the Relocation Methods.


Figure 10: Proportion of each method in the relocation processes.

only SDN. A long relocation period induces many requests from most base stations for most of the contents, which in general makes updating the analyzers more expensive than configuring the switches. These results are biased by the small size of the network used. To correct for this in real networks, a coefficient adapted to the network situation (depending, for example, on the SDN controller load) can be set to favor one method over the other. The third method then provides a way to take advantage of both methods, even if the load on the controller is already significant.

Considering that the cost of an operation on a switch is similar to the cost of an operation on the analyzer (both are reconfigured over the network by a controller), Figure 10 presents the proportion of relocations processed with the SDN method and with the non-SDN method. Consistent with the previous results, the use of the non-SDN method is much higher with short relocation periods, and much lower with long relocation periods, than the use of the SDN method.

Thus, Figure 11 presents the average number of operations per relocation for each method and, for the third one, separates the operations on the analyzers (proxies) from those on the switches. The hybrid method's average number of operations per relocation is lower than the minimum of the two other methods taken separately, since in every case where the non-SDN method would be more expensive, the relocation


Figure 11: Average Number of Operations per Relocation.

is performed using SDN and vice versa.

8. CONCLUSIONS

Mobile operators are currently interested in deploying caching solutions to bring contents closer to the users. We therefore propose a new system that provides in-network caching in backhaul networks, using the capabilities of Software-Defined Networking. We first present a solution and analyze its performance; then, based on the obtained results, we design an optimized solution that we validate with simulations. The use of SDN allows us to dynamically relocate contents from one cache to another located in any part of the LTE network. Our method uses either a feature of the proxies or the SDN capabilities to perform the relocation, depending on their efficiency, and performs better than either of those methods separately. Our design places the content at its optimal location, minimizing bandwidth consumption and thus costs, and the proposed relocation method keeps the contents at their optimal locations at the lowest cost. This improves the quality of experience (QoE) and saves power, since the content is downloaded faster. Moreover, the system does not depend on a single relocation decision algorithm; it can use any algorithm implemented for that purpose. This solution aims at offering new caching possibilities to mobile operators in order to reduce resource consumption as much as possible. The obvious approach would be to place the cache closer to the user, but the results show that, depending on the number of users, a more centralized cache location has a beneficial impact on the overall network load. Another important remark is the relevance of SDN for deploying the proposed solution. The role of SDN is primarily to enable the removal of GTP tunneling, thus removing all limitations on where the cache can be placed in the network. Beyond this feature, SDN need not play a major role in caching, but the results show that using SDN can still reduce the number of operations required to relocate content. A final result worth considering is the interval between cache relocations, which seems to depend on the content. This is left for future work, since further analysis is required to identify the optimal interval between cache relocations.

9. ACKNOWLEDGMENTS


This work was done in the SIGMONA research project, funded by the Finnish Funding Agency for Technology and Innovation and industry. This work benefited from the advice of Jesus Llorente Santos and Vicent Ferrer Guasch.

10. REFERENCES

[1] AltoBridge. Data-at-the-edge, 2013.

[2] Apache Foundation. Traffic Server, 2013.

[3] A. Balamash and M. Krunz. An overview of web caching replacement algorithms. IEEE Communications Surveys & Tutorials, 6(2):44–56, 2004.

[4] A. Basta, W. Kellerer, M. Hoffmann, K. Hoffmann, and E.-D. Schmidt. A virtual SDN-enabled LTE EPC architecture: A case study for S-/P-gateways functions. In IEEE SDN for Future Networks and Services (SDN4FNS), pages 1–7, Nov 2013.

[5] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. Web caching and Zipf-like distributions: evidence and implications. In IEEE INFOCOM '99, volume 1, pages 126–134, March 1999.

[6] X. Cai, S. Zhang, and Y. Zhang. Economic analysis of cache location in mobile network. In IEEE Wireless Communications and Networking Conference (WCNC), pages 1243–1248, April 2013.

[7] A. Chanda and C. Westphal. ContentFlow: Mapping content to flows in software defined networks. arXiv preprint arXiv:1302.1493, 2013.

[8] J. Costa-Requena. SDN integration in LTE mobile backhaul networks. In International Conference on Information Networking (ICOIN), pages 264–269, Feb 2014.

[9] M. Hibberd. Tellabs "death clock" predicts end of profit for MNOs, February 2011.

[10] S. H. Kim. Intelligent caching solution to address quality-of-experience and data deluge in mobile multimedia delivery networks, 2013.

[11] L. Li, Z. Mao, and J. Rexford. Toward software-defined cellular networks. In European Workshop on Software Defined Networking (EWSDN), pages 7–12, Oct 2012.

[12] X. N. Nguyen, D. Saucez, and T. Turletti. Efficient caching in content-centric networks using OpenFlow. In IEEE INFOCOM, pages 1–2, April 2013.

[13] Open vSwitch Community. Open vSwitch, 2014.

[14] S. Souders. HTTP interesting stats, 2014.

[15] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: enabling innovation in campus networks. SIGCOMM Comput. Commun. Rev., 38(2):69–74, March 2008.

[16] M. Allman, V. Paxson, and E. Blanton. TCP Congestion Control. Internet RFC 5681, Sept 2009.