
Research Article
A Novel Fog Computing Based Architecture to Improve the Performance in Content Delivery Networks

Fatimah Alghamdi , Saoucene Mahfoudh, and Ahmed Barnawi

Faculty of Computing and Information Technology, King Abdul Aziz University (KAU), Jeddah, Saudi Arabia

Correspondence should be addressed to Fatimah Alghamdi; [email protected]

Received 18 May 2018; Revised 15 October 2018; Accepted 6 November 2018; Published 23 January 2019

Academic Editor: Miguel Garcia-Pineda

Copyright © 2019 Fatimah Alghamdi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Along with the continuing evolution of the Internet and its applications, Content Delivery Networks (CDNs) have become a hot topic with both opportunities and challenges. CDNs were mainly proposed to solve content availability and download time issues by delivering content through edge cache servers deployed around the world. In our previous work, we presented a novel CDN architecture based on a Fog computing environment as a promising solution for real-time applications. In such architecture, we proposed to use a name-based routing protocol following the Information Centric Networking (ICN) approach, with a popularity-based caching strategy to guarantee overall delivery performance. To validate our design principle, we have implemented the proposed Fog-based CDN architecture with its major protocol components and evaluated its performance, as shown through this article. On the one hand, we have extended the Optimized Link-State Routing (OLSR) protocol to be content aware (CA-OLSR), i.e., so that it uses content names as routing labels. Then, we have integrated CA-OLSR with the popularity-based caching strategy, which caches only the most popular content (MPC). On the other hand, we have considered two similar architectures for conducting performance comparative studies. The first is pure Fog-based CDN implemented by the original OLSR (IP-based routing) protocol along with the default caching strategy. The second is a classical cloud-based CDN implemented by the original OLSR. Through extensive simulation experiments, we have shown that our Fog-based CDN architecture outperforms the other compared architectures. CA-OLSR achieves the highest packet delivery ratio (PDR) and the lowest delay for all simulated numbers of connected users. Furthermore, the MPC caching strategy shows higher cache hit rates with fewer numbers of caching operations compared to the existing default caching strategy, which caches all the pass-by content.

1. Introduction

Content retrieval has become the primary usage of the Internet. To solve content availability and download time issues, Content Delivery/Distribution Networks (CDNs) [1, 2] have evolved as virtual overlay networks built on top of existing network infrastructures. With CDNs, content is distributed to cache servers located close to users, instead of to single, remote servers. CDNs have become the main part of Internet architecture, since they help improve the quality of Internet services (QoS) as well as the quality of users' experience (QoE) [2]. Globally, CDNs are expected to carry 70% of all Internet traffic by 2021, and most of them will be carrying video traffic [3]. Although CDNs succeed in delivering content with high availability and performance, they cannot properly handle the recent, heavy workload on edge servers [2, 4]. Accordingly, real-time and latency-sensitive applications, like video streaming, are delivered out of place, and the user experience is affected. To overcome this issue, we have proposed [5] a Fog-based CDN architecture, in which Fog nodes are introduced at the edges of CDN servers without disrupting the conventional CDN infrastructure. In our proposed model, Fog nodes are suggested to communicate with each other by the Information Centric Networking (ICN), name-based routing approach, and cache only the most popular content (MPC). Fog computing and ICN are two dominant technologies discussed in the future Internet research context.

This paper aims to prove the effectiveness of our Fog-based CDN model while analyzing its performance. To achieve this, we propose a new ICN name-based routing protocol by extending the Optimized Link-State Routing (OLSR) protocol to be content aware, briefly named CA-OLSR.

Hindawi Wireless Communications and Mobile Computing, Volume 2019, Article ID 7864094, 13 pages. https://doi.org/10.1155/2019/7864094


Figure 1: Abstract architecture of CDN (an origin server feeding cache servers in the UK, USA, India, and Saudi Arabia, each serving nearby end users).

Moreover, we incorporate the Most Popular Content (MPC) caching strategy into CA-OLSR. The performance of the proposed model is evaluated using a network simulator tool and is compared with two similar architectures. The first is another Fog-based CDN model, in which Fog nodes are introduced as native cache resources, while the request routing is performed by the original OLSR (IP-based routing). The second is the classical cloud-based CDN. Comparing Fog-based CDN with classical cloud CDN can show us the impact of Fog on CDN performance.

The main contributions of this work are as follows:

(i) Design a novel CDN architecture based on the Fog computing environment, in which Fog nodes are closer to users and can provide them with the most desired content.

(ii) Exploit ICN, name-based routing mechanisms in the Fog network by implementing a new, content aware routing protocol based on the OLSR ad hoc routing protocol, using the network simulator tool NS2.

(iii) Evaluate the performance of the proposed CDN compared with two similar approaches. The results show that our Fog-based CDN achieves significant performance gains and outperforms the compared architectures.

The rest of the paper is organized as follows. In Section 2, we provide an overview of the three Internet-based infrastructures that have been combined in our architecture: CDN, ICN, and Fog computing. Section 3 presents our Fog-based CDN architecture with its content delivery process and MPC caching strategy, while Section 4 focuses on the proposed CA-OLSR routing protocol design. Our simulation approach and results are discussed in Section 5. Finally, Section 6 concludes the paper and provides the direction of future work.

2. Internet Content Delivery: State of the Art

This section reviews the existing content delivery platforms, including CDN, ICN, and Fog computing, and provides a detailed analysis of their approaches.

2.1. Content Delivery Network (CDN). Content delivery networks, or content distribution networks (CDNs), are defined in the Internet Engineering Task Force (IETF) RFC 3466 [6] as a type of content network, which emerged centered around "content". CDNs can be seen as virtual overlay networks built on top of generic IP to solve performance problems related to network congestion and to improve web content accessibility in a cost-effective way [2, 7]. CDNs consist of several cache servers, also called surrogates, containing copies of web content and distributed around the world in order to satisfy user requests by utilizing the most appropriate server, rather than remote origin servers, as shown in Figure 1.

Therefore, CDNs benefit not only the end users, but also the content providers and the Internet service providers (ISPs) who deploy CDN servers in their networks [7]. The end user can perceive higher QoS, in terms of download time and bandwidth, resulting in improved user experience (QoE). The content provider can offer larger volumes of reliable services, and the ISP can benefit from reducing the traffic transmitted to its origin server (backbone).


Table 1: Comparison of information-centric and host-centric networking.

Naming: in information-centric networking, the content name is persistent and unique; in host-centric networking, the host name can easily break and is not aware of the existence of replica content.

Security model: information-centric networking secures the content itself, which allows for stronger, more flexible, and customizable content protection; host-centric networking secures the host-to-host communication channels.

Mobility of nodes: does not affect connection management in information-centric networking; makes the connection hard to manage in host-centric networking.

Failure tolerance: high in information-centric networking; low in host-centric networking.

Scalability of content distribution: high in information-centric networking (no need to modify any domain or architecture); limited in host-centric networking (depends on overlay dedicated systems).

However, because of the continuous increase in Internet traffic, CDNs cannot continue consistent, high quality content delivery due to overloading on their edge servers [2, 4]. Over the last decade, CDN architecture has seen rapid evolution to solve the scalability issue of CDN edge servers and optimize delivery quality as well as user experience. Owing to the limited length of the article, we only present some examples of the existing work in this context.

A hierarchical architecture with cooperative caching [8] and one with application level multicast [9] were proposed for delivering on-demand and live multimedia content, respectively. Such architectures scale well for increasing traffic, but CDN servers are expensive to deploy and maintain. To save infrastructure cost, CDNs were integrated with an infrastructure-less content delivery technology, Peer to Peer (P2P) [4, 10]. Such hybrid architecture combined the success and reliability of CDNs with the scalability and cost effectiveness of P2P.

Recently, leveraging cloud computing resources in CDNs has gained extensive attention [8]. Such cloud CDN models can provide high-performance delivery for wider ranges of applications without costly infrastructure. Furthermore, they can be established on either of the previously mentioned approaches, hierarchical CDN and hybrid CDN/P2P. In [9], a hierarchical cloud CDN was proposed, combining multicloud providers, while, in [11], P2P communication was incorporated in cloud CDN edge servers to improve the response time of video streaming services. Accordingly, cloud CDNs are considered the most valuable and cost-effective alternatives to traditional CDNs [8], especially for high bandwidth demanding applications. However, since such cloud CDNs do not exploit the full advantages of cloud computing [12], they can be defined as cloud infrastructure-assisted CDN models.

In our work, we have considered this issue and proposed to move away from the centralized cloud to its edge extensions (Fog computing) to offer rich cloud services at the edges.

2.2. Information Centric Networking (ICN). Information Centric Networking (ICN) [13] is a new networking paradigm proposed to provide highly scalable and efficient content distribution. ICN, like CDNs, has emerged centered around "content". The difference is that ICN provides content delivery functions within the communication infrastructure [14], more specifically, in its network layer.

It leverages router buffers as in-network caching and performs the request routing as a native network operation based on the content name [13]. In contrast, CDNs provide content delivery functions through overlay networks built over the traditional Internet [7], which has a host-centric infrastructure not aware of content. Caching in CDNs is provided as an application-layer service, and request routing is mostly performed by a third party, the Domain Name System (DNS) infrastructure [12]. The ICN approach was mainly proposed as a switch from the traditional IP network, and it is much more adapted to current Internet usage [15] (i.e., users care only about the content or service they want, and not about the machine that hosts it).

Using the ICN approach not only improves content distribution, but also has many motivating advantages [13, 15] against host-centric networks, as listed in Table 1.

A fundamental ICN function is routing content requests toward a particular node that has the requested content. To achieve this goal, different routing approaches have been proposed. They can be classified as name resolution approaches and name-based routing approaches [13]. In this paper, we consider the name-based routing approach, in which content names are included in the router's Forwarding Information Base (FIB) table. This approach eliminates the delay caused by the resolution process [16] and simplifies content delivery. Moreover, it selects the serving node based on network-related information, considering the server load and its proximity to users [14, 15]. Therefore, it improves the reliability of content access and uses network resources more efficiently compared to CDNs, which work over the top and are not aware of the underlying network status.
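To make the name-based routing idea concrete, the following minimal Python sketch (our own illustration; the names and table entries are hypothetical and not taken from any ICN implementation) contrasts a host-centric FIB keyed by IP prefixes with a name-based FIB keyed by content names, where forwarding needs no prior name resolution step:

    # Host-centric forwarding: the FIB maps destination IP prefixes to next hops.
    ip_fib = {"10.0.1.0/24": "eth0", "10.0.2.0/24": "eth1"}

    # Name-based forwarding: the FIB maps content names directly to next hops,
    # so a request can be forwarded toward a copy without first resolving the
    # name to a host address.
    name_fib = {"/videos/movie-123": "fog-node-B", "/news/article-42": "fog-node-C"}

    def forward_named_request(content_name):
        # None means no node is known to hold the content.
        return name_fib.get(content_name)

    print(forward_named_request("/videos/movie-123"))  # fog-node-B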

ICN name-based routing approaches can be designed within overlay networks, as in TRIAD [17], or within clean slate networks, as in Content Centric Networks (CCNs) [18].

Although ICN outperforms CDNs in terms of routing efficiency, it still lacks the scalability to be deployed on a widespread range [13, 15]. Furthermore, the application-awareness of CDNs is more efficient than in-network caching and is required for improving content delivery performance [19, 20].

In this work, we aim to get the best of both worlds by exploiting ICN name-based routing in the second level of the proposed architecture, which is the Fog network, while keeping access to conventional CDNs at the first level.


Table 2: Comparison of cloud computing and Fog computing.

Deployment: the cloud is centralized; the Fog is distributed in regional areas and related to specific locations.

Distance to users: the cloud is faraway, and users connect to it through IP networks; the Fog is in the proximity, and users often connect through single-hop connections.

Target user: the cloud serves general Internet users; the Fog mainly serves mobile users.

Hardware: the cloud offers scalable storage space and computing power; the Fog has limited storage, computing power, and wireless interface.

Latency: high for the cloud; low for the Fog.

Service type: the cloud provides global information collected worldwide; the Fog provides limited, localized information services related to specific deployment locations.

Figure 2: Abstract architecture of Fog computing [21] (from the data center cloud with application hosting and management, through core networking and services over the IP/MPLS core, to the multi-service edge serving field area and smart things networks of embedded systems and sensors over 3G/4G/LTE/WiFi/Ethernet/PLC, wired or wireless).

Because the clean slate approach requires significant and costly modifications, we choose to follow the overlay approach adopted over the existing Internet infrastructure.

2.3. Fog Computing. Fog computing technology was invented recently by Cisco as a promising computing platform to support future Internet of Things (IoT) applications, which are mostly critical (i.e., latency-sensitive) applications [21]. Fog computing extends cloud computing to the edge of the network to eliminate the delay caused by transferring data to the remote cloud [22]. Furthermore, it has many characteristics for supporting future IoT applications, to name but a few: dense geographical distribution, proximity to end users, mobility support, and real-time interaction. Fog computing architecture shows intermediate layers composed of Fog nodes located between traditional cloud data centers and IoT end devices, as depicted in Figure 2.

While Fog nodes provide localized services deployed in different locations, the cloud provides global services and acts as a central controller for those distributed Fog nodes. In addition, the cloud is like a central information repository from which the Fog nodes get the requested information for their own caches to serve subsequent requests locally [23]. Once an end device connects to a Fog node, the Fog node can serve it either directly or with assistance from the cloud.

Thus, there is an essential interaction between Fog nodes and the cloud [21], and many applications require both Fog localization and cloud globalization.

Table 2 summarizes the differences between cloud computing and Fog computing according to [23, 24].

Fog computing is a generic computing paradigm aiming to bring cloud services (computing, storing, and networking) closer to physical IoT devices through wired or wireless technologies [21]. In addition, Fog nodes can be wired devices, such as edge routers and switches; wireless devices, such as access points and cellular base stations; or mobile devices, such as laptops and smartphones [23, 25, 26]. In our model, we have addressed Fog usage as a content delivery technique, more specifically, to provide caching and routing services. The detailed architecture that we have considered is described in the next section.

To the best of our knowledge, no previous studies have shown the impact of integrating Fog computing with CDN systems [26].

3. Proposed Fog-Based CDN Architecture

Our review concludes that CDNs and Fog are Internet-based infrastructures, and both have a similar style consisting of origin servers surrounded by sets of surrogates at the edges of networks. While CDN surrogates are pure cache servers deployed on widespread ranges, Fog nodes are intelligent, small cloud units providing computing, storage, and networking services in localized sites. Thus, introducing Fog computing on CDN systems to provide additional levels of content delivery has high potential for solving the scalability issues of CDN edge servers. It furthermore optimizes the delivery of modern Internet applications. Additionally, ISPs can benefit from our proposed architecture since Fog nodes can reduce traffic transmitted on the links that connect their network with the Internet backbone and other ISP networks. The most promising advantage of our proposed Fog-based architecture resides in supporting the emerging 5G wireless technology to meet the requirements of latency-sensitive IoT applications/services.

Without disrupting the existing CDN infrastructure, we have proposed deploying Fog computing layers between the cloud-based CDN layers and the end user layers.


Figure 3: Abstract architecture of Fog-based CDNs (a cloud layer hosting the cloud-based CDN servers, a Fog layer of Fog nodes deployed per location, and an end users layer).

Our architecture of Fog-based CDNs can be further abstracted to a three-level, hierarchical model, as shown in Figure 3. At the cloud layer, the cloud-based CDN system is deployed, and the content is disseminated on the CDN servers located inside the cloud. At the Fog layer, the Fog nodes are deployed, forming more edge networks, which are geographically closer to the users than CDN servers. The last level represents the end users' devices utilizing Fog nodes to connect to the Internet.

The communication model considered in the Fog network is as follows. Each Fog node is implemented with a wireless interface, and the user's device can directly connect to it through a single-hop wireless connection. The user's device uses the nearby Fog node to connect to the Internet. The communication between Fog nodes that exist in the same location is handled through a mobile ad hoc network (MANET) routing protocol, as suggested by [24].

We have proposed to implement ICN networking methodology into the Fog network, aiming to achieve a highly efficient content delivery at the edges of CDNs. The proposed architecture should contribute toward improving CDN performance by utilizing Fog-based architecture, where Fog nodes act as ICN nodes. Unlike pure ICN architectures that require changing basic network operations and likely changing hardware/software configurations, our proposed architecture brings ICN benefits toward the access network in the most cost-effective way; it deals with plain IP packets, and the next hop in each node is determined by consulting IP Forwarding Information Base (FIB) entries. This flexible strategy for deploying ICN functionalities at the edge of the network is certainly beneficial from operational and economical points of view in comparison to the pure ICN network.

More specifically, we consider that each Fog node has a cache store and a routing table, and the delivery process is realized via three major phases:

(i) Request routing, using the name-based routing approach.

(ii) Content routing back to the requester, using the conventional (IP) address-based routing approach.

(iii) Caching the popular pass-by content.

For name-based routing, Fog nodes that exist in the same location exchange information about how to reach given content (represented by its name) and then set up the FIB tables with content names as destinations. For the Fog nodes to maintain the same and correct view of the others, the FIB tables containing the routing and the cached content information are periodically exchanged.

When a user wants to find specific content, the user's device sends a request containing the content name to the nearest Fog node. Once the Fog node receives the request, it looks for the content name in its own cached records. If the content is available in the local cache store, the Fog node sends the content to the requester using the conventional (IP) address-based routing approach. Otherwise, the Fog node fetches the missing content from the Fog node that has it cached. More specifically, it looks through its FIB routing table to determine which Fog node hosts the requested content, and then it forwards the request toward the selected node, using the conventional forwarding (IP-based) mechanism. If multiple Fog nodes host the requested content, the nearest node (in terms of hop count) will be selected. That is, Fog nodes support cooperative caching and can work independently from CDN servers, except when the requested content is not available in the Fog nodes. In this case, the request is forwarded to the CDN servers.
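The per-request decision made by a Fog node can be sketched as follows. This is only an illustrative Python model under our own assumptions about the data structures (a local cache dictionary and an FIB mapping content names to the Fog nodes that cache them, with hop counts); it is not the NS2 implementation used in the paper:

    # Illustrative model of a Fog node's per-request decision (names are hypothetical).
    class FogNode:
        def __init__(self, cache, fib, cdn_server="cloud-cdn"):
            self.cache = cache            # {content_name: content}
            self.fib = fib                # {content_name: [(node_address, hop_count), ...]}
            self.cdn_server = cdn_server  # fallback when no Fog node holds the content

        def handle_request(self, content_name):
            """Return (action, target) describing how the request will be satisfied."""
            if content_name in self.cache:
                # Local hit: reply directly over conventional IP routing.
                return ("serve_from_local_cache", None)
            holders = self.fib.get(content_name, [])
            if holders:
                # Cooperative caching: forward to the nearest holder in hop count.
                nearest = min(holders, key=lambda entry: entry[1])[0]
                return ("forward_to_fog_node", nearest)
            # Miss in the whole Fog network: fall back to the cloud CDN server.
            return ("forward_to_cloud_cdn", self.cdn_server)

    node = FogNode(cache={"/vid/1": b"..."},
                   fib={"/vid/2": [("fog-B", 2), ("fog-C", 1)]})
    print(node.handle_request("/vid/2"))  # ('forward_to_fog_node', 'fog-C')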

As content is routed back toward the requester, it will be cached by the edge Fog node in order to satisfy subsequent requests from the local cache. Caching all the pass-by content involves many challenges. On the one hand, it causes overload on the memory, bandwidth, and processing. On the other hand, it may lead to removing popular content to make space for unpopular content, decreasing caching performance and user experience. To improve caching efficiency, we have proposed to use a popularity-based caching strategy allowing the Fog nodes to cache only the most popular content (MPC) in their deployment locations. The design detail is given in the next subsection.


Figure 4: Fog content delivery flowchart.

3.1. Most Popular Content (MPC) Caching Strategy. Within a unit of time, each Fog node locally counts the number of requests for content that is not found in its own cache. It maintains a popularity table to store the content name along with its popularity count (i.e., number of requests). Once a local popularity count for specific content reaches a popularity threshold, the content is tagged as popular and cached by the node.

Caching strategies and replacement policies are closely related and are required to manage the node's cache [16]. The Least Recently Used (LRU) policy is one of the most common existing replacement policies, and we have proposed extending it in our architecture. Fog nodes maintain a popularity count of each piece of cached content. Once the buffer of the Fog node is full, the Fog node will decide to remove the least popular content from the LRU list to make space for the new content.

Accordingly, once a user requests cached content, the user will receive the content directly from the relevant Fog node, rather than from the CDN server or the original distant cloud server, unless the cached content has been replaced.
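A compact sketch of this strategy is given below. It assumes a per-node popularity table for uncached content, a bounded cache, and eviction of the least popular cached item (falling back to least-recent order on ties); the class and parameter names are ours, not the authors' implementation:

    from collections import Counter, OrderedDict

    # Minimal model of the MPC caching strategy with LRU + least-popular replacement.
    class MPCCache:
        def __init__(self, capacity=10, threshold=5):
            self.capacity = capacity
            self.threshold = threshold
            self.pending = Counter()     # popularity table for content not yet cached
            self.cache = OrderedDict()   # cached content, least recently used first
            self.popularity = Counter()  # request counts for cached content

        def lookup(self, name):
            """Return True on a cache hit; count and possibly admit content on a miss."""
            if name in self.cache:
                self.cache.move_to_end(name)   # refresh recency
                self.popularity[name] += 1
                return True
            self.pending[name] += 1
            if self.pending[name] >= self.threshold:   # the content became popular
                self._admit(name, self.pending.pop(name))
            return False

        def _admit(self, name, count):
            if len(self.cache) >= self.capacity:
                # Evict the least popular entry; min() scans in recency order,
                # so ties fall on the least recently used item.
                victim = min(self.cache, key=lambda n: self.popularity[n])
                del self.cache[victim]
                del self.popularity[victim]
            self.cache[name] = True
            self.popularity[name] = count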

Like the ICN name-based routing mechanism, our caching strategy is involved in the routing phase. The next section will discuss, in detail, the design of our content aware routing protocol, CA-OLSR. The flowchart of the proposed content delivery process, as performed by a Fog node, is presented in Figure 4.

4. CA-OLSR Routing Protocol Design

In the previous section, we proposed a Fog network configuration acting as a MANET. Many routing protocols have been proposed for MANETs [27]. Among them, we have selected the OLSR [28] protocol since it is a table-driven protocol, and it has very satisfying performance due to its multipoint relaying (MPR) strategy, which is what a Fog content delivery network requires.

We have extended OLSR to be content aware (CA-OLSR). While OLSR routes data packets based on the destination address specified in the packet, CA-OLSR differentiates between the request data packet (Request Packet) and the reply data packet (Content Packet) with two different routing strategies. A Request Packet is routed toward the requested content based on the content ID, while a Content Packet is routed back to the requester based on the requester's address.


Figure 5: IP packet header used with CA-OLSR (the standard IPv4 header fields: Version, IHL, Type of Service, Total Length; Identification, Flags, Fragment Offset; TTL, Protocol, Header Checksum; Source Address; Destination Address; Options, Padding; the Option field carries a Code, a Length, and the Content ID).

Table 3: CA-OLSR routing table (columns: Destination Address, Cached Content IDs, Next Hop Address, Distance).

Similar to ICN, our proposed system deals with content as blocks of packets, but the packets in our system are regular data packets, i.e., IP packets. As IP packets do not contain a content ID field, we have specified it using the Option field. The format of the packet header used with CA-OLSR is presented in Figure 5.
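As an illustration of carrying a content ID in the Option field, the sketch below packs a code byte, a length byte, and a fixed-width content ID into option bytes; the option code value and the 4-byte ID width are our own assumptions for the example, not values defined in the paper or in any standard:

    import struct

    OPTION_CODE = 0x9E  # hypothetical option code for the content ID option

    def pack_content_option(content_id):
        """Code (1 byte), Length (1 byte, total option length), Content ID (4 bytes)."""
        return struct.pack("!BBI", OPTION_CODE, 6, content_id)

    def unpack_content_option(option_bytes):
        code, length, content_id = struct.unpack("!BBI", option_bytes)
        assert code == OPTION_CODE and length == 6
        return content_id

    assert unpack_content_option(pack_content_option(1234)) == 1234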

CA-OLSR performs the main functionalities of OLSR, with some modifications related to content consideration, as follows.

4.1. Neighbor Detection. Each node periodically broadcasts, to all one-hop neighbors, its Hello messages containing the IDs of the content available in its cache store and its neighbors' information, including the IDs of the content available in their cache stores. Once a node receives a Hello message, it can detect its one-hop and two-hop neighbors with their cached content IDs, and, correspondingly, the neighbor tables will be built. In addition, a node can select its MPRs in the same way as in the original OLSR.

4.2. Topology Discovery. MPRs periodically generate Topology Control (TC) messages, containing information about the network topology and the content cached in network nodes. TC messages are flooded by MPRs to all nodes in the network. More specifically, TC messages advertise the addresses of MPR selector nodes with their cached content IDs. Once a node receives a TC message, it can discover the network topology with the content locations and build its topology table.
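To illustrate the extension, the sketch below builds Hello and TC style payloads that additionally carry cached content IDs; the dictionary layout is a simplification of ours and not the OLSR message format of RFC 3626:

    # Simplified message payloads; real OLSR messages carry many more fields.
    def build_hello(node_addr, own_cached_ids, neighbor_caches):
        """One-hop broadcast: own cached content IDs plus known neighbor caches."""
        return {
            "type": "HELLO",
            "originator": node_addr,
            "cached_content": sorted(own_cached_ids),
            "neighbors": {n: sorted(ids) for n, ids in neighbor_caches.items()},
        }

    def build_tc(mpr_addr, selector_caches):
        """Flooded by an MPR: MPR selector addresses with their cached content IDs."""
        return {
            "type": "TC",
            "originator": mpr_addr,
            "advertised": {addr: sorted(ids) for addr, ids in selector_caches.items()},
        }

    hello = build_hello("fog-A", {"c1", "c3"}, {"fog-B": {"c2"}})
    tc = build_tc("fog-B", {"fog-A": {"c1", "c3"}})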

4.3. Routing Table Calculation. Like OLSR, the node builds its routing table based on the information contained in its neighbor tables and topology table, so the shortest route to each destination is given. In CA-OLSR, we have extended the recorded route entries to contain the information of the destinations with their cached content IDs, as shown in Table 3.

Once the node receives a Request Packet for forwarding, it looks for the requested content ID, specified in the packet, in its routing table to find the destination that has the content. If multiple destinations have the requested content, the node selects the closest one. That means the node finds the optimal route (in terms of hop counts) to the requested content. Content Packets are routed in the same way as in the original OLSR (i.e., based on the packet destination address).
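A small sketch of this lookup over entries shaped like Table 3 follows; the table contents are made-up values used only to show how the closest holder of a content ID would be chosen:

    # Each entry mirrors a row of Table 3:
    # (destination address, cached content IDs, next hop address, distance in hops).
    routing_table = [
        ("fog-B", {"c1", "c4"}, "fog-B", 1),
        ("fog-C", {"c4", "c7"}, "fog-B", 2),
        ("fog-D", {"c7"},       "fog-D", 1),
    ]

    def route_request(content_id):
        """Return (destination, next hop, distance) of the closest holder, or None."""
        holders = [entry for entry in routing_table if content_id in entry[1]]
        if not holders:
            return None  # no Fog node holds it; forward the request to the cloud CDN
        destination, _, next_hop, distance = min(holders, key=lambda entry: entry[3])
        return destination, next_hop, distance

    print(route_request("c4"))  # ('fog-B', 'fog-B', 1)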

4.4. Applying CA-OLSR in Fog-Based CDN Architecture. We have extended CA-OLSR to identify the hierarchical architecture of Fog-based CDNs, so it can route unsatisfied requests to cloud CDN servers. In addition, we incorporate it with the MPC caching strategy explained in the previous section. Thus, our architecture involves the caching strategy in the routing phase, like the ICN name-based routing mechanism.

To clarify the workflow scenario of the proposed Fog-based CDNs (including routing and caching strategies) and the communication flow between system items (user/Fog node/cloud CDNs), we present different use cases as follows. In the beginning, the user connects to the nearest Fog node and sends a Request Packet containing the ID of the desired content. Then, one has the following:

Use Case 1. A Fog node receives a request for content available in its cache store. The node sends the content directly back to the requester.

Use Case 2. A Fog node receives a request for content that is available in another Fog node's cache store. The node looks for the content ID in its routing table and routes the Request Packet to the nearest node containing the content. The destination node sends the content back to the requester through its connected Fog node.

Use Case 3. A Fog node receives a request for content that is not available in the Fog network. The content ID cannot be found in the node's cache store nor in its routing table, so the node forwards the Request Packet to the cloud CDN server. The cloud CDN sends the content back to the requester through its connected Fog node.

Use Case 4. Within a unit of time, a Fog node receives multiple requests for a particular content unavailable in its cache store. The number of incoming requests reaches the popularity threshold, and the Fog node decides to cache this popular content in its store so that it can satisfy subsequent requests for that content. Note that, if the buffer of the Fog node is full, the Fog node will decide to remove the least popular content, as explained in Section 3.


Table 4: Simulation environment parameters.

Network type: static
Packet size: 512 bytes
Duration: 1000 s
Connection type: CBR/UDP
Simulation area: 500 m x 500 m
Transmission range: 250 m
Total content in CDN: 1000
Replacement policy: LRU (Least Recently Used) + least popular
Popularity model: Zipf's distribution with exponent of 1

5. Performance Evaluation

In this study, we have used the simulation-based performance evaluation approach to prove the effectiveness of our model. Moreover, we have conducted a comparative performance study considering the following architectures:

(i) Our Fog-based CDN uses CA-OLSR (name-based routing) along with the MPC caching strategy implemented in Fog nodes. We denote this approach by CA-OLSR.

(ii) Pure Fog-based CDN uses the original OLSR (IP-based routing) along with the default caching strategy implemented in Fog nodes. We denote this approach by NCA-OLSR (Non-Content Aware OLSR).

(iii) Classical cloud-based CDN uses the original OLSR routing. We denote this approach by Cloud.

Using the network simulator tool NS2 [29], we have implemented each architecture by considering it as an interdomain network.

In our experiments, we have assumed that the popularity of each content follows Zipf's Law [30] to ensure that some content has high popularity. Regarding the other parameters introduced by the MPC caching strategy, the cache size and the popularity threshold, we have selected their values as explained below. The cache size of the Fog node is assigned to be 1% to 2% of the CDN cache size to avoid load at the node, since the Fog node has limited storage capacity. Popularity threshold values are selected according to extensive simulation processes in which the Zipf's Law exponent, Fog cache size, and total content are fixed, and the resulting cache hit rates are varied.
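For illustration, the sketch below generates a request workload from a bounded Zipf distribution over the 1000-item catalogue with exponent 1, matching Table 4; the sampling procedure and request count are our own choices, not the exact generator used in the simulations:

    import numpy as np

    CATALOGUE_SIZE, EXPONENT, NUM_REQUESTS = 1000, 1.0, 10_000

    # Bounded Zipf popularity: probability of rank r is proportional to r**(-EXPONENT).
    ranks = np.arange(1, CATALOGUE_SIZE + 1)
    probs = ranks ** (-EXPONENT)
    probs /= probs.sum()

    rng = np.random.default_rng(0)
    requests = rng.choice(CATALOGUE_SIZE, size=NUM_REQUESTS, p=probs)  # 0 = most popular item

    # A small set of items attracts most requests, which is what makes a
    # popularity threshold effective for the MPC caching strategy.
    top10_share = np.isin(requests, np.arange(10)).mean()
    print(f"Share of requests for the 10 most popular items: {top10_share:.2f}")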

The environment parameters considered in our simulation are reported in Table 4.

Content delivery performance is essentially affected by the routing protocol and caching strategy. Hence, our experiments are designed to evaluate both, as follows.

5.1. Network Performance Analysis. To evaluate the routing protocol, we have studied the impact of the number of connected users and the number of Fog nodes on the network performance in terms of data transfer delay and packet delivery ratio (PDR).

Figure 6: Data transfer delay by varying number of users.

We have considered the delay from sending a Request Packet to receiving a Content Packet. PDR is the ratio of the received data size to the sent data size. Through the following routing evaluation experiments, we keep the caching parameters, in terms of popularity threshold and cache size, fixed to five and ten, respectively.
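Both measures can be computed directly from per-packet records of what was sent and what was received; the record layout below is assumed by us purely for illustration:

    # Hypothetical per-request records:
    # (sent_bytes, received_bytes, request_time_s, content_arrival_time_s or None).
    trace = [
        (512, 512, 0.10, 0.35),
        (512, 512, 0.20, 0.42),
        (512, 0,   0.30, None),   # lost: counted as sent data but not received
    ]

    sent_bytes = sum(record[0] for record in trace)
    received_bytes = sum(record[1] for record in trace)
    pdr = received_bytes / sent_bytes  # ratio of received data size to sent data size

    delays = [t_rx - t_tx for _, rx, t_tx, t_rx in trace if rx]
    average_delay_ms = 1000 * sum(delays) / len(delays)  # Request Packet to Content Packet

    print(f"PDR = {pdr:.2f}, average delay = {average_delay_ms:.0f} ms")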

5.1.1. Impact of Number of Connected Users. In this experimental setup, the number of users varied from 50 to 100, with an increment of 25 users, whereas the number of Fog nodes was fixed at 15 nodes. Other parameters were kept as previously mentioned in Table 4.

Figure 6 shows the data transfer delay by varying the number of connected users. As expected, when the number of users increased, the traffic load increased, causing data transmission delay. This delay is shown clearly in the case of the cloud CDN approach, since the user request is always satisfied from the distant cloud CDN server. The delay in the Fog CDN approaches, CA-OLSR and NCA-OLSR, is lower than that in cloud CDN. This is because the users can receive the content from the distributed nearby Fog nodes in a shorter time than they can receive it from single, distant cloud CDN servers. CA-OLSR achieves the lowest delay compared to the NCA-OLSR and cloud CDN approaches. This is because the CA-OLSR approach allows fetching of the missed content from other Fog nodes due to its content aware routing protocol. Furthermore, its caching strategy guarantees that the most popular content is available in the Fog nodes, which reduces the need for transferring requests to the distant cloud CDN servers. As shown, the CA-OLSR approach reduces the delay by about 28% and 87% compared to the NCA-OLSR and cloud CDN approaches, respectively. In addition, the delay with CA-OLSR increases slowly when the load in the network increases, which guarantees very high QoS for real-time applications. Figure 7 shows the packet delivery ratio (PDR) by varying the number of connected users. Increased traffic load lowers PDR using the cloud CDN approach, but it does not affect the PDR performance of the CA-OLSR and NCA-OLSR approaches.


Figure 7: Packet delivery ratio by varying number of users.

Hence, both Fog CDN approaches guarantee that users will receive most of the sent data, even with high traffic loads. This results from satisfying user requests from nearby Fog nodes instead of the distant cloud CDN server. Therefore, the Fog CDN approaches, CA-OLSR and NCA-OLSR, outperform the cloud CDN approach in terms of PDR. This result shows the improvement that Fog computing can introduce to CDN performance. The improved PDR (almost 100%) shows promising results, especially when QoS is concerned, such as in e-health care or any emergency application. As QoS is improved, QoE will improve as well.

5.1.2. Impact of Number of Fog Nodes. In this experiment, the number of Fog nodes varied from ten to 20 with an increment of five nodes, whereas the number of users was fixed at 50. Other parameters were kept as previously mentioned in Table 4. Although the cloud CDN approach is not affected by this factor (i.e., the number of Fog nodes), we included it for the purpose of comparison. Concerning the cloud performance results, we consider 20 hops between the users and cloud CDN servers. Figure 8 shows that the Fog CDN approaches outperform the cloud CDN approach in terms of data transfer delay, as the content is delivered from the edge Fog nodes rather than from distant cloud CDN servers. Based on this experiment, the benefit of edge computing is evident since it has reduced the content delivery delay significantly. In the Fog CDN approaches, CA-OLSR and NCA-OLSR, Fog nodes can satisfy user requests from their local caches if the content is available within the caches. When the requested content is missing from the local cache, CA-OLSR and NCA-OLSR act in different ways. In the NCA-OLSR approach, the missing content is fetched from the distant cloud CDN server, even if the content is available in nearby Fog nodes. However, the CA-OLSR approach fetches the missing content from the nearest Fog node that has the requested content cached. For this reason, our CA-OLSR approach outperforms the NCA-OLSR approach in terms of delay. This result is very interesting for future Internet applications that require low delay, such as audio or video streaming, or any real-time application.

Figure 8: Data transfer delay by varying number of Fog nodes (the cloud curve shows the average delay at 20 hops).

Figure 9: Routing overhead in terms of the size of the routing table.


Despite the advantages of CA-OLSR, it involves numerous overheads related to name-based routing. More specifically, creating and updating CA-OLSR routing tables consumes more computational resources in terms of memory, bandwidth, and processing because the routing tables include content names. Figure 9 shows that the routing tables of CA-OLSR (using the same circumstances of 20 nodes and 50 users) are the largest compared to those of the NCA-OLSR and cloud CDN approaches.

Even though CA-OLSR, NCA-OLSR, and the cloud CDN approach exchange the same number of Routing Packets to build routing tables, as depicted in Figure 10, the size of these packets is higher in CA-OLSR because they include content IDs in Hello and TC messages, which, in turn, leads to more communication overhead. This overhead depends on the content ID format and the number of content pieces stored in each node. We reduced this overhead in our approach by using a popularity-based caching strategy in which only the MPC is cached in the Fog node.


Figure 10: Routing overhead in terms of number of Routing Packets.


5.2. Caching Performance Analysis. To evaluate the caching strategy of the Fog CDN approaches, we have studied the impact of the popularity threshold and the Fog node cache size on caching performance in terms of the following measures:

(i) Local cache hit rate (LCHR): the ratio of requests that are satisfied by the local caches of attached Fog nodes (HL) versus the total number of requests (R).

LCHR = HL / R (1)

(ii) Cache hit rate (CHR): the ratio of requests that are satisfied by all nodes in the Fog network (H) versus the total number of requests (R).

CHR = H / R (2)

(iii) Number of caching operations: computed as the number of cached elements.

Note that the cloud CDN approach is not included in this analysis. Through the following caching evaluation experiments, we keep the network parameters, in terms of number of nodes and number of users, fixed at 15 nodes and 50 users.
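For completeness, the three measures above can be obtained from simple counters kept during a run; the counter names in this short sketch are ours:

    # local_hits: requests satisfied by the attached Fog node's cache (HL)
    # fog_hits: requests satisfied anywhere in the Fog network (H), local hits included
    # requests: total number of requests issued (R)
    # cache_inserts: number of caching operations performed
    def caching_metrics(local_hits, fog_hits, requests, cache_inserts):
        return {
            "LCHR": local_hits / requests,        # equation (1)
            "CHR": fog_hits / requests,           # equation (2)
            "caching_operations": cache_inserts,
        }

    print(caching_metrics(local_hits=180, fog_hits=260, requests=1000, cache_inserts=75))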

5.2.1. Impact of Popularity Threshold. In this experiment, the popularity threshold value varied as one, two, five, and ten, whereas the cache store size was fixed at ten. Other parameters were kept as previously mentioned in Table 4.

As shown in Figure 11, the CA-OLSR approach, with different popularity threshold values, outperformed the NCA-OLSR approach because the latter caches content without considering popularity. The CA-OLSR approach, with a popularity threshold of five, shows the highest local cache hit rate. When the popularity threshold increases from one to five, the local cache hit rate also increases.

Figure 11: Local cache hit rate by varying popularity threshold.

Figure 12: Caching operations by varying popularity threshold.

However, when the popularity threshold is raised to ten, the local cache hit rate decreases. A high local cache hit rate is important for multimedia applications that consume large amounts of bandwidth. As such multimedia can be delivered from nearby Fog nodes, the network bandwidth can be saved, and the QoS will satisfy users. As shown in Figure 12, both the NCA-OLSR approach and the CA-OLSR approach with a popularity threshold of one have the highest number of caching operations. A popularity threshold set to five lowers caching operations by 90% compared with a threshold of one, although it achieves the highest local cache hit rate, as shown in Figure 11. As the popularity threshold value increases, the number of caching operations is reduced, saving memory, processing, and bandwidth resources.

5.2.2. Impact of Fog Cache Size. In this experiment setup, the cache store size of Fog nodes varied at ten, 15, and 20, whereas the popularity threshold was fixed at five. Other parameters were kept as previously mentioned in Table 4.


Figure 13: Cache hit rate by varying cache size.

Figure 14: Caching operations by varying cache size.


Clearly, as the size of the cache store increased, the Fog nodes made more content available in their caches, and then most of the requests could be satisfied from the Fog network, increasing the cache hit rate. Figure 13 shows that the NCA-OLSR approach has a lower cache hit rate compared to the CA-OLSR approach. This is because (1) it lacks the content aware routing mechanism, and (2) it caches content without considering its popularity. Consequently, the node in NCA-OLSR often cannot satisfy the most-requested content, neither from its local cache nor from other Fog nodes, which in turn leads to low cache hit rates compared to CA-OLSR.

Figure 14 shows the number of caching operations by varying the cache size of Fog nodes. When the cache size is small, it will be filled faster. Thus, the node continuously replaces existing content to cache missing content. According to our replacement policy, the least popular content from the LRU list is replaced.

On the other hand, large Fog nodes make larger amounts of content available in their local caches and perform fewer replacements and caching operations. This justification applies to NCA-OLSR more than to CA-OLSR because the CA-OLSR approach assigns a popularity threshold to cache content, and the node cache store is continuously filled by only the most popular content. Therefore, the CA-OLSR approach can satisfy user requests with fewer operations compared to NCA-OLSR. Moreover, caching only the most popular content maintains similar numbers of caching operations, whether the cache store size is large or small.

6. Conclusion

In this paper, we presented a novel content delivery network (CDN) based on the Fog computing environment and the Information Centric Networking (ICN) approach. Fog nodes are introduced at the edge of CDNs to reduce the overload on CDN servers. Fog nodes in each location are assumed to provide local content delivery services using the network caching feature and the ICN name-based routing approach. As Fog nodes have limited storage and processing capabilities, we used a popularity-based caching strategy in which only the most popular content (MPC) would be cached to avoid the load at nodes without decreasing content availability.

To meet our goal, we modified the Optimized Link-State Routing (OLSR) protocol to be content aware (CA-OLSR). Moreover, we integrated CA-OLSR with the MPC caching strategy. To validate our design principle, we implemented our proposed architecture and major protocol components in a network simulator tool, NS2. Then, we evaluated its performance against two similar architectures. The first was pure Fog-based CDN implemented by the conventional, IP-based OLSR routing protocol along with the default caching strategy. The second was classical cloud-based CDN, also implemented by the conventional IP-based OLSR routing protocol.

Through extensive simulation experiments, we showed that our Fog-based CDN architecture outperforms the other compared architectures. CA-OLSR delivers content with the lowest delay and highest packet delivery ratio (PDR) for all the simulated numbers of connected users. Moreover, its MPC caching strategy showed high cache hit rates with few caching operations. Therefore, we have shown that this architecture is a promising solution for delivering real-time or latency-sensitive applications. In addition, the inherent characteristics of Fog computing can offer high QoS for future Internet applications that will satisfy users' QoE.

Our architecture design assumed that the Fog networks are MANETs. We also carried out our performance analysis in a mesh network without considering node mobility. In future work, we will expand our experiments to analyze mobile content delivery.

Finally, although the localized delivery service provided by our architecture reduces delivery latency and inter-ISP network traffic, it involves numerous overheads related to combining caching and routing at a single entity.


On the one hand, caching content at a network entity will consume its memory, bandwidth, and processing resources. This limitation is handled in our architecture by using the MPC caching strategy, which caches less content, saving network resources. On the other hand, including content names in the Forwarding Information Base (FIB) table causes a network overhead, resulting from table updating traffic. As future work, we will consider new parameters to minimize the FIB table size.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is funded by the King Abdul-Aziz City for Science and Technology (KACST), Kingdom of Saudi Arabia [Grant no. PS-36-344].

References

[1] M. Pathan, R. Buyya, and A. Vakali, "Content delivery networks: State of the art, insights, and imperatives," in Content Delivery Networks, pp. 3–32, Springer, 2008.
[2] G. Pallis and A. Vakali, "Insight and perspectives for content delivery networks," Communications of the ACM, vol. 49, no. 1, pp. 101–106, 2006.
[3] "Cisco Visual Networking Index: Forecast and Methodology, 2016–2021," The Zettabyte Era, 2017.
[4] H. Jiang, J. Li, Z. Li, and J. Liu, "Efficient hierarchical content distribution using P2P technology," in Proceedings of the 2008 16th International Conference on Networks, ICON 2008, India, December 2008.
[5] F. Alghamdi, A. Barnawi, and S. Mahfoudh, "Fog-Based CDN Architecture Using ICN Approach for Efficient Large-Scale Content Distribution," in Advances in Information and Communication Networks, vol. 887 of Advances in Intelligent Systems and Computing, pp. 685–696, Springer International Publishing, 2019.
[6] M. Day, B. Cain, G. Tomlinson, and P. Rzewski, "A Model for Content Internetworking (CDI)," RFC Editor, RFC 3466, 2003, https://tools.ietf.org/html/rfc3466.
[7] H. Yin, X. Liu, G. Min, and C. Lin, "Content delivery networks: A bridge between emerging applications and future IP networks," IEEE Network, vol. 24, no. 4, pp. 52–56, 2010.
[8] M. Wang, P. P. Jayaraman, R. Ranjan et al., "An overview of cloud based content delivery networks: research dimensions and state-of-the-art," in Transactions on Large-Scale Data- and Knowledge-Centered Systems, vol. 9070 of Lecture Notes in Computer Science, pp. 131–158, Springer, Heidelberg, Germany, 2015.
[9] C. Papagianni, A. Leivadeas, and S. Papavassiliou, "A cloud-oriented content delivery network paradigm: modeling and assessment," IEEE Transactions on Dependable and Secure Computing, vol. 10, no. 5, pp. 287–300, 2013.
[10] F. Bronzino, R. Gaeta, M. Grangetto, and G. Pau, "An adaptive hybrid CDN/P2P solution for Content Delivery Networks," in Proceedings of the 2012 IEEE Visual Communications and Image Processing, VCIP 2012, USA, November 2012.
[11] X. Guan and B.-Y. Choi, "Push or pull? Toward optimal content delivery using cloud storage," Journal of Network and Computer Applications, vol. 40, no. 1, pp. 234–243, 2014.
[12] M. Pathan, R. K. Sitaraman, and D. Robinson, Eds., Advanced Content Delivery, Streaming, and Cloud Services, John Wiley & Sons, Inc., Hoboken, NJ, USA, 2014.
[13] B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, "A survey of information-centric networking," IEEE Communications Magazine, vol. 50, no. 7, pp. 26–36, 2012.
[14] G. Carofiglio, G. Morabito, L. Muscariello, I. Solis, and M. Varvello, "From content delivery today to information centric networking," Computer Networks, vol. 57, no. 16, pp. 3116–3127, 2013.
[15] A. Detti, M. Pomposini, N. Blefari-Melazzi, and S. Salsano, "Supporting the Web with an information centric network that routes by name," Computer Networks, vol. 56, no. 17, pp. 3705–3722, 2012.
[16] F. Zhang, "Comparing alternative approaches for mobile content delivery in information-centric networking," in Proceedings of the 16th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2015, USA, June 2015.
[17] D. R. Cheriton and M. Gritter, TRIAD: A Scalable Deployable NAT-based Internet Architecture, 2000.
[18] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, "Networking named content," in Proceedings of the 5th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT '09), pp. 1–12, December 2009.
[19] J. Chen, H. Xu, S. Penugonde, Y. Zhang, and D. Raychaudhuri, "Exploiting ICN for efficient content dissemination in CDNs," in Proceedings of the 4th IEEE Workshop on Hot Topics in Web Systems and Technologies, HotWeb 2016, pp. 14–19, USA, October 2016.
[20] W. You, B. Mathieu, and G. Simon, "How to make content-centric networks interwork with CDN networks," in Proceedings of the 2013 4th International Conference on the Network of the Future, NoF 2013, Republic of Korea, October 2013.
[21] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the 1st ACM Mobile Cloud Computing Workshop, MCC 2012, pp. 13–15, Finland, August 2012.
[22] S. Yi, Z. Hao, Z. Qin, and Q. Li, "Fog computing: Platform and applications," in Proceedings of the 3rd Workshop on Hot Topics in Web Systems and Technologies, HotWeb 2015, pp. 73–78, USA, November 2015.
[23] T. H. Luan, L. Gao, Z. Li, Y. Xiang, and L. Sun, "Fog computing: focusing on mobile users at the edge," Networking and Internet Architecture, 2015, https://arxiv.org/abs/1502.01815.
[24] L. M. Vaquero and L. Rodero-Merino, "Finding your way in the fog: Towards a comprehensive definition of fog computing," Computer Communication Review, vol. 44, no. 5, pp. 27–32, 2014.
[25] I. Stojmenovic, "Fog computing: A cloud to the ground support for smart things and machine-to-machine networks," in Proceedings of the 2014 Australasian Telecommunication Networks and Applications Conference, ATNAC 2014, pp. 117–122, Australia, November 2014.


[26] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A Comprehensive Survey on Fog Computing: State-of-the-art and Research Challenges," IEEE Communications Surveys & Tutorials, vol. 20, no. 1, 2017.
[27] M. Abolhasan, T. Wysocki, and E. Dutkiewicz, "A review of routing protocols for mobile ad hoc networks," Ad Hoc Networks, vol. 2, no. 1, pp. 1–22, 2004.
[28] T. Clausen and P. Jacquet, "Optimized Link State Routing Protocol (OLSR)," RFC 3626, 2003, https://tools.ietf.org/html/rfc3626.
[29] "The Network Simulator ns-2, 2018," https://www.isi.edu/nsnam/ns/.
[30] L. Breslau and P. Cao, "On the Implications of Zipf's Law for Web Caching," in Proceedings of the 3rd International WWW Caching Workshop, p. 11, 1998.
