

Seamless integration of Cloud and Fog networks

Igor Duarte Cardoso
Instituto de Telecomunicações
3800-193 Aveiro, Portugal
Email: [email protected]

João Paulo Barraca
University of Aveiro
3800-193 Aveiro, Portugal
Email: [email protected]

Carlos Goncalves
NEC Laboratories Europe
69115 Heidelberg, Germany
Email: [email protected]

Rui L. Aguiar
University of Aveiro
3800-193 Aveiro, Portugal
Email: [email protected]

Abstract—This work provides a way to merge Cloud Computing infrastructures with traditional or legacy network deployments, leveraging the best of both worlds and enabling logically centralized control. A solution is proposed to extend existing Cloud Computing software stacks so they are able to manage networks outside the Cloud Computing infrastructure, by extending the internal, virtualized network segments. This is useful in a variety of use cases such as incremental legacy-to-Cloud network migration, hybrid virtual/traditional networking, centralized control of existing networks, bare metal provisioning, and even offloading of advanced services from typical home gateways into the operator. By using what we call External Drivers, any organization can develop its own driver to support new, specific networking equipment. Our concept solution is prototyped on top of OpenStack, including changes to the API, command line interface and other mechanisms. Test results indicate that there are low penalties on latency and throughput, and that provisioning times are reduced in comparison with similar maintenance operations on traditional computer networks.

I. INTRODUCTION

Trends like Cloud Computing, which exploit Virtualization concepts, have come to dominate the service provision market, making the deployment of services easier and more flexible. Telecommunications providers have also started to make use of Cloud Computing, alone or applied to Network Virtualization or Software-Defined Networking (SDN), with the purpose of simplifying tasks and operations and eventually decreasing the time-to-market of new services. With Network Virtualization, providers have ultimate control and flexibility, and Cloud tenants are now able to design and implement their own network topologies, with considerable flexibility, and attach them to their own Virtual Machines (VMs). Furthermore, it enables hybrid environments where networks are composed of remotely located virtual instances and locally located servers.

What happens nowadays with existing Cloud Computing stacks is that, to achieve the desired level of elasticity and ease of maintenance, the underlying network resources to be virtualized need to be as homogeneous as possible, while supporting specialized control interfaces (e.g. OpenFlow). Homogeneity of resources can indeed alleviate a set of problems and improve efficiency in multiple aspects. However, it also limits the use cases that can be fulfilled by the Cloud Computing provider in comparison to a deployment consisting of heterogeneous network resources. Given these premises, this work seeks the midpoint between typical Cloud Computing infrastructures with Network Virtualization, based on homogeneous networking resources, and traditional network deployments that rely on heterogeneous networking equipment (different types, brands, models), exploiting concepts that parallel typical SDN solutions while allowing the inclusion of legacy, non-OpenFlow-enabled devices.

We believe that this close integration is beneficial and actually required. This work proposes a solution that integrates existing Cloud Computing software stacks with already existing networks composed of devices having minimal management functions. Thus, it aims at extending existing solutions to provide a better, more complete solution as a whole, combining the best of both worlds regarding integration of legacy networks and virtualization. We wish to make it possible to extend a network segment (usually virtual and deployed inside a Cloud Computing / Network Virtualization software stack) with another network segment that can live anywhere outside the first deployment, keeping control logically centralized just as with existing Cloud resources. Moreover, the objective of keeping heterogeneity a core property also drives our solution. Without this property, the work essentially becomes impossible to deploy in the real world, precluding the accomplishment of any use case that justifies its own development.

The remainder of this document is organized in five sections. Section II describes the state of the art related to the integration of networks into virtual computation environments, with a focus on the integration of legacy devices. Section III explains the core motivation for this work, as well as the main use cases that can be made possible. Section IV proposes a solution able to materialize, or empower, the use cases mentioned. Section V evaluates and analyses the solution as implemented on top of OpenStack. Finally, Section VI concludes, providing some final thoughts on the solution and ideas for future work.

II. STATE OF THE ART

The work we propose fits into current developments related to the integration of so-called Fog Networks, that is, the networking environment that surrounds a datacenter but is not directly managed by the cloud orchestration tool. Several other authors have aimed at the same, or similar, objectives; we describe the most relevant.

Bruce Davie presents [1], [2] a way of extending virtual networks from the datacenter to the outside, spanning both virtual and physical resources, while leveraging logically centralized management and control. The simplest use case achievable with the proposed implementation consists of extending network segments (OSI Layer 2) that interconnect existing VM instances towards the physical network that encompasses physical machines. The authors have gone even further and implemented services in higher layers, namely Distributed Logical Routing [2] (OSI Layer 3 (L3)). Despite the qualities of this solution, it has a few shortcomings. First, it has no integration with existing Cloud Computing network and compute infrastructure stacks. Second, it requires specialized hardware at the physical side of the network. Specifically, it requires switches that support the modern Open vSwitch Database Management Protocol (OVSDB), a fact that precludes integration with legacy devices.

Farias et al. propose [3] a way to use existing legacy infrastructures simultaneously with new/experimental/future network architectures, software and protocols, by making use of OpenFlow [4]. The solution presented intends to keep the legacy part of the network intact, while enabling new network functionality and concepts through modern technologies, making it possible for entities to incrementally migrate to new technologies and test new approaches without sacrificing current infrastructure stability. However, it targets infrastructures where modern networking equipment is already in place. If we take as an example a company having a legacy network infrastructure, this solution is unable to convert or integrate that network. Also, Cloud Computing integration has not been addressed or discussed, making this work difficult to assess.

Chan and Martin propose [5] a way to leverage a virtualized networking environment on top of a physical, legacy network, with the objective of optimizing computer networking lab classes on multiple fronts. This solution is an example of how to drastically improve networking lab classes in terms of flexibility, technologies offered, Capital Expenditure (CapEx) and Operating Expenditure (OpEx). However, there is no integration with Cloud Computing solutions to provide Cloud features. Another disadvantage is that this solution has a limited view of the network, i.e., from the virtual server towards the physical devices. Also, logically centralized control of physical devices is not really addressed or discussed.

In his Master's Dissertation, Filipe Manco envisioned an advanced manner of virtualizing an entire legacy network infrastructure and attaching it to a Cloud Computing provider [6]. He mapped physical network elements to logical entities in order to create a network overlay on top of the legacy network. This design effectively addresses virtualization of legacy networks composed of heterogeneous networking elements and technologies, enabling new OSI Layer 2 (L2) networks on top of existing datacenter resources. Cloud Computing integration was also taken into account, with OpenStack chosen as the proposed software stack. Implementation-related details can be found in OpenStack blueprints [7], [8]. However, the architecture may become administratively complex, which may increase the complexity of problem tracing and performance evaluation, inverting OpEx expectations. Also, no final implementation was developed and tested to ascertain the feasibility of this design.

A very recent proposal has been submitted to the OpenStack community to enable what the authors call “External Attachment Points” [9]. The root motivation for this work lies in the fact that there is no well-defined way to connect devices not managed by OpenStack directly into a Neutron network. The main use cases presented to justify this undertaking are the ability to create L2 gateways that make it possible to extend existing broadcast domains of Neutron networks, via existing datacenter switches. However, the proposal has limitations in the following aspects: it limits itself to attachment points created by administrators, which increases the complexity of legacy-to-Cloud use cases, especially for public Clouds; it does not address how distant network attachments can be while still functioning properly; and even though it allows a fair degree of heterogeneity by supporting switches' Virtual LANs (VLANs) or Open vSwitch (OVS) gateways, it is still not heterogeneous enough for clients that make extensive use of legacy networking equipment.

III. MOTIVATION AND USE CASES

Cloud Computing paired with SDN, where network devices are represented as virtual instances, provides very interesting and useful features to customers. Cloud tenants or administrators are now able to design and implement their own network topologies, with considerable flexibility, and attach them to their own virtual machines. However, in order to achieve the desired level of elasticity and ease of maintenance, the resources to be integrated need to be as homogeneous as possible. Although that is not really a disadvantage for new deployments, from the perspective of a Telecommunications provider there are undesirable consequences when providing a service like this to clients, namely overhead and performance concerns [10] due to virtualization, and lack of backward compatibility with existing equipment. Also, upgrading all Customer Premises Equipments (CPEs) is an expensive solution, as a single provider may have hundreds of thousands, or even millions, of customers, some with outdated devices.

There are several use cases we envision that could benefit from a Telecommunications provider having a scalable solution integrating virtual and legacy networks, that is, where routers, CPEs or servers can be orchestrated together with Cloud resources.

A. Incremental Legacy to Cloud Migration

By changing to a Cloud Computing provider that supports the work we present, companies can move all resources to become virtual instances except the ones which must currently be kept as real hosts. Then, Cloud-provided networks would be extended to merge legacy resources that were not moved to the Cloud. This network extensibility only requires calling special operations on the Cloud Computing provider, and making sure there is a reachable point in the legacy network that is externally configurable/manageable in order to provide the linking point between the two. From the point of view of the company, their datacenter would be physically split, but topologically united.

B. Centralized Network Control and Flexibility

By having parts of the networking Cloud spread across different physical/legacy segments (not in the Cloud, so to speak), while also offering the ability to reconfigure these parts via a logically centralized Application Programming Interface (API) with the benefits of Cloud Computing, network administration becomes more agile in scenarios where the infrastructure keeps changing. One of the most useful realizations of centralized networking control and flexibility is the creation of a virtual campus. The ability to change network attributes in a global manner (deployed across the whole location, company, campus), e.g. L3 addressing, is an example of what can be enabled with this work. Another is easily setting up networks for guests or for research testbeds, on top of any connectivity interface available. This is a generic use case for SDN, which we consider valid in our case, enhanced by the fact that legacy devices can be considered.

C. Virtual Customer Premises Equipment

Some Customer Premises Equipments (CPEs), like Home Gateways (HGWs) or Residential Gateways (RGWs), are used in households to provide users with broadband Internet access via an Internet Service Provider (ISP), as well as modern services like rewinding of TV channel streams. There have been several changes to the typical HGW during the last years. Some have defended the idea of making the HGW more complex, like Rückert et al. [11], others of making it simpler, such as [12]. Cruz et al. [13] present a detailed framework for a fully virtualized HGW/RGW. Virtual HGWs can become a reality that is simple to deploy, offloading some services, e.g. the local DHCP server, to the ISP side. Thus, ISPs can directly manage HGWs, which may prove helpful to reduce OpEx and the time to market of new services, fight widespread security threats (as stated in [12]), etc. Besides HGWs, enterprise CPEs can likewise be integrated in the same scenario, with the same advantages and features as Network Function Virtualization (NFV).

D. Bare Metal Provisioning

Due to the overhead of operating virtual machines in an Infrastructure as a Service (IaaS) infrastructure (although there have been improvements such as [14]), the ability to run heavy workloads with high performance becomes limited or compromised. Therefore, in scenarios where an IaaS infrastructure is in place, the need to provide bare metal machines, i.e. operating systems installed on top of physical machines, has increased. Moreover, bare metal machines require network connectivity as well. Instead of implementing and deploying a specific way to reach and boot these machines via the network, this work allows the same functionality used for VM networking to be reused for that end.

E. Service Chaining to External Entities

Existing equipment and services can be reused (which goes towards the incremental legacy-to-Cloud migration) for Service Chaining purposes (e.g. a live Firewall or Anti-Virus). Initially, service chains were deployed in a rigid way, i.e., as specialized appliances interconnected in predefined ways to achieve a desired purpose [15]. Recently, with the advent of SDN and NFV, service chains have become more flexible, with less associated OpEx, while easier to deploy and manage. However, legacy service chains are still the dominant portion of Telecommunications providers' core network infrastructures because it is impracticable to easily, and quickly, replace them with modern technology [15]. Consequently, a compromise between modern and legacy service chaining becomes an interesting possibility. Existing hardware, providing specific services, can thus be reused in a Cloud context alongside “software-defined” service chaining features implemented by resorting to, e.g., traffic steering [16], with minimum effort.

F. An Alternative to Virtual Private Networks

A possible end scenario is the deployment of an L2 Virtual Private Network (VPN) with all the major advantages offered by this work: flexibility, control, Cloud integration and compatibility with heterogeneous equipment and technologies. Bringing the Local Area Network (LAN) to the Wide Area Network (WAN) is a possible use case for L2 VPNs, which allows, for instance, playing a LAN-only multiplayer video game over the Internet.

Certainly, other use cases can be extracted from this work, although some may not differ much from the previous ones. In summary, the main points that this work leverages are: a) it allows typically virtualized networks to be extended to the outside (and vice-versa); b) it fits into a Cloud Computing infrastructure that provides other desirable advantages like multi-tenancy or, possibly, service chaining features; c) it allows bringing external services to the Cloud; and d) it opens up the possibility to develop advanced services on top of it, like legacy network state monitoring.

IV. INTEGRATING CLOUD AND FOG NETWORKS

This solution is defined by a set of concepts which, together, are able to materialize or empower the use cases presented earlier. We name them Network Segment Extension, Network Connection Awareness and Auto Configuration. These concepts drive the general architecture we devised, and contribute to the flexibility and capability of our solution.

Concept 1: Network Segment Extension

At the root of our solution is the concept of an L2 Network Segment Extension. These are L2 network segments “living” in a virtualized network infrastructure, possibly managed by an IaaS provider, that can be extended beyond that basic infrastructure. The interpretation of Figure 1 is that the virtualized network segment is extended beyond the Cloud by bridging it with the remote network segment. Consequently, all hosts, whether connected to the virtualized or to the remote part of the network, will be in the same broadcast domain.

Fig. 1. A virtualized network segment extends into a remote network.

Concept 2: Network Connection Awareness

It is beneficial to keep the state of the whole network up to date. Given that, this work is designed to be aware of all main entities “connected” to the Cloud infrastructure. Specifically, it has knowledge about end devices which are externally connected, i.e. L3 hosts in the remote network segment, and about the devices that give connectivity to the former. Figure 2 illustrates this transparency by showing what an administrator should generally be able to see.


Fig. 2. An administrator is able to have a complete view over the network.

Concept 3: Auto Configuration

This solution takes care of the necessary configuration actions that need to be performed on the devices responsible for fulfilling network segment extension at the remote infrastructure's side. Additional configurations can be made via a network state reporting interface which allows specific events to be reported towards the IaaS provider. These reports are useful for reactive changes in the network, security policies, post mortem forensics, or simply for other administrative purposes. The solution is implicitly flexible because it makes available a generic interface for customizing each new network segment, depending on the technologies and configuration options available in each device. An example of this flexibility occurs when extending networks via a device that is actually a wireless Access Point (AP): this solution is flexible to the point of setting up different Service Set Identifiers (SSIDs) with different security configurations. Flexibility also means that segments may belong to different networks or to the same network. Finally, the solution is heterogeneous because it is devised to be compatible with virtually any kind of device. The main idea is that the support for devices is driver-based and pluggable, as depicted in Figure 3. Given that, there is a clear separation between the overall features and their implementation.

Fig. 3. Heterogeneity attained by developing drivers for devices.

A. Architecture

A Cloud Computing framework, able to orchestrate Cloud instances and their networks, can be improved in a way that enables integration of the so-called Fog networks, enhancing its instantiation by supporting the use cases discussed in the previous section. To this end, and by taking into consideration the previous concepts, a generic solution is proposed. Its main objective is to provide a clear and simple method to integrate external devices into the virtualization framework, enabling direct communication between VMs and bare hosts residing in foreign network segments, supported by networking devices including legacy (non-OpenFlow-enabled) ones. The solution aims to give Cloud consumers the ability to extend their virtual networks out of the datacenter, and vice-versa, in a self-service manner. Furthermore, dedicated components have the responsibility of dealing with different networking devices in an abstract and extensible manner. With this design, a Cloud Computing software stack can be used to control distant network segments, merging them with virtualized counterparts in an automated manner. Several root concepts drive the design of this solution. Figure 5 provides an overview of the architecture, showing how the entities are organized.

Fig. 4. Flexibility attained by configuring the device with any option supported by its driver.

Fig. 5. An overview of the proposed architecture.

The Cloud Computing framework is called a Network Virtualization Stack (NVS). An example of an NVS is OpenStack, a Free and Open-Source Software (FOSS) Cloud Computing software stack, along with Neutron, its networking component. The solution we propose is named External Port Extension (EPE) and translates into an extension to an NVS which enables external ports (or hosts) to be brought into the Cloud. NVS and EPE together form a deployable solution. All the new entities developed can be considered part of the EPE; from the NVS' sole perspective, though, the EPE is only an interface for extended functionality. The legacy (non-OpenFlow-enabled) devices that support foreign networks, which can be any switch, router or other equipment with network bridging capabilities and a remote management interface accessible by IP, are called Attachment Devices (ADs). Each AD must be reachable via the Internet Protocol (IP) network to leverage network segment extension, and must be assigned to a specific driver in order to translate the operations ordered by the EPE. This driver is called an External Driver (ED) and has all the necessary instructions to allow interaction with specific kinds of ADs. The drivers are managed and instantiated by a component we name the Network Point Controller (NPC), which expects all drivers to follow the same interface so configurations and operations can be translated for each AD. Thus, the NPC handles the communication with ADs to exchange state and configurations, translating abstract operations into device configurations. Each AD can provide multiple logical entities called Network Attachment Points (NAPs). They can be interpreted as simple, logical L2 bridges, configured inside the parent AD to bridge a set of its interfaces with a Cloud network remotely reachable by the AD. A Layer 2 Gateway (L2GW) [17] is a possible materialization of a NAP. This entity can be of a special type when it is also capable of reporting network state information, which can be utilized to trigger actions inside the NVS or EPE for specific reports. Finally, Network External Ports (NEPs) refer to any external IP host: a server, smartphone, sensor, camera, etc., directly connected to or known by the AD, that can be brought into the domain of a Cloud network. NAPs are made up of a set of NEP resources.
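To make the driver contract concrete, the following is a minimal sketch of the interface the NPC could expect every ED to implement; the class and method names are our own assumptions for illustration, not the actual EPE code.

```python
# Illustrative sketch only: class and method names are assumptions, not the
# actual EPE code. It captures the contract the NPC expects from every driver.
from abc import ABC, abstractmethod


class ExternalDriver(ABC):
    """Interface an External Driver (ED) implements for one kind of Attachment Device."""

    def __init__(self, ad_ip_address: str):
        # Every AD must be reachable over IP; the driver keeps that management address.
        self.ad_ip_address = ad_ip_address

    @abstractmethod
    def attach(self, nap_identifier: str, technology: str, network_id: str) -> None:
        """Create a NAP: bridge the chosen AD interfaces with the given Cloud network."""

    @abstractmethod
    def detach(self, nap_identifier: str) -> None:
        """Undo the configuration of a previously attached NAP."""

    @abstractmethod
    def monitor(self, nap_identifier: str) -> None:
        """Start network state reporting for NAPs that also act as report points."""
```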

B. Data Model

The data model for this solution requires the support of ADs, NAPs, NEPs, networks and network ports (typically for VMs or other Cloud-provided services). The class diagram presented in Figure 6 illustrates how these entities relate to each other.

Fig. 6. Class Diagram of solution and directly-related classes.

The first new data entity developed to support this work is the AD. The data model for this entity is presented in Table I. The IP address attribute is used to reach the device, while the driver attribute is used to instantiate the proper software component to interact with and configure the device. The NAP data entity is presented in Table II. The identifier attribute is used to pass configuration options to the driver responsible for the NAP's device, and is free to be used by the Cloud consumer. The technology field guarantees that both the device and the Cloud Computing framework agree on the technology used for extending the network, e.g. Generic Routing Encapsulation (GRE) or Virtual Extensible LAN (VXLAN). Finally, the NEP data entity is presented in Table III and maps external hosts to the Cloud-managed network. The remaining entities, Networks and Ports, are assumed to be part of the NVS, where a port is anything that can be connected to a Cloud network: a VM or another network service.

Name         Type    Access   Default  Validation
ip_address   string  RW, all           ip_address
driver       string  RW, all           string

TABLE I. DATA MODEL FOR ATTACHMENT DEVICES.

Name          Type    Access   Default    Validation
device_id     string  RW, all             N/A
identifier    string  RW, all             string
technology    string  RW, all             string
network_id    string  RW, all             uuid_or_none
index         int     RO, all  generated  int
report_point  string  RW, all  True       convert_to_boolean

TABLE II. DATA MODEL FOR NAPS.
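For illustration, the fields of Tables I to III can be pictured as the plain Python data classes below; the actual implementation extends Neutron's database models, so this representation is only an assumption.

```python
# Sketch of the data model of Tables I-III as plain dataclasses (assumed form,
# not the actual Neutron models used by the implementation).
from dataclasses import dataclass
from typing import Optional


@dataclass
class AttachmentDevice:
    ip_address: str            # used to reach the device
    driver: str                # selects the External Driver to instantiate


@dataclass
class NetworkAttachmentPoint:
    device_id: str             # parent Attachment Device
    identifier: str            # driver-specific configuration handle, free for the consumer
    technology: str            # e.g. "gre" or "vxlan"
    network_id: Optional[str]  # Cloud network being extended
    index: int                 # generated, read-only
    report_point: bool = True  # whether this NAP also reports network state


@dataclass
class NetworkExternalPort:
    mac_address: str
    attachment_point_id: Optional[str]
    port_id: Optional[str]
```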

C. Interfaces

This solution inserts new operations in the NVS, meant for Cloud tenants or administrators. The first one is attaching a NAP, through which an order to extend a network segment is issued. The NVS propagates the new request down the architecture, reaching the NPC, which instantiates the desired ED and configures the new AD. Detaching a NAP is the opposite operation, following the same path in the architecture. The rest are simple Create, Read, Update, Delete (CRUD) operations against ADs, NAPs and NEPs. It must also be noted that, when a Network Report Point (NRP) is attached, the NPC also calls a special Monitor operation on the ED to carry out network state reporting. To keep this solution open and able to be further integrated, without breaking any existing functionality provided by the NVS, existing interfaces were not changed; only new ones were explicitly created. Consumers will make use of the new features only if they intend to. The programmatic interface acts as an API for tenants or administrators. The interface specified between the NVS and the EPE mirrors the operations that consumers can request via the programmatic API.
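As a hypothetical walk-through of these operations (the resource paths, field names and token handling below are assumptions for illustration, not the extension's published API), a tenant could drive the EPE as follows:

```python
# Hypothetical usage of the new EPE operations; endpoint paths and payload keys
# are assumptions, not the actual extension's API.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# 1. Register the Attachment Device (plain CRUD resource).
ad = requests.post(f"{NEUTRON}/attachment-devices",
                   json={"attachment_device": {"ip_address": "192.0.2.10",
                                               "driver": "openwrt"}},
                   headers=HEADERS).json()["attachment_device"]

# 2. Attach a NAP: extend Neutron network <net-id> through that device.
nap = requests.post(f"{NEUTRON}/network-attachment-points",
                    json={"network_attachment_point": {"device_id": ad["id"],
                                                       "identifier": "wlan0:guest-ssid",
                                                       "technology": "gre",
                                                       "network_id": "<net-id>"}},
                    headers=HEADERS).json()["network_attachment_point"]

# 3. Detach is the opposite operation: delete the NAP resource.
requests.delete(f"{NEUTRON}/network-attachment-points/{nap['id']}", headers=HEADERS)
```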

D. Integration with OpenStack

In order to validate and test our solution, a reference implementation was developed on top of OpenStack/Neutron, acting as an NVS.

From a deployment perspective, Figure 7 shows OpenStack, extended with an implementation of the EPE solution, and its corresponding API. A total of five Attachment Points are spread amongst two ADs. The first AD is an OpenWrt-flashed HGW providing three Attachment Points, two of them materialized as different wireless SSIDs and the third one materialized as a specific Ethernet port. A cloud shape shows how the laptop clients connected to the wireless AP's SSID are connected to the rest of the network elements (VMs) living inside OpenStack, in the same broadcast domain. Figure 7 also shows a tunnel established between OVS' br-int and the OpenWrt-flashed HGW AD, fulfilling a network segment extension. Analogous clouds and tunnels can be drawn for the remaining Attachment Points of Figure 7, but have been omitted to keep the figure legible. Concerning the data model, the network and port resources were left untouched because they already exist in OpenStack, while the new data classes were added. Regarding the API, the operations specified in Section IV-C were added and the complete data model was exposed via a CRUD interface. Full support was developed for the Command Line Interface (CLI) project for Neutron, python-neutronclient, to make use of the new operations offered by the EPE.


Name                 Type    Access   Default       Validation
mac_address          string  RW, all                mac_address
attachment_point_id  string  RW, all  uuid_or_none  string
port_id              string  RW, all  uuid_or_none  string

TABLE III. DATA MODEL FOR EXTERNAL PORTS.

Fig. 7. An example of a Neutron External Port Extension deployment.

E. External Drivers

Two working drivers were developed in this work: OpenWrt and Cisco EtherSwitch. Only the GRE tunneling technology is available for both drivers. The first driver interfaces with typical switches or routers running OpenWrt (https://openwrt.org). Compatibility with this driver must be checked for each device, because OpenWrt runs on a wide array of switches and routers. Connections made by the driver can be established either through ssh or telnet. Internally, the OpenWrt External Driver makes heavy use of the UCI system [18] to inject network configurations into the AD. These configurations primarily create new VLANs, reconfiguring the integrated switch and wireless radios to assign ports and SSIDs to these VLANs. The Cisco EtherSwitch IOS driver interfaces with the Cisco EtherSwitch line of switching modules. Connections are made through telnet. Internally, VLANs are created and then associated to a bridge group [19]. Besides that, GRE tunnels are created and associated to the bridge group as well.
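As an illustration of the kind of configuration the OpenWrt driver injects, the sketch below pushes UCI commands over ssh; the section and option names, the chosen VLAN/port layout and the helper itself are placeholders that depend on the OpenWrt release, not the actual driver code.

```python
# Rough sketch of an OpenWrt-style attach, assuming SSH access as root and the
# "uci" tool on the AD. Section/option names are placeholders, not the real driver.
import subprocess


def extend_segment_openwrt(ad_ip: str, vlan_id: int, gre_remote: str) -> None:
    """Create a VLAN on the AD's integrated switch and bridge it with a GRE tunnel."""
    uci_commands = [
        # Dedicated VLAN on the integrated switch for the extended segment.
        "uci set network.cloud_vlan=switch_vlan",
        "uci set network.cloud_vlan.device=switch0",
        f"uci set network.cloud_vlan.vlan={vlan_id}",
        "uci set network.cloud_vlan.ports='1 2'",
        # GRE tunnel towards the Cloud side (the OVS br-int endpoint).
        "uci set network.cloud_gre=interface",
        "uci set network.cloud_gre.proto=gre",
        f"uci set network.cloud_gre.peeraddr={gre_remote}",
        "uci commit network",
        "/etc/init.d/network restart",  # applying the config restarts connectivity (~5 s)
    ]
    subprocess.run(["ssh", f"root@{ad_ip}", " && ".join(uci_commands)], check=True)
```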

V. EVALUATION AND ANALYSIS

We evaluated our solution by carrying out tests that measure different characteristics of the implementation, considering its deployment in a simplified real-world scenario. We consider several hosts, connected to the Neutron-managed networks as External Ports, having access to the External Network like any other host, such as a Nova virtual instance. In our scenario there is no routing of packets between the server running OpenStack and the AD that provides NAPs, so as to focus the measurements, as much as possible, on the network segment extension itself. Figure 8 shows this test scenario, where the OpenStack server is directly connected to the AD that will provide NAPs. It must be noted that, for each physical machine and VM that form the test scenario, a specific amount of main memory is reserved, and there is no overcommit, so that memory swapping does not occur.

Fig. 8. Representative diagram of the test scenario

A Neutron network is already provisioned inside the OpenStack node presented in Figure 8, with two Nova instances running. At the right-hand side, an OpenWrt switch/router is depicted. The server running OpenStack has an Intel Core i5-2450M CPU. Main memory totals 8 GB at 1333 MHz (dual-channel). The operating system in use is Arch Linux running on top of the Linux kernel 3.17.1 (64-bit). Finally, it uses VirtualBox 4.3.18 to run an Ubuntu VM, on top of which DevStack (http://devstack.org/) is installed. This VM is set to acquire network connectivity by bridging against the host's Network Interface Controller (NIC). All code developed in this work has been applied on top of Neutron's stable/icehouse git branch as of mid-October 2014, which is the basis for all tests. The VM running DevStack is an Ubuntu 14.04.1 LTS on top of Ubuntu's linux-generic kernel 3.13.0-37 (64-bit). One CPU core is assigned to this VM and a total of 5 GB of main memory is allocated to it. Both Intel VT-x and Intel EPT hardware virtualization technologies (http://ark.intel.com/Products/VirtualizationTechnology), compatible with the host's CPU, are enabled under VirtualBox.

The VM instances running in Nova are Ubuntu 14.04.1 LTS images taken from Ubuntu Cloud Images (http://cloud-images.ubuntu.com/trusty/current), daily build 20141016. Depending on the test, either one or two of these machines were provisioned under Nova. Each VM has a total of 512 MB of main memory allocated to it and the kernel is Ubuntu's linux-generic kernel 3.13.0-37 (32-bit). The Virtual NIC (vNIC) driver in use by the instances is virtio_net (http://www.linux-kvm.org/page/Virtio), for improved network performance in virtualized guests/instances. Because these instances are already provisioned inside a virtualized environment, the default hypervisor (KVM, http://www.linux-kvm.org) does not make use of hardware acceleration, instead defaulting to the Tiny Code Generator (TCG, http://wiki.qemu.org/Documentation/TCG). The Attachment Device is a NETGEAR WNDR3700v1 having 64 MB of Random-Access Memory (RAM). Barrier Breaker 14.07 is the OpenWrt version, which uses the Linux kernel 3.10.49. The remote computer is a traditional PC with the latest Ubuntu 14.04.1 LTS, linux-generic 3.13.0-37 (32-bit), installed and all packages updated to the latest version as of 20th October, 2014. The CPU is an Intel Celeron D with a clock speed of 2.66 GHz. Main memory totals 960 MB at 400 MHz.

A. Attach and Detach Latency

The first aspect to test is how long administrators/tenants need to wait before their orders for attaching or detaching Attachment Points become effective. Two processes have been measured: Attach and Detach. In order to faithfully measure these times, with a guaranteed upper bound on precision, a timer is started on an additional computer directly connected to the Attachment Device at the exact moment the API request to attach a network is sent to Neutron. This timer has been programmed to keep sending network pings to Neutron's virtual router IP address in the network to be attached. Pings are sent every 100 milliseconds and, as soon as a reply is obtained, the timer is stopped and the total latency is obtained. Thus, the upper bound on precision when measuring the setup latency is 100 milliseconds. Similarly, for the Detach latency, the timer stops as soon as pings stop being received.
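A minimal sketch of this measurement loop (an assumed reimplementation, not the authors' script) could look as follows:

```python
# Sketch of the attach-latency timer: start timing when the attach request is
# issued, then ping the Neutron virtual router every ~100 ms until the first
# reply, giving the 100 ms precision bound described above.
import subprocess
import time


def measure_attach_latency(router_ip: str, timeout: float = 60.0) -> float:
    start = time.monotonic()                      # attach API request is sent at this instant
    while time.monotonic() - start < timeout:
        reply = subprocess.run(["ping", "-c", "1", "-W", "1", router_ip],
                               stdout=subprocess.DEVNULL).returncode == 0
        if reply:
            return time.monotonic() - start       # first reply: extension is effective
        time.sleep(0.1)                           # 100 ms between probes
    raise TimeoutError("no connectivity after attach request")
```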

Both the Attach and Detach latency tests have been repeated a total of 10 times each. Tables IV and V present the average and standard deviation of the Attach and Detach latency tests, respectively.

Average         8.80 s
Standard Dev.   0.25 s

TABLE IV. ATTACH TIME.

Average         3.97 s
Standard Dev.   0.23 s

TABLE V. DETACH TIME.

Attach times have been observed to be around the 9-second mark. Waiting 9 seconds to insert a new, external network segment into a virtual network managed by OpenStack is a significant achievement when looking at the workarounds that usually need to be made. For instance, and considering that this work is best suited for existing network equipment, usually without support for SDN protocols such as OpenFlow, rearranging networks so that a special host becomes available in a specific network may take anywhere from some hours to weeks. The Detach time is around the 4-second mark, less than half of the Attach time. What has been said for the Attach time also applies in this case. The difference between the two is explained by the fact that, when attaching, the AD must restart the OpenWrt network service to apply all configurations, imposing around five more seconds before connectivity is back. When detaching, this additional time is not observed because the device drops connectivity as soon as the network service is ordered to restart.

B. Traffic Latency

Traffic Latency can be of two types: Local and Remote. Local Latency measures the latency between two VMs hosted by Nova in a standard out-of-the-box DevStack deployment. The relevance of this test lies in the comparison between the latency among different VMs hosted in a single node and the latency between one of these VMs and an External Port physically distant from the Nova node, i.e. the Remote Latency.

Both the Local and Remote Latency tests (including from OpenStack itself to one of its Nova VMs) have been carried out by sending a total of 100 pings between hosts. Local latency tests are hereafter named “VM - VM”, based on the fact that communication takes place between Nova VMs, bidirectionally. Similarly, Remote latency tests are named “VM - PM”. Tables VI and VII present the average and standard deviation of the Local and Remote Latency tests, respectively.

Average         1.10 ms
Standard Dev.   0.34 ms

TABLE VI. LOCAL LATENCIES.

Average         1.42 ms
Standard Dev.   0.24 ms

TABLE VII. REMOTE LATENCIES.

What is most important to notice in these tests is how the average latency changes when moving from a VM-to-VM scenario to a VM-to-External-Port scenario. The average of the former is 1.10 milliseconds while the latter is 1.42 milliseconds, an increase of approximately 29%. The first result is not influenced by the work proposed, as there are no dependencies between them. Given that both VMs are run by the same Compute/Nova node and the External Port computer is physically located outside this node and its network (and behind an OpenWrt Ethernet switch), this delay increase is not very significant and is within expectations for such a network deployment scenario.

C. Traffic Throughput

Traffic Throughput tests aim to measure the overall throughput between different combinations of the kinds of nodes involved: Nova instances and External Ports. Unless otherwise stated, the User Datagram Protocol (UDP) buffer size is 160 KiB, the UDP datagram size is 1470 B, the Transmission Control Protocol (TCP) client's window size is 48.3 KiB and the TCP server's window size is 85.3 KiB. Local Traffic Throughput measures the average throughput between two VMs hosted by Nova. This test is analogous to the Local Latency test, except that it is meant to measure throughput instead of latency. Besides, it is split into two different sub-tests: one for TCP and another for UDP. Analogously, there is a Remote Traffic Throughput test.

For the testing procedure, iperf (https://iperf.fr/) is used. One of the instances is set listening for incoming connections. At the other VM, iperf is set as a client to send traffic to the first instance. In the case of UDP, the following command is used: iperf -u -c 10.0.0.5 -b 1000M. Regarding the Remote Traffic Throughput test, because the hosts are not twins anymore, it is actually undertaken two times: one having iperf listening at the Nova VM and the other at the computer hosted as an External Port. This way, throughput can be analyzed both in an upstream and in a downstream manner. The UDP iperf command when VMs act as clients is different: iperf -u -c 10.0.0.5 -b 1000M -l 1430, where -l 1430 specifies a custom datagram size. The reason for using a non-default datagram size is that OVS would otherwise drop the datagrams instead of fragmenting them to fit the GRE tunnel.
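A small helper like the one below (an assumption, not the authors' test scripts) could automate the repeated client-side runs described next and average the reported rates, with iperf -s -u already running on the receiving host:

```python
# Sketch of automating the UDP throughput runs with the documented iperf client
# command; the helper itself and the parsing are assumptions, not the paper's code.
import re
import statistics
import subprocess


def run_udp_iperf(server: str, runs: int = 20, datagram: int = 1430) -> float:
    rates = []
    for _ in range(runs):
        out = subprocess.run(["iperf", "-u", "-c", server, "-b", "1000M", "-l", str(datagram)],
                             capture_output=True, text=True).stdout
        # iperf reports a summary line such as "... 58.9 Mbits/sec ..."
        match = re.search(r"([\d.]+)\s+Mbits/sec", out)
        if match:
            rates.append(float(match.group(1)))
    return statistics.mean(rates)
```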

The Local, Remote upstream and Remote downstream Traffic Throughput tests have each been iterated 20 times. Local traffic throughput tests are hereafter named “VM→VM”, based on the fact that traffic is sent from one Nova VM to another; which one is not important given their twin nature and the network symmetry. Tables VIII, IX, X, XI, XII and XIII present the average and standard deviation of the Local, Remote Upstream and Remote Downstream Throughput tests, for TCP and UDP, respectively.

Average         290.1 Mbps
Standard Dev.    20.0 Mbps

TABLE VIII. LOCAL TCP TRAFFIC THROUGHPUT.

Average         26.7 Mbps
Standard Dev.    1.6 Mbps

TABLE IX. LOCAL UDP TRAFFIC THROUGHPUT.

Average         90.0 Mbps
Standard Dev.    1.0 Mbps

TABLE X. REMOTE UPSTREAM TCP TRAFFIC THROUGHPUT.

Average         58.9 Mbps
Standard Dev.    4.6 Mbps

TABLE XI. REMOTE UPSTREAM UDP TRAFFIC THROUGHPUT.

Average         77.5 Mbps
Standard Dev.    2.3 Mbps

TABLE XII. REMOTE DOWNSTREAM TCP TRAFFIC THROUGHPUT.

Average         63.1 Mbps
Standard Dev.    3.3 Mbps

TABLE XIII. REMOTE DOWNSTREAM UDP TRAFFIC THROUGHPUT.

Throughput between VMs for UDP is the lowest because there is a higher processing load. The actual reason for UDP being this heavy when packets are sent/received between VMs, to the point it becomes a bottleneck, can be attributed to iperf's default UDP buffer size of 160 KiB, which fills up very quickly and stresses the machine. All tests have shown sustained network speed, with quite low standard deviations. The use of tunnels does not negatively impact network performance/throughput more than is already common today: in Cloud Computing datacenters with multiple compute nodes, there is likewise an impact on the throughput between the VMs provisioned there, due to traffic encapsulation. The AD in use is what may impose limitations, due to processing overhead and available internal bandwidth, but that is dependent on the hardware used.

D. Traffic Overhead

Traffic Overhead is an implicit test. Although it is important to (explicitly) measure traffic overhead in this implementation, the only driver tested relies on the GRE protocol to achieve network segment extension. As such, the traffic overhead (in bytes added to the total packet size) is always fixed.

For traffic that consists of smaller packets, the traffic overhead proportion is greater. Sinha et al. provide a technical report on the distribution of packet sizes on the Internet in October 2005 [20]. They observed a strong mode of 1300 B packets (L2 data length) in some cases. Besides, packet sizes seemed to follow a mostly bimodal distribution of 40 B packets and 1500 B packets (at 40% and 20% of packets, respectively). Table XIV summarizes traffic/packet overheads for the cases described as well as for other typical packet kinds, with Ethernet data sizes, as given by the GRE overhead formula in Equation 1, where n is the Ethernet data length in bytes. Figure 9 generalizes the formula for all packet sizes up to 1500 bytes.

f(n) = \frac{n + 60}{n + 18} - 1, \qquad n \in [1, 1458]    (1)

Packet description                                       Size     Overhead
Sequential transfer                                      1500 B   2.92 %
1500 B packet                                            1500 B   4.95 %
1300 B packet                                            1300 B   3.19 %
DHCP Discover                                            328 B    12.14 %
Internet Control Message Protocol (ICMP) Echo Request    84 B     41.18 %
TCP Ack IPv6                                             72 B     46.67 %
DNS Query                                                60 B     53.85 %
TCP Ack IPv4                                             52 B     60.00 %
40 B packet                                              40 B     72.41 %

TABLE XIV. EXAMPLES OF PACKETS AND ASSOCIATED GRE OVERHEAD.
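As a quick cross-check of Equation 1 against Table XIV, the sketch below evaluates the formula for the packet sizes that fall inside the non-fragmented range n ∈ [1, 1458]:

```python
# Cross-check of Equation 1: GRE overhead as a function of Ethernet data length n (bytes).
def gre_overhead(n: int) -> float:
    """Relative overhead of GRE encapsulation for an Ethernet data length n in [1, 1458]."""
    assert 1 <= n <= 1458, "larger payloads would be fragmented, which Equation 1 does not model"
    return (n + 60) / (n + 18) - 1


for n in (1300, 328, 84, 72, 60, 52, 40):
    print(f"{n:>5} B -> {gre_overhead(n) * 100:.2f} %")
# The printed values match the corresponding rows of Table XIV,
# e.g. 1300 B -> 3.19 % and 40 B -> 72.41 %.
```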

Fig. 9. Packet overhead per individual packet size, using GRE.

It has been shown that the overhead beyond L3, i.e. relative to the IP data length, is approximately 2.92% for a typical network deployment relying on a Maximum Transmission Unit (MTU) of 1500. This means that, for a sequential file transfer over the network, approximately 2.92% more packets will be sent (and probably acknowledged, depending on the transport protocol). Even though the packet overhead may be proportionally high for some packets, it does not invalidate the deployability of this work with the GRE technology, except in very specific scenarios with tight requirements. Large packets are usually followed by more packets of the same kind, typically in stream transfers, keeping overhead below 10%. For medium packets, overhead will be more significant but still usually below 10%. For small packets, where packet overhead (and traffic overhead for that matter, as multiple packets may be transferred in a short time span) is proportionally more significant, it may not be a concern anyway because the packets are small. However, if the link becomes congested with small packets, throughput will be much lower than what the link supports.

VI. CONCLUSION

With this work we presented a novel way of combining the advantages of modern Cloud Computing software stacks with the benefits of having a heterogeneous, bare-metal or possibly legacy network deployed. Given the ability of Cloud Computing to provide network connectivity as a service to tenants, coupled with inherent properties like flexibility and on-demand service, we could make both parts integrate and interoperate in a seamless way, with control and administration handled in a logically centralized manner. Results have shown that these advantages are not eclipsed by major problems, reinforcing the feasibility of the work. The work herein detailed was supported by describing real and current use cases for Telecommunications providers.


We believe that this work could be further improved by the addition of a robust manner of detecting new External Ports via the External Driver, for instance by resorting to the Link Layer Discovery Protocol (LLDP) or the Simple Network Management Protocol (SNMP). Another possible addition to this work is inherent support for High Availability (HA). Extreme robustness and security improvements (not just by relying on EDs' capabilities) are amongst other desirable characteristics. The ability to integrate this work with an orchestration project, such as OpenStack's Heat, is another path. Furthermore, being able to leverage Service Function Chaining (SFC) with External Ports, which might be implemented by integrating the Traffic Steering work presented in [16], is an interesting undertaking.

REFERENCES

[1] B. Davie, "Network Virtualization Gets Physical," 2013. [Online]. Available: http://blogs.vmware.com/cto/network-virtualization-gets-physical/

[2] B. Davie, R. Fortier, and K. Duda, "Physical Networks in the Virtualized Networking World," 2014. [Online]. Available: http://blogs.vmware.com/networkvirtualization/2014/07/physical-virtual-networking.html

[3] F. N. N. Farias, J. J. Salvatti, E. C. Cerqueira, and A. J. G. Abelem, "A proposal management of the legacy network environment using OpenFlow control plane," in Proceedings of the 2012 IEEE Network Operations and Management Symposium, NOMS 2012, 2012, pp. 1143–1150.

[4] N. McKeown, T. Anderson, H. Balakrishnan, G. M. Parulkar, L. L. Peterson, J. Rexford, S. Shenker, and J. S. Turner, "OpenFlow: enabling innovation in campus networks," Computer Communication Review, vol. 38, pp. 69–74, 2008.

[5] K. C. Chan and M. Martin, "An integrated virtual and physical network infrastructure for a networking laboratory," in 2012 7th International Conference on Computer Science & Education (ICCSE 2012). IEEE, 2012, pp. 1433–1436.

[6] F. Manco, "Network infrastructure control for Virtual Campus," MSc Dissertation, Universidade de Aveiro, 2013.

[7] ——, "Appendix D - Campus Network Extension, Blueprint," Network infrastructure control for Virtual Campus, p. 12, 2013.

[8] ——, "Appendix E - ML2 External Port Extension, Blueprint," Network infrastructure control for Virtual Campus, p. 6, 2013.

[9] K. Benton, "Neutron External Attachment Points," 2014. [Online]. Available: https://review.openstack.org/#/c/87825/13/specs/juno/neutron-external-attachment-points.rst

[10] M. A. Vouk, "Cloud Computing Issues, Research and Implementations," in ITI 2008 - 30th International Conference on Information Technology Interfaces. IEEE, Jun. 2008, pp. 31–40.

[11] J. Rückert, R. Bifulco, M. Rizwan-Ul-Haq, H.-J. Kolbe, and D. Hausheer, "Flexible traffic management in broadband access networks using Software Defined Networking," in 2014 IEEE Network Operations and Management Symposium (NOMS), 2014, pp. 1–8.

[12] N. Feamster, "Outsourcing home network security," in Proceedings of the 2010 ACM SIGCOMM workshop on Home networks - HomeNets '10, 2010, pp. 37–42.

[13] T. Cruz, P. Simoes, N. Reis, E. Monteiro, and F. Bastos, "An architecture for virtualized home gateways," in Proceedings of the 2013 IFIP/IEEE International Symposium on Integrated Network Management, IM 2013, 2013, pp. 520–526.

[14] A. Gordon, N. Amit, N. Har'El, M. Ben-Yehuda, A. Landau, A. Schuster, and D. Tsafrir, "ELI: bare-metal performance for I/O virtualization," in ASPLOS '12: Proceedings of the seventeenth international conference on Architectural Support for Programming Languages and Operating Systems, 2012, pp. 411–422.

[15] W. John, K. Pentikousis, G. Agapiou, E. Jacob, M. Kind, A. Manzalini, F. Risso, D. Staessens, R. Steinert, and C. Meirosu, "Research directions in Network Service Chaining," in SDN4FNS 2013 - 2013 Workshop on Software Defined Networks for Future Networks and Services, 2013.

[16] C. Goncalves and J. Soares, "Traffic Steering blueprint," 2014. [Online]. Available: https://review.openstack.org/#/c/92477/7/specs/juno/traffic-steering.rst

[17] Y.-H. Chang, "Wide area information accesses and the information gateways," in Proceedings of the 1st IEEE International Workshop on Community Networking, 1994.

[18] OpenWrt, "The UCI System," 2014. [Online]. Available: http://wiki.openwrt.org/doc/uci

[19] Cisco, "Configuring Transparent Bridging," 2005. [Online]. Available: http://www.cisco.com/c/en/us/support/docs/ibm-technologies/source-route-transparent-srt-bridging/10676-37.html

[20] R. Sinha, C. Papadopoulos, and J. Heidemann, "Internet Packet Size Distributions: Some Observations," USC/Information Sciences Institute, Tech. Rep. ISI-TR-2007-643, May 2007. [Online]. Available: http://www.isi.edu/~johnh/PAPERS/Sinha07a.html