T-NOVA | Deliverable D2.32 - Specification of the Infrastructure Virtualisation, Management and Orchestration - Final
© T-NOVA Consortium

Deliverable D2.32
Specification of the Infrastructure Virtualisation, Management and Orchestration - Final

Editor: Michael J. McGrath (INTEL)

Contributors: Vincenzo Riccobene (INTEL), Pedro Neves, José Bonnet, Jorge Carapinha, Antonio Gamelas (PTIN), Dora Christofi, Georgios Dimosthenous (PTL), Beppe Coffano, Luca Galluppi, Pierangelo Magli, Marco Di Girolamo (HP), Letterio Zuccaro, Federico Cimorelli, Roberto Baldoni (CRAT), George Xilouris, Eleni Trouva, Akis Kourtis (NCSRD), Zdravko Bozakov, Panagiotis Papadimitriou (LUH), Jordi Ferrer Riera (i2CAT), Piyush Harsh, Irena Trajkovska (ZHAW), Kimon Karras (FINT), Paolo Comi (ITALTEL)

Version: 1.0
Date: October 28th, 2015



Executive Summary

The specification presented in this document represents the final version for the Infrastructure Virtualisation, Management and Orchestration layers of the T-NOVA system. This specification utilises the requirements described in previous deliverables together with the latest Network Functions Virtualisation (NFV) and virtualisation requirements defined by various industry bodies, including the ETSI ISG NFV and ITU-T, as well as relevant parts of the ETSI ISG MANO WG architecture and its associated Functional Entities (FEs).

The concepts of virtualisation and NFV/SDN have a key influence on the design and implementation of the Infrastructure Virtualisation, Management and Orchestration layers. Each approach has its pros and cons in the context of the T-NOVA system. The Orchestration and Infrastructure Virtualisation and Management (IVM) layers build on these concepts to create a functional architecture that forms a key constituent of the overall T-NOVA system architecture.

The relationship between the T-NOVA Orchestrator and the NFV paradigm plays a key role in its design. The Orchestrator needs to address the challenges associated with the delivery of a suitable solution to support the deployment and management of virtualised network services (NSs). The elucidation of the associated functional requirements plays a key role in helping to address these challenges. Categorising the requirements also helps to ensure a suitable level of granularity, so that all aspects of system functionality are appropriately captured and documented. These requirements are then used to identify system components, such as interfaces and catalogues, and their associated functionalities.

The IVM layer is responsible for the provisioning and management of the infrastructure which provides the execution environment for T-NOVA's network services. The overall design of the layer is driven by a variety of requirements, such as performance and elasticity. The IVM layer comprises the Network Functions Virtualisation Infrastructure (NFVI), Virtual Infrastructure Manager (VIM) and WAN Infrastructure Connection Manager (WICM) functional entities, which are linked with the other T-NOVA system components through various internal and external interfaces. The NFVI comprises the compute, hypervisor and network constituent domains, whose objectives and requirements need to be carefully considered in the context of the overall IVM design and implementation.

With any architectural design it is important to interrogate it in order to determine whether the design appropriately supports its associated requirements and design goals. The reference architectures for the IVM and Orchestration layers were interrogated and validated at a functional level through the development of NS and virtualised network function (VNF) sequence diagrams. These workflow diagrams illustrate the key actions and interactions within the layers during standard operational activities related to the deployment and management of VNFs and NSs.

In the design and implementation of any sophisticated system, gaps in available technologies will almost certainly emerge. If significant, these gaps afford opportunities for new research and innovation. A focused gap analysis was carried out to determine the current technology gaps which affect the T-NOVA system, together with potential steps to address them.


Table of Contents

1. INTRODUCTION .......... 8
  1.1. VIRTUALISATION .......... 8
    1.1.1 The Virtualisation Concept .......... 8
    1.1.2 The Pros and Cons of NFV Deployments .......... 9
  1.2 THE T-NOVA SOLUTION .......... 10
    1.2.1 The T-NOVA Orchestration Platform (PTIN) .......... 11
    1.2.2 The T-NOVA IVM Platform .......... 12
2. THE T-NOVA ORCHESTRATION LAYER .......... 14
  2.1. ORCHESTRATION LAYER OVERVIEW .......... 14
  2.2. ORCHESTRATOR REQUIREMENTS .......... 16
    2.2.1. NFVO Requirements Types .......... 17
      2.2.1.1. NS Lifecycle .......... 18
      2.2.1.2. VNF Lifecycle .......... 18
      2.2.1.3. Resource Handling .......... 19
      2.2.1.4. Monitoring Process .......... 19
      2.2.1.5. Connectivity Handling .......... 20
      2.2.1.6. Policy Management .......... 20
      2.2.1.7. Marketplace-specific Interactions .......... 20
    2.2.2. VNFM Requirements Types .......... 21
      2.2.2.1. VNF Lifecycle .......... 21
      2.2.2.2. Monitoring Process .......... 21
  2.3. FUNCTIONAL ORCHESTRATOR ARCHITECTURE .......... 22
    2.3.1. Reference Architecture .......... 22
    2.3.2. Functional Entities .......... 23
      2.3.2.1. Network Function Virtualisation Orchestrator (NFVO) .......... 23
      2.3.2.2. Virtual Network Function Manager (VNFM) .......... 26
      2.3.2.3. Repositories and Catalogues .......... 28
    2.3.3. External Interfaces .......... 28
      2.3.3.1. Interface between the Orchestrator and the Network Function Store .......... 29
      2.3.3.2. Interface between the Orchestrator and the Marketplace .......... 29
      2.3.3.3. Interface between the Orchestrator and the VIM .......... 30
      2.3.3.4. Interface between the Orchestrator and the Transport Network Management .......... 31
      2.3.3.5. Interface between the Orchestrator and the VNF .......... 31
3. THE T-NOVA IVM LAYER .......... 33
  3.1. INTRODUCTION .......... 33
  3.2. OBJECTIVES AND CHARACTERISTICS OF THE T-NOVA IVM LAYER .......... 33
  3.3. T-NOVA IVM LAYER REQUIREMENTS .......... 34
  3.4. VIRTUAL INFRASTRUCTURE MANAGER (VIM) .......... 35
    3.4.1. WAN Infrastructure Connection Manager .......... 36
    3.4.2. NFVI Compute .......... 36
    3.4.3. NFVI Hypervisor .......... 36
    3.4.4. NFVI DC Network .......... 37
  3.5. T-NOVA IVM ARCHITECTURE .......... 37
    3.5.1. External Interfaces .......... 38
    3.5.2. Internal IVM Interfaces .......... 40
  3.6. NFVI AND NFVI-POP .......... 42
    3.6.1. IT Resources .......... 42
      3.6.1.1. Compute Domain .......... 43
      3.6.1.2. Hypervisor Domain .......... 47


    3.6.2. Infrastructure Network Domain .......... 48
  3.7. VIRTUALISED INFRASTRUCTURE MANAGEMENT .......... 49
    3.7.1. IT Resource Management and Monitoring .......... 51
      3.7.1.1. Hypervisor Management .......... 51
      3.7.1.2. Computing Resources Management .......... 52
    3.7.2. Infrastructure Network Resources Management and Monitoring .......... 53
  3.8. WAN INFRASTRUCTURE CONNECTION MANAGER .......... 54
  3.9. WICM: CONNECTING VNFS TO THE WAN .......... 55
    3.9.1.1. SDN-enabled Network Elements .......... 56
    3.9.1.2. Legacy Network Elements .......... 57
4 T-NOVA VNFS AND NSS PROCEDURES .......... 58
  4.1 VNF RELATED PROCEDURES .......... 58
    4.1.1 On-boarding .......... 58
    4.1.2 Instantiation .......... 59
    4.1.3 Monitoring .......... 63
    4.1.4 Scale-out .......... 64
    4.1.5 Termination .......... 65
  4.2 NS RELATED PROCEDURES .......... 68
    4.2.1 On-boarding .......... 68
    4.2.2 Instantiation .......... 68
    4.2.3 WAN Connections .......... 71
    4.2.4 Supervision .......... 71
    4.2.5 Scale-out .......... 73
    4.2.6 Termination .......... 76
  4.3 NS, VNF AND INFRASTRUCTURE MONITORING .......... 76
5 GAP ANALYSIS .......... 80
  5.1 COMPUTE .......... 80
  5.2 HYPERVISOR .......... 80
  5.3 SDN CONTROLLERS .......... 81
  5.4 CLOUD CONTROLLERS .......... 82
  5.5 NETWORK VIRTUALISATION .......... 83
  5.6 NFV ORCHESTRATOR .......... 85
6 CONCLUSIONS .......... 87
ANNEX A - ORCHESTRATOR REQUIREMENTS .......... 89
  A.1 INTERNAL REQUIREMENTS .......... 90
    A.1.1 NFVO Requirements .......... 90
      A.1.1.1 NS Lifecycle requirements .......... 90
      A.1.1.2 VNF Lifecycle requirements .......... 91
      A.1.1.3 Resource Handling Requirements .......... 92
      A.1.1.4 Monitoring Process requirements .......... 92
      A.1.1.5 Connectivity Handling requirements .......... 93
      A.1.1.7 Marketplace-specific interactions requirements .......... 94
    A.1.2 VNFM Requirements .......... 94
      A.1.2.1 VNF Lifecycle requirements .......... 94
      A.1.2.2 Monitoring Process requirements .......... 95
  A.2 INTERFACE REQUIREMENTS .......... 96
    A.2.1 Interface with VIM .......... 96
    A.2.2 Interface with VNF .......... 98
    A.2.3 Interface with Marketplace .......... 99
ANNEX B - VIRTUALISED INFRASTRUCTURE MANAGEMENT REQUIREMENTS .......... 101


  B.1 VIRTUAL INFRASTRUCTURE MANAGEMENT REQUIREMENTS .......... 101
  B.2 WAN INFRASTRUCTURE CONNECTION MANAGER REQUIREMENTS .......... 103
  B.3 NFV INFRASTRUCTURE REQUIREMENTS .......... 104
    B.3.1 Computing .......... 104
    B.3.2 Hypervisor .......... 107
    B.3.3 Networking .......... 108
ANNEX C - TERMINOLOGY .......... 111
  C.1 GENERAL TERMS .......... 111
  C.2 ORCHESTRATION DOMAIN .......... 111
  C.3 IVM DOMAIN .......... 112
LIST OF ACRONYMS .......... 113
REFERENCES .......... 118


Index of Figures

Figure 1-1: High-level view of overall T-NOVA System Architecture .......... 11
Figure 2-1: NSs and VNFs Complex Orchestration Overview .......... 15
Figure 2-2: T-NOVA Orchestrator Reference Architecture .......... 22
Figure 2-3: NS Orchestrator (Internal & External) Interactions .......... 24
Figure 2-4: Virtualised Resources Orchestrator (Internal & External) Interactions .......... 26
Figure 2-5: VNF Manager (Internal & External) Interactions .......... 27
Figure 3-1: T-NOVA infrastructure virtualisation and management (IVM) high level architecture .......... 38
Figure 3-2: Compute Domain High Level Architecture .......... 46
Figure 3-3: Hypervisor domain architecture .......... 48
Figure 3-4: High level architecture of the Infrastructure Network .......... 49
Figure 3-5: T-NOVA VIM high level architecture .......... 50
Figure 3-6: VIM Network Control Architecture .......... 53
Figure 3-7: End-to-end customer data path without VNFs and with VNFs .......... 54
Figure 3-8: WICM overall vision .......... 56
Figure 4-1: VNF On-boarding Procedure .......... 59
Figure 4-2: VNF Instantiation Procedure (Orchestrator's View) .......... 60
Figure 4-3: VNF Instantiation Procedure (IVM's View) .......... 61
Figure 4-4: VNF Monitoring Procedure (Orchestrator's View) .......... 63
Figure 4-5: VNF Supervision Procedure (IVM's View) .......... 64
Figure 4-6: Scaling out a VNF .......... 65
Figure 4-7: VNF Termination Procedure (Orchestrator's View) .......... 66
Figure 4-8: VNF Termination Procedure (IVM's View) .......... 67
Figure 4-9: NS On-boarding Procedure .......... 68
Figure 4-10: NS Instantiation Procedure (Orchestrator's View) .......... 69
Figure 4-11: NS Instantiation Procedure (IVM's View) .......... 70
Figure 4-12: WICM sequence diagram .......... 71
Figure 4-13: NS Monitoring Procedure .......... 72
Figure 4-14: Scaling-out a NS .......... 73
Figure 4-15: NS Scale-out .......... 75
Figure 4-16: NS Termination Procedure .......... 76
Figure 4-17: Communication of monitoring information across the T-NOVA system .......... 77


Index of Tables

Table 3.1: External Interfaces of the T-NOVA IVM .......... 39
Table 3.2: Internal interfaces of the IVM .......... 41
Table 4.1: Monitoring metrics per infrastructure domain .......... 77
Table 5.1: Gap analysis in the compute domain .......... 80
Table 5.2: Gap analysis in the Hypervisor domain .......... 80
Table 5.3: Gap analysis regarding SDN Controllers .......... 81
Table 5.4: Gap analysis regarding Cloud Controllers .......... 82
Table 5.5: Gap analysis regarding Network Virtualisation .......... 84
Table 5.6: Gap analysis regarding Orchestration .......... 85
Table 6.1: Orchestrator Requirements - NFVO - NS Lifecycle .......... 90
Table 6.2: Orchestrator Requirements - NFVO - VNF Lifecycle .......... 91
Table 6.3: Orchestrator Requirements - NFVO - Resource Handling .......... 92
Table 6.4: Orchestrator Requirements - NFVO - Monitoring Process .......... 92
Table 6.5: Orchestrator Requirements - NFVO - Connectivity Handling .......... 93
Table 6.6: Orchestrator Requirements - NFVO - Marketplace specific .......... 94
Table 6.7: Orchestrator Requirements - VNFM - VNF Lifecycle .......... 94
Table 6.8: Orchestrator requirements - VNFM - Monitoring Process .......... 95
Table 6.9: Requirements between the Orchestrator and VIM .......... 96
Table 6.10: Requirements between the Orchestrator and VNF .......... 98
Table 6.11: Requirements between the Orchestrator and the Marketplace .......... 99
Table 6.12: IVM Requirements - VIM .......... 101
Table 6.13: IVM Requirements - WICM .......... 103
Table 6.14: IVM requirements - Computing .......... 104
Table 6.15: IVM Requirements - Hypervisor .......... 107
Table 6.16: IVM Requirements - Networking .......... 108
Table 6.17: General Terms .......... 111
Table 6.18: Orchestration Domain Terminology .......... 111
Table 6.19: IVM Domain Terminology .......... 112


1. INTRODUCTION

This deliverable outlines the outputs and results of the activities carried out in Tasks 2.3 and 2.4 of Work Package 2 (WP2). These outputs and results focus on the infrastructure virtualisation layer, as well as on the management and orchestration layers, within the T-NOVA system.

1.1. Virtualisation

Virtualisation is a general term that can apply to a variety of technology approaches, including hardware, operating system, storage, memory and network virtualisation. It is the key enabling technology that allows traditional physical network functions to be decoupled from fixed appliances and deployed onto industry standard servers in large Data Centres (DCs). This approach provides operators with key benefits such as greater flexibility, faster delivery of new services, and a broader ecosystem that enhances innovation in the network.

1.1.1 The Virtualisation Concept

From a computing perspective, virtualisation abstracts the computing platform and, in doing so, hides its physical characteristics from users or applications. Dating back to the 1960s, the concept of virtualisation was first introduced with the Atlas Computer, in the form of virtual memory and paging techniques for system memory. IBM's M44/44X project, building on these innovations, developed an architecture which first introduced the concept of virtual machines (VMs). Their approach was based on a combination of hardware and software allowing the logical slicing of one physical server into multiple isolated virtual environments [1]. Virtualisation has since evolved from its mainframe origins to being supported by the x86 architecture and being adopted by non-computing domains such as storage and networking.
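The paging technique mentioned above can be illustrated with a toy address translation: a virtual address is split into a page number and an offset, and a page table maps virtual pages to physical frames. The page size and table contents below are illustrative only, not taken from any system described in this deliverable.

```python
PAGE_SIZE = 4096  # 4 KiB pages, common on x86 systems

# Toy page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr):
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # A real MMU would raise a page fault for the OS to handle
        raise MemoryError("page fault: virtual page %d not mapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual address 4100 lies in virtual page 1 at offset 4,
# which the table maps to physical frame 2 (address 8196).
```

The same indirection is what lets a hypervisor present each VM with its own apparently contiguous memory while placing it anywhere in physical RAM.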

The term Full Virtualisation describes the technique where a complete simulation of the underlying hardware is provided. This approach has its origins in IBM's control programs for the CP/CMS operating system. Today this approach is used to emulate a complete hardware environment in the form of a VM, in which a guest Operating System (OS) runs in isolation. Full virtualisation was not completely possible with the x86 architecture until the addition of Intel's VT and AMD's AMD-V extensions in 2005-2006. Without these extensions, full x86 virtualisation relies on binary translation to trap and virtualise the execution of certain sensitive "non-virtualisable" instructions. With this approach, critical instructions are discovered and replaced with traps into the Virtual Machine Manager (VMM), also called a hypervisor, to be emulated in software.
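Whether a given x86 host offers these hardware extensions can be checked from the CPU feature flags that Linux exposes in /proc/cpuinfo: the `vmx` flag indicates Intel VT-x and `svm` indicates AMD-V. A minimal sketch (the parsing helper is our own, not part of any T-NOVA component):

```python
def hw_virt_support(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None from cpuinfo flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a Linux host the live check would be:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_support(f.read()) or "no hardware virtualisation support")
```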

Virtualisation is now applied in other domains, such as storage and networking, to deliver benefits similar to those realised in the compute domain.

• Storage virtualisation refers to a process by which several physical disks appear to be a single unit. Virtualised storage is typically block-level rather than file-level, meaning that it looks like a normal physical drive to computers. The key advantages of the approach are: (i) easier management of heterogeneous storage environments, (ii) better utilisation of resources, and (iii) greater flexibility in the allocation of storage to VMs.

T-NOVA|DeliverableD2.32 SpecificationoftheInfrastructureVirtualisation,ManagementandOrchestration-Final

©T-NOVAConsortium

9

• Network virtualisation comes in many forms, such as Virtual Local Area Networks (VLANs), Logical Storage Area Networks (LSANs) and Virtual Storage Area Networks (VSANs), which allow a single physical Local Area Network (LAN) or Storage Area Network (SAN) architecture to be carved up into separate networks without dependence on the physical connection. Virtual Routing and Forwarding (VRF) allows separate routing tables to be used on a single piece of hardware to support different routes for different purposes, while virtual switching supports L2 Ethernet connectivity between VMs. The benefits of network virtualisation are very similar to those of server virtualisation, namely increased utilisation and flexibility.
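The VLAN separation described above rests on a simple mechanism: a 4-byte IEEE 802.1Q tag (the 0x8100 tag protocol identifier plus a 12-bit VLAN ID) is inserted into the Ethernet frame after the destination and source MAC addresses. A sketch of the tagging step:

```python
import struct

def add_vlan_tag(frame, vlan_id, priority=0):
    """Insert an IEEE 802.1Q tag after the two 6-byte MAC addresses."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id        # PCP(3) | DEI(1) | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)   # TPID + TCI, network byte order
    return frame[:12] + tag + frame[12:]    # dst MAC + src MAC, tag, rest
```

A VLAN-aware switch then simply refuses to forward frames between ports whose configured VLAN IDs differ, which is what carves one physical LAN into several logical ones.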

These technologies, in the form of cloud computing, are now being rapidly adopted by network operators in their carrier network domains in order to consolidate traditional network devices onto standard high volume x86 servers, switches and storage in the form of VNFs. In doing so, they allow service providers to transform their network functions into an elastic pool of resources while seeking compatibility with network and operational management tools. Building on cloud DCs allows operators to create an orchestration environment for the management and control of their compute, network and storage resources. For VNFs to function properly, the configuration of the network underneath them is critical. Provisioning or adapting VNFs in response to changing network conditions or customer requests requires the ability to configure or adapt network routes in a highly expeditious manner.

The advent of Software Defined Networking (SDN), with its support for programmatic provisioning, transforms service delivery from weeks to a matter of minutes or even seconds. SDN is based around a new networking model where control of the network is decoupled from the physical hardware, allowing a logically centralised software program (a network controller) to control the behaviour of an entire network. The use of centralised network control and a common communication protocol across the switching elements in the network can enable increased network efficiency, centralised traffic engineering, improved troubleshooting capabilities and the ability to build multiple virtual networks running over a common physical network fabric. In SDN, network elements are primarily focused on packet forwarding, whereas switching and routing functions are managed by a centralised network controller which dynamically configures network elements using protocols such as OpenFlow or OVSDB. SDN is starting to be deployed in data centre and enterprise environments, e.g. by Google. Virtual networks to support VNF deployment can be deleted, modified or restored in a matter of seconds, in much the same manner that virtual machines are provisioned in cloud environments.
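Programmatic provisioning in practice means the controller accepts flow rules as structured data. The sketch below builds an OpenFlow-style flow entry; the field names are modelled loosely on the REST API of the Ryu controller and should be treated as illustrative, not as the interface of any T-NOVA component:

```python
import json

def build_flow_rule(dpid, in_port, dst_ip, out_port, priority=100):
    """Build an OpenFlow-style flow entry: match on ingress port and
    destination IPv4 address, forward to the given output port."""
    return {
        "dpid": dpid,  # datapath (switch) identifier
        "priority": priority,
        "match": {"in_port": in_port, "eth_type": 0x0800, "ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }

rule = build_flow_rule(dpid=1, in_port=1, dst_ip="10.0.0.2", out_port=2)
payload = json.dumps(rule)
# A controller would typically accept this over HTTP (URL illustrative):
# requests.post("http://controller:8080/stats/flowentry/add", data=payload)
```

Because rule installation is just an API call, steering traffic towards a newly deployed VNF becomes a matter of seconds rather than a manual reconfiguration exercise.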

Virtualisation, and its adoption in the key constituent elements of networks and data centres, has created an agility for service providers that was not previously possible. Virtualisation of infrastructure and networks, as well as of the applications and services that run on top, will allow service providers to rapidly transform their networks and to embrace new innovations.

1.1.2 The Pros and Cons of NFV Deployments

As highlighted by the ETSI ISG NFV in its first white paper [2], the situation faced by most network operators today relates to the physical components of their networks, which are characterised by the use of a wide range of proprietary hardware appliances. This problem of appliance and technology diversity continues to grow for operators as new equipment is added to previous generations of equipment in the network.

This leads to significant challenges related to the launch of new services, increasing energy costs and capital investments, coupled with the difficulty of finding people with the most appropriate skills to handle the design, integration and operation of increasingly complex hardware-based appliances. In addition, the trend towards shorter operational lifespans of hardware also affects revenues, leading to situations where there is no return on investment or no time for innovation.

As previously outlined in the T-NOVA project scope [3], Network Functions Virtualisation (NFV) will address these challenges by leveraging standard Information Technology (IT) virtualisation technologies to consolidate various network equipment types onto industry standard high volume (SHV) servers, switches and storage located in DCs, network nodes and in the end user premises. In this context, NFV refers to the virtualisation of network functions carried out by specialised hardware devices and their migration to software-based appliances, which are deployed on top of commodity IT (including cloud) infrastructures.

Virtualising Network Functions potentially offers many benefits, including:

• Reduction in both equipment costs and power consumption;
• Reduced time to market;
• Availability of network appliances that support multiple versions and multi-tenancy, with the ability to share resources across services;
• Targeted service introduction based on geography or customer type, where services can be quickly scaled up/down as required;
• Enabling a wide variety of eco-systems;
• Encouraging openness within the ecosystem.

One of the challenges in the deployment of NFV in the carrier domain is to leverage theadvantages of the IT ecosystem while minimising any of the associated disadvantages.Standardhighvolumeserversandsoftwaremustbemodifiedtomeetthespecificreliabilityrequirements in the telecoms environment, including 99.999 percent uptime availability.Thismissioncritical levelof reliability isakeyrequirementanddifferentiates traditional IT(just reboot the system!) and Telecom (where downtime or poor performance is notacceptable)environments.Tomeetdesigngoalswithoutsacrificingperformance,softwareapplications must be specifically designed or rewritten to run optimally in virtualisedTelecomenvironmentstomeetcarriergraderequirements.Otherwise,applicationsportedto virtualised environments may experience significant performance issues and may notscaleappropriatelytotherequirednetworkload.Anadditionalchallengeforvirtualisationina Telecomnetworkenvironment is the requirement todeliver low latency tohandle real-time applications such as voice and video traffic. In addition to performance, otheroperational characteristics that are crucial to successful deployments include:maturity ofthe hypervisor; Reliability, Availability, and Serviceability (RAS); scalability, security,managementandautomation;supportandmaintainability.
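The "99.999 percent uptime" figure quoted above translates into a strict annual downtime budget, which this small calculation makes concrete:

```python
# Worked example: converting an availability target into permitted
# downtime per year.

def downtime_per_year_minutes(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return (1 - availability) * minutes_per_year

# Five nines (carrier grade) vs three nines (typical IT service).
print(round(downtime_per_year_minutes(0.99999), 2))  # ~5.26 minutes/year
print(round(downtime_per_year_minutes(0.999), 1))    # ~525.6 minutes/year
```

Five nines therefore leaves barely five minutes of unplanned downtime per year, which is why "just reboot the system" is not an acceptable recovery strategy in the Telecom domain.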

Deploying NFV also incurs other well-defined risks, e.g. scalability in order to handle carrier network demands; management of both IT and network resources in support of network connectivity services and Network Functions (NFs) deployment; handling of network fault and management operations; Operations Support System (OSS) / Business Support System (BSS) backwards compatibility in migration situations; and the interoperability required to achieve end-to-end service offerings, including end-to-end Quality of Service (QoS). In addition, essential software appliances should achieve performance comparable to their hardware counterparts, which is currently not always possible due to a variety of reasons, such as the performance of the virtualisation technologies.

1.2 The T-NOVA Solution

The T-NOVA project is focused on addressing some of the key challenges of NFV deployment in Telco environments by designing and implementing an integrated architecture, which includes a novel open-source Orchestration platform. This platform is explicitly dedicated to the orchestration of IT (i.e. CPU, memory and storage) and end-to-end network resources for VNFs, as well as the automated provisioning, management, monitoring and optimisation of Network Functions-as-a-Service (NFaaS). The orchestration of the resources available across the NFV Infrastructure Points-of-Presence (NFVI-PoPs) is achieved through the T-NOVA Infrastructure Virtualisation and Management (IVM) platform. The IVM comprises a number of components whose functionalities collectively provide the virtualised compute, storage and network connectivity required to host VNFs.

The overall T-NOVA system architecture is shown in Figure 1-1, which includes the two platforms specified in the present deliverable: the T-NOVA Orchestration and the T-NOVA IVM platforms.

Figure 1-1: High-level view of the overall T-NOVA System Architecture (Source: D2.21 [4])

1.2.1 The T-NOVA Orchestration Platform (PTIN)

The T-NOVA architecture has been conceived using a layering approach where the Orchestration layer is positioned between the Service and the Infrastructure Virtualisation and Management layers. This layering approach, together with the envisaged high-level modules within the Orchestrator platform, is illustrated in Figure 1-1.


The functionalities provided by the T-NOVA Orchestrator are required to extend beyond traditional cloud management, as the T-NOVA scope is not restricted to a single Data Centre (DC). The Orchestrator therefore needs to manage and monitor Wide-Area Networks (WANs) as well as distributed cloud (compute/storage) services and resources in order to couple basic network connectivity services with added-value NFs.

Orchestration platform capabilities that could improve the deployment of VNFs in private, heterogeneous clouds include, among others:

• Application assignment to hardware platforms capable of improving performance through specific features, such as special-purpose instructions or accelerators, e.g. allocation of a Single Root I/O Virtualisation (SR-IOV) virtual function to VMs running VNFs that can benefit from the capability;
• Exploitation of Enhanced Platform Awareness (EPA) information, which is extracted from each NFVI, in order to optimise the resource mapping algorithms;
• Support of live migration (wherever applicable, taking into account the service provided by each VNF).
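The EPA-aware placement idea in the list above can be sketched as a simple host filter: the orchestrator matches the platform features a VNF declares it can exploit (e.g. SR-IOV) against the features each host advertises. The host records and feature names below are hypothetical, not drawn from any real scheduler.

```python
# A minimal sketch of feature-aware VNF placement, assuming hosts
# advertise a set of platform capabilities to the orchestrator.

hosts = [
    {"name": "host-a", "features": {"sriov", "aes-ni"}, "free_vcpus": 8},
    {"name": "host-b", "features": {"aes-ni"},          "free_vcpus": 16},
]

def place(required_features, vnf_vcpus, hosts):
    # Keep only hosts that offer every required feature and have capacity.
    candidates = [h for h in hosts
                  if required_features <= h["features"]
                  and h["free_vcpus"] >= vnf_vcpus]
    # Prefer the least-loaded capable host (a simple heuristic).
    return max(candidates, key=lambda h: h["free_vcpus"], default=None)

chosen = place({"sriov"}, 4, hosts)
print(chosen["name"])  # host-a: the only host advertising SR-IOV
```

Without the feature filter, a naive scheduler would pick host-b (more free vCPUs) and lose the SR-IOV acceleration the VNF could have exploited.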

The Orchestrator’s requirements together with its detailed conception and description intermsofFunctionalElements(Fes)constitutetheoutputsofTaskT2.3whicharedescribedinSection3.

1.2.2 The T-NOVA IVM Platform

The IVM layer in the T-NOVA system is responsible for providing the execution environment for VNFs. The IVM comprises a Network Function Virtualisation Infrastructure (NFVI), a Virtualised Infrastructure Manager (VIM) and a WAN Infrastructure Connection Management (WICM). The IVM provides full abstraction of the NFVI resources to VNFs. It achieves this by supporting separation of the software that defines the network function (the VNF) from the hardware and generic software that constitute the NFVI. Control and management of the NFVI is carried out by the VIM in unison with the Orchestrator. While the IVM provides orchestration of the virtualised resources in the form of compute, storage and networking, responsibility for the orchestration of the VNFs is solely a function of the Orchestration layer, given its system-wide view of the T-NOVA system and its centralised coordination role.

A major challenge for vendors developing NFV-based solutions is achieving near-native performance (i.e., similar to non-virtualised) in a virtualised environment. One critical aspect is minimising the inherent overhead associated with virtualisation, and there has been significant progress thanks to a number of key innovations. An example is hardware-assisted virtualisation in CPUs, such as Intel's Xeon microprocessors with Intel VT-x, which reduces VM context switching time, among other things.

Another challenge is ensuring the orchestration layer fully exploits the capabilities of the servers it manages. Typical orchestration layer products can identify infrastructural features (e.g., CPU type, Random Access Memory (RAM) size and host operating system); however, some orchestrators are unaware of attached devices, like acceleration cards or network interface cards (NICs) with advanced capabilities. In such cases, they are unable to proactively load an application onto a platform capable of accelerating its performance, such as assigning an IP security (IPsec) VPN appliance to a server with cryptographic algorithm acceleration capabilities. Other features of the platform may also be of interest, e.g. the model and version of the CPU, the number of cores, and other specific features.


The lack of platform and infrastructural awareness is a major drawback since many virtual appliances have intense I/O requirements and could benefit from access to high-performance instructions, accelerators and Network Interface Cards (NICs) for workloads such as compression, cryptography and transcoding. This is a key focus in WP3 (Task 3.2) and WP4 (Task 4.1). Undoubtedly, making the orchestration layer aware of the innate capabilities of the devices attached to server platforms can help maximise network performance.

The outputs of Task 2.4 with respect to the overall integrated architecture of the IVM layer are presented in Section 3.


2. THE T-NOVA ORCHESTRATION LAYER

This section describes the Orchestration layer, starting with an overview of its main characteristics, challenges and framework (subsection 2.1); followed by a description of the requirements associated with its FEs (subsection 2.2); and finally a description of its functional architecture (subsection 2.3).

2.1. Orchestration Layer Overview

NFV is an emerging concept which refers to the migration of certain network functionalities, traditionally performed by dedicated hardware elements, to virtualised IT infrastructures, where they are deployed as software components. NFV leverages commodity servers and storage to enable rapid deployment, reconfiguration and elastic scaling of network functionalities.

Decoupling the network function software from the hardware creates a new set of entities, namely:

• Virtual Network Functions (VNFs): software-based network functions deployed over virtualised infrastructure;
• Network Functions Virtualisation Infrastructure (NFVI): virtualised hardware that supports the deployment of network functions;
• Network Service (NS): a chain of VNFs and/or Physical Network Functions (PNFs) interconnected through virtual network links (VLs).
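The three entities above can be sketched as a minimal data model; the field names are illustrative simplifications, not taken from the ETSI information model.

```python
# A minimal sketch of the NFV entity model: a NS aggregates VNFs, PNFs
# and the virtual links that interconnect them.

from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    nfvi_pop: str  # the NFVI-PoP the VNF instance is deployed on

@dataclass
class VirtualLink:
    endpoints: tuple  # pair of function names the VL interconnects

@dataclass
class NetworkService:
    vnfs: list = field(default_factory=list)
    pnfs: list = field(default_factory=list)
    vls: list = field(default_factory=list)

# A toy NS: one PNF chained to two VNFs on different PoPs.
ns = NetworkService(
    vnfs=[VNF("VNF-A", "PoP-I"), VNF("VNF-B", "PoP-II")],
    pnfs=["PNF-1"],
    vls=[VirtualLink(("PNF-1", "VNF-A")), VirtualLink(("VNF-A", "VNF-B"))],
)
print(len(ns.vnfs), len(ns.vls))  # 2 2
```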

Since VNFs, NFVIs, NSs and the relationships between them did not exist before the NFV paradigm, handling them requires a new and different set of management and orchestration functions.

VNFs require more agile management procedures when compared with legacy PNFs deployed over dedicated appliances. Besides the traditional management procedures already in place for PNFs, handled by BSSs/OSSs, such as customer management, accounting management and SLA management, VNFs require new management procedures, e.g. to automatically create, update and/or terminate VNFs and NSs. Furthermore, the automatic deployment and instantiation procedures associated with a specific VNF need to be in place, as well as the monitoring and automatic scaling procedures during the service runtime phase.

Another challenge brought about by the NFV paradigm is the management of the virtualised infrastructure. In fact, one of the main advantages of virtualising network functions is to enable the automatic adjustment of NFVI resources according to network function demands. To achieve this, the VNF-specific requirements, according to the contracted SLA, have to be mapped to the required virtualised infrastructure assets (compute, e.g. virtual and physical machines; storage; and networking, e.g. networks, subnets, ports and addresses). The mapping procedures should also consider the network topology, connectivity and network QoS constraints, as well as function characteristics (e.g. some functions may require low delay, low loss or high bandwidth). Since virtualised resources can be centralised in a single NFVI-PoP or distributed across several NFVI-PoPs, the management and orchestration entities will also have to decide which NFVI-PoP or NFVI-PoPs are the most appropriate to deploy the function.
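The PoP-selection step described above can be sketched as a constraint filter over candidate NFVI-PoPs: the SLA-derived requirements (latency, capacity, bandwidth) rule PoPs in or out, and a simple preference picks among the survivors. The PoP data and attribute names are illustrative assumptions.

```python
# A sketch of SLA-driven NFVI-PoP selection, assuming each PoP exposes
# latency, free capacity and bandwidth figures to the orchestrator.

pops = {
    "PoP-I":  {"latency_ms": 5,  "free_vcpus": 32, "bandwidth_gbps": 10},
    "PoP-II": {"latency_ms": 20, "free_vcpus": 64, "bandwidth_gbps": 40},
}

def select_pop(max_latency_ms, vcpus, min_bw_gbps, pops):
    # Keep only PoPs that satisfy every constraint from the SLA mapping.
    ok = {name: p for name, p in pops.items()
          if p["latency_ms"] <= max_latency_ms
          and p["free_vcpus"] >= vcpus
          and p["bandwidth_gbps"] >= min_bw_gbps}
    # Among feasible PoPs, prefer the lowest-latency one.
    return min(ok, key=lambda n: ok[n]["latency_ms"], default=None)

print(select_pop(10, 8, 5, pops))   # PoP-I: the only PoP under 10 ms
print(select_pop(50, 8, 20, pops))  # PoP-II: only it offers >= 20 Gbps
```

A delay-sensitive function (e.g. voice) and a bandwidth-hungry one (e.g. transcoding) can thus land on different PoPs from the same candidate set.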


Besides the VNF and NFVI-PoP management challenges, new and more complex NSs will be provided based on the combination/chaining of several VNFs. Therefore, in addition to the management and orchestration of each VNF and of the associated NFVI-PoP, orchestration and management procedures also have to be defined at the service level. These will coordinate several VNFs, as well as their association through Virtual network Links (VLs). Moreover, since NSs can also be composed of PNFs, the interconnections between these and the VNFs are also required. The NS composition, including the constituent VNFs, PNFs and VLs, is captured in the VNF Forwarding Graph (VNFFG). Each NS can have one or more VNFFGs if there are conditions for alternative paths, which can be used as backups.

Figure 2-1 illustrates the entities introduced by the NFV paradigm, as well as their relationships.

Figure 2-1: NSs and VNFs Complex Orchestration Overview

The NS presented in Figure 2-1 is composed of the following entities:

• Four VNFs: A, B, C and D;
• One PNF;
• Five VLs: 1 (interconnecting VNF A and the PNF), 2 (interconnecting VNF A and VNF B), 3 (interconnecting VNF A and VNF C), 4 (interconnecting VNF B and VNF D) and 5 (interconnecting VNF C and VNF D).

The VNFs are deployed over two different NFVI-PoPs:

• NFVI-PoP I: supports the VNF A and VNF D deployments;
• NFVI-PoP II: supports the VNF B and VNF C deployments.

Two VNFFGs are illustrated:

• VNFFG I: delivers the NS through the PNF – VNF A – VNF C – VNF D networking path;
• VNFFG II: delivers the NS through the PNF – VNF A – VNF B – VNF D networking path.
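The two forwarding graphs above can be expressed as ordered paths, with the second acting as a backup when an element on the primary path fails. This is only an illustrative sketch of the path-selection idea, not T-NOVA's actual mechanism.

```python
# A sketch of VNFFG selection with backup paths, using the two graphs
# from Figure 2-1 expressed as ordered hop lists.

vnffgs = {
    "VNFFG-I":  ["PNF", "VNF-A", "VNF-C", "VNF-D"],
    "VNFFG-II": ["PNF", "VNF-A", "VNF-B", "VNF-D"],
}

def active_path(vnffgs, failed=frozenset()):
    # Return the first forwarding graph whose hops are all healthy;
    # later graphs act as backups for earlier ones.
    for name, path in vnffgs.items():
        if not failed.intersection(path):
            return name, path
    return None, []

print(active_path(vnffgs)[0])             # VNFFG-I (primary)
print(active_path(vnffgs, {"VNF-C"})[0])  # VNFFG-II (backup)
```

Note that a failure of VNF A or VNF D breaks both graphs, since they appear on every path; only the B/C branch is protected by the alternative VNFFG.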

Internally, VNFs can be composed of one or more software components, also known as Virtual Network Function Components (VNFCs). Each VNFC is typically deployed in a single Virtual Machine (VM), although other deployment procedures can exist. Like VNFs, VNFCs can be instantiated in a single NFVI-PoP or distributed across several NFVI-PoPs. The VNFC interconnections are made through dedicated VLs. Figure 2-1 illustrates the internals of a specific VNF (VNF D), composed of three software components, namely a web server (VNFC A), an application server (VNFC B) and a database (VNFC C), interconnected through VLs (VL 6 and VL 7).

On top of all these entities (e.g. NS, VNF, VNFC, VL, NFVI-PoP, etc.) stands the Orchestrator, which has responsibility for managing the complexity associated with the NS and VNF lifecycle management (e.g. on-boarding/deployment, instantiation, supervision, scaling, termination), including the internals of the VNFs (not illustrated in the figure).

In summary, the T-NOVA Orchestrator platform is focused on addressing two of the most critical issues in NFV:

1. Automated deployment and configuration of NSs/VNFs;
2. Management and optimisation of networking and IT resources for VNF accommodation.

To address the complex management processes related to NSs and VNFs, the Orchestrator is split into two main FEs:

1. NFV Orchestrator (NFVO): manages the virtualised NS lifecycle procedures, including the networking links that interconnect the VNFs;
2. VNF Manager (VNFM): manages the VNF lifecycle procedures.

The T-NOVA Orchestrator will also be able to deploy and monitor T-NOVA services by jointly managing WAN resources and cloud (compute/storage) assets (DCs). Indeed, the T-NOVA Orchestrator goes beyond traditional cloud management, since its scope is not restricted to a single DC; it needs to jointly manage WAN and distributed cloud resources in different interconnected DCs in order to couple the basic network connectivity service with added-value NFs.

Further details regarding these T-NOVA Orchestrator entities and functionalities are provided in the following subsections. The VNF-related concepts and architectural components are discussed extensively in Deliverable D2.41 [5].

2.2. Orchestrator Requirements

As already outlined in subsection 2.1, the T-NOVA Orchestrator is composed of two main building blocks: the NFVO and the VNFM.

The NFVO orchestrates the subset of functions that are responsible for the lifecycle management of Network Services (NSs). In addition, it is also responsible for the resource orchestration of the NFVI resources across:

• A single VIM, corresponding to a single NFVI-PoP, and/or
• Multiple VIMs, corresponding to multiple NFVI-PoPs, by using a specialised VIM designated the WICM.

The VNFM is the functional block that is responsible for the lifecycle management of the VNFs.

The deployment and operational behaviour of the Orchestrator is captured in deployment templates, of which the most important for this subsection are the Network Service Descriptor (NSD) and the Virtual Network Function Descriptor (VNFD). Other templates are also used, e.g. the Virtual Link Descriptor (VLD) and the VNF Forwarding Graph Descriptor (VNFFGD), which will be further detailed in subsection 2.3.
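To make the descriptor concepts concrete, the sketch below shows an NSD-style template and a minimal on-boarding check. The field names follow the descriptors named above (VNFDs, VLDs, VNFFGDs) but are deliberately simplified assumptions, not the ETSI descriptor schema.

```python
# A simplified, assumed NSD structure and an on-boarding validation pass.

nsd = {
    "id": "ns-demo",
    "vnfds": ["vnf-a", "vnf-b"],
    "vlds": [{"id": "vl-1", "endpoints": ["vnf-a", "vnf-b"]}],
    "vnffgds": [{"id": "fg-1", "path": ["vnf-a", "vnf-b"]}],
    "sla": {"max_latency_ms": 20},
}

def validate_nsd(nsd):
    # Structural check: all descriptor sections must be present.
    required = {"id", "vnfds", "vlds", "vnffgds"}
    missing = required - nsd.keys()
    if missing:
        return False, sorted(missing)
    # Consistency check: every forwarding-graph hop must reference a
    # VNFD declared by the NSD.
    for fg in nsd["vnffgds"]:
        for hop in fg["path"]:
            if hop not in nsd["vnfds"]:
                return False, [f"unknown VNFD: {hop}"]
    return True, []

print(validate_nsd(nsd))  # (True, [])
```

A check of this kind corresponds to the NSD validation the NFVO performs when a new NS is on-boarded (see subsection 2.2.1.1).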

This subsection details the Orchestrator requirements that have been identified after a research study involving several sources, e.g. the use cases defined in D2.1 [6], ETSI ISG NFV requirements [7], ITU-T requirements for NV [8], as well as excerpts of relevant parts of the ETSI ISG MANO architecture and associated FEs [9].

The list of requirements for each FE may be found in Annex A, where 27 requirements have been identified for the NFVO and 6 requirements for the VNFM. However, it should be noted that none of these requirements imposes any specific solution at the implementation level, which will be performed in WP3/4.

Taking into account that the list of requirements is quite extensive, the entire set of requirements has been classified and divided into types, as indicated in the remaining part of the current subsection.

2.2.1. NFVO Requirements Types

Network Services, under the responsibility of the NFVO, are composed of VNFs and, as such, are defined by their functional and behavioural specification. In this context, the NFVO coordinates the lifecycle of the VNFs that jointly realise a NS. This coordination includes managing the associations between the different VNFs that make up the NS and, when applicable, between VNFs and PNFs, the network topology of the NS, and the VNFFGs associated with the NS.

The operation of NSs defines the behaviour of the higher Orchestration layer, which is characterised by performance, dependability, and security specifications. The end-to-end network service behaviour is the result of combining individual network function behaviours as well as the behaviours of the composition mechanisms associated with the underlying network infrastructure layer, i.e. the IVM layer.

In terms of deployment and operational behaviour, the requirements of each NS are carried in a deployment template, the NSD, and stored during the NS on-boarding process in the NS catalogue for future selection once the instantiation of the service takes place. The NSD fully describes the attributes and requirements necessary to implement a NS, including the service topology (i.e. constituent VNFs and the relationships between them, VLs, VNFFGs) as well as NS characteristics, e.g. in terms of SLAs and any other information necessary for the NS on-boarding and lifecycle management of its instances.

As the NS is the main responsibility of the NFVO, the NS lifecycle constitutes the most relevant technical area in the classification of the NFVO requirements.

As indicated below, the other requirement types are related to the VNF lifecycle management with respect to the actions and procedures taken by the NFVO, which also involves the second FE that constitutes part of the Orchestrator, together with the NFVO: the VNFM. The actions and procedures associated with the VNFM's behaviour, and in particular those related to the VNF lifecycle, will be further discussed in subsection 2.2.2.

Regarding the remaining requirement types, it should be noted that there is one type related to the NFVO's handling of the management of the resources located in the VIM and in the WICM; another related to policy management; and another, specific to the T-NOVA system, which is concerned with the most relevant interactions with the Marketplace, the layer immediately above the Orchestration layer.

Finally, there are still two further types that relate to NS lifecycle operations: connectivity handling and the monitoring process. A decision was taken to create separate groups for these two (sub)types in order to emphasise the importance they play in the overall operation of the Orchestrator.

2.2.1.1. NS Lifecycle

A Service Provider (SP) may choose one or more VNFs to compose a new NS, by parameterising those VNFs, selecting a SLA, etc. within the context of the T-NOVA system. The NFVO is then notified of the composition of this new NS by the reception of a request that includes a NSD, which is validated in terms of its description.

In a similar process, when a Customer subscribes to a NS, the Marketplace notifies the NFVO, which instantiates the NS according to its NSD description, the agreed SLA and the current status of the overall infrastructure usage metrics. Upon a successful instantiation, the Orchestrator notifies the Marketplace, thus triggering the accounting process for the subscribed NS as well as for the customer.

After these steps the NFVO becomes responsible for NS lifecycle management, where lifecycle management refers to the set of functions required to manage the instantiation, maintenance and termination of a NS.
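The lifecycle functions just listed (instantiation, maintenance, termination) can be pictured as a small state machine over a NS instance. The states and transition names below are an illustrative assumption, not a normative T-NOVA or ETSI lifecycle model.

```python
# A sketch of NS lifecycle management as a guarded state machine:
# only the listed operations are legal in each state.

TRANSITIONS = {
    "on-boarded":   {"instantiate": "instantiated"},
    "instantiated": {"scale": "instantiated", "terminate": "terminated"},
    "terminated":   {},  # terminal state: no further operations
}

class NetworkServiceInstance:
    def __init__(self):
        self.state = "on-boarded"

    def apply(self, operation):
        nxt = TRANSITIONS[self.state].get(operation)
        if nxt is None:
            raise ValueError(f"'{operation}' not allowed in {self.state}")
        self.state = nxt
        return self.state

ns_instance = NetworkServiceInstance()
print(ns_instance.apply("instantiate"))  # instantiated
print(ns_instance.apply("scale"))        # instantiated
print(ns_instance.apply("terminate"))    # terminated
```

Guarding transitions this way is what lets the NFVO reject, for example, a scaling request against a NS that has already been terminated.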

2.2.1.2. VNF Lifecycle

The NFVO relies on the VNFM for the handling of the VNF lifecycle. Although the VNFM is the FE in charge of the management of the VNF lifecycle, as described in subsection 2.2.2, some operations require the intervention of the NFVO.

The requirement type specified in the current subsection refers precisely to those parts of the VNF lifecycle management that are performed by the NFVO.

In this context, Function Providers (FPs) publicise their VNFs in the Network Function Store (NF Store). This implies the use of a VNFD describing the infrastructure (computation, storage, network infrastructure and connection) needed for the VNF to be instantiated later on by a request sent to the Orchestrator.

After a validation of the VNFD, the NFVO publicises the VNF to the Marketplace as being ready to be part of a NS. Associated with the VNFD, there may potentially be a VM image that will form part of the deployment of such a VNF.

As the FP provides newer versions, this process is repeated. If and when the FP wishes to withdraw the VNF, the reverse process is executed, taking into consideration the current status of the NSs exploiting the VNF under deletion.


2.2.1.3. Resource Handling

The NFVO is the Orchestrator FE that performs resource handling, i.e. the subset of Orchestrator functions that are responsible for global resource management governance.

In terms of scope, the following domains and associated IT virtualised resources are managed by the NFVO: Compute, i.e. virtual processing CPUs and virtual memory; Storage, i.e. virtual storage; and Network, i.e. virtual links intra/interconnecting VNFs within the Data Centre Network (DCN). In T-NOVA, the NFVO also manages the resources of the WICM network domain.

The governance described above is performed by managing the mapping between the VNF instances and the NFVI resources allocated to those VNF instances, using the Infrastructure Resources catalogue as well as information received from the VIM and from the WICM.

According to the characteristics of each service (the agreed SLA) and the current usage of the infrastructure (computation, storage infrastructure and connectivity), there is an optimal allocation for the required infrastructure.

This optimal infrastructure allocation will be the responsibility of an allocation algorithm (or a set of algorithms) that will be defined in WP3.
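One plausible shape for such an allocation algorithm is a greedy first-fit mapping of VNF resource demands onto NFVI hosts. The actual T-NOVA algorithms are defined in WP3; the sketch below only illustrates the kind of mapping problem being solved, with vCPUs as the single resource dimension.

```python
# A sketch of greedy first-fit VNF-to-host allocation (single resource
# dimension: vCPUs). Real allocators would also weigh memory, storage,
# network QoS and SLA constraints.

def first_fit(demands, hosts):
    """demands: {vnf: vcpus}; hosts: {host: free_vcpus}.
    Returns a {vnf: host} placement, or None if infeasible."""
    placement = {}
    free = dict(hosts)
    # Place the largest demands first to reduce fragmentation.
    for vnf, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        host = next((h for h, cap in free.items() if cap >= need), None)
        if host is None:
            return None  # infeasible with this heuristic
        placement[vnf] = host
        free[host] -= need
    return placement

print(first_fit({"vnf-a": 4, "vnf-b": 8}, {"h1": 8, "h2": 6}))
# {'vnf-b': 'h1', 'vnf-a': 'h2'}
```

Note the role of the descending sort: placing vnf-a first would put it on h1 and leave no host able to take vnf-b, so ordering alone changes feasibility — one reason the real allocation problem warrants dedicated algorithms.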

2.2.1.4. Monitoring Process

One of the key aspects of the T-NOVA project is not only the ability to optimally allocate infrastructure for a NS, but also to react, in real time, to the current performance of a subscribed NS, so that the agreed SLA is maintained. To accomplish these two aspects, it is crucial that a meaningful set of infrastructure (computational, storage and connectivity) usage metrics be collected.

NS metrics must be defined together with the SLA(s) to be provided with every NS instantiation.

It is expected that the data to be collected will be significant in volume, with a high frequency of change, so adequate strategies will have to be designed to support collecting large volumes of data.

As such, during the NS lifecycle, the NFVO may monitor the overall operation of a NS with information provided by the VIM and/or by the WICM, if such requirements were captured in the NS deployment template.

Such data may be used to derive usage information for the NFVI resources being consumed by VNF instances or groups of VNF instances. For instance, the process may involve collecting measurements about the number of NFVI resources consumed via NFVI interfaces, and then correlating NFVI usage records to VNF instances.

Beyond the infrastructure usage metrics, sets of NS usage metrics need to be defined upon service composition, in order to allow tracking of the agreed SLA and to determine whether it is being maintained or not.

These metrics are more service-oriented than infrastructure-oriented, and are built on top of the infrastructure usage metrics. For instance, a metric such as "the current number of simultaneous sessions" is something that the infrastructure cannot measure, but the "current maximum network latency" is something available at the infrastructure level, which might make sense at the service level as well. The choice of which metrics to track is made by the Marketplace, at service composition time.
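The distinction drawn above can be sketched as a service-level metric computed on top of per-VNF infrastructure samples, here using the document's own "current maximum network latency" example. The sample data and metric names are illustrative.

```python
# A sketch of deriving a service-level SLA metric from per-VNF
# infrastructure measurements.

infra_samples = {
    "vnf-a": {"network_latency_ms": [3.1, 4.8, 4.2]},
    "vnf-b": {"network_latency_ms": [7.9, 6.5, 8.4]},
}

def current_max_network_latency(samples):
    # Service-level view: the worst latest latency across the NS's VNFs.
    return max(m["network_latency_ms"][-1] for m in samples.values())

def sla_violated(samples, max_latency_ms):
    # Compare the derived service metric against the contracted SLA bound.
    return current_max_network_latency(samples) > max_latency_ms

print(current_max_network_latency(infra_samples))  # 8.4
print(sla_violated(infra_samples, 10.0))           # False
```

A check like `sla_violated` is the point where monitoring data can trigger the automatic operational management (e.g. scaling) mentioned below for NS instances.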


The collection of these measurement metrics may be reported to external entities, e.g. the Customer, the SP or the FP, via the Marketplace, if such requirements were captured in the NS deployment template.

In addition, this information may be compared with additional information included in the on-boarded NS and VNF deployment templates, as well as with policies applicable to the NS, and can be used to trigger automatic operational management of the NS instance, e.g. automatic scaling of VNF instances that are part of the NS.

2.2.1.5. Connectivity Handling

The NFVO has an abstracted view of the network topology and interfaces to the underlying VIMs and WICMs in order to handle connectivity services by performing the management of the NS instances, e.g. create, update, query and delete VNFFGs.

Connectivity management must be handled over the same domains as those indicated for resource handling.

2.2.1.6. Policy Management

Policies are defined by conditions and corresponding actions/procedures, e.g. a scaling policy may state the execution of specific actions/procedures if the required conditions occur during runtime. Different actions/procedures defined by the policy can be mutually exclusive, which implies a process of selecting a particular action/procedure (or set of actions/procedures) to be executed either automatically or manually.

In the context of T-NOVA, once declared, a policy may be bound to one or more NS instances, VNF instances, and NFVI resources. Policy management always implies some degree of evaluation for the NS instances and VNF instances, e.g. in terms of policies related to affinity/anti-affinity, lifecycle operations, geography, regulatory rules, NS topology, etc.

In addition, policy management also refers to the management of rules governing the behaviour of Orchestrator functions, e.g. management of NS or VNF scaling operations, access control, resource management, fault management, etc.

Associated with the policy management terminology is the concept of policy enforcement, i.e. policies are defined by certain entities and are then enforced in other entities, which may in their turn enforce them in additional entities.

In the T-NOVA context, policies may be defined by external entities, e.g. the Customer, the SP or the FP, and are then enforced in the NFVO, via the Marketplace. In its turn, the NFVO may enforce them in the VNFM.

Policy enforcement may be static or on-demand.
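The condition/action model described in this subsection can be sketched as a small evaluation loop: policies bound to an instance are checked against runtime metrics, and one matching action is selected for enforcement, reflecting the mutual exclusivity noted above. Policy names, conditions and the first-match rule are all illustrative assumptions.

```python
# A sketch of condition/action policy evaluation with mutually
# exclusive outcomes (first matching policy wins).

policies = [
    {"name": "scale-out",
     "condition": lambda m: m["cpu_util"] > 0.85,
     "action": "add VNF instance"},
    {"name": "scale-in",
     "condition": lambda m: m["cpu_util"] < 0.20,
     "action": "remove VNF instance"},
]

def enforce(policies, metrics):
    # Evaluate in declared order; the first satisfied condition selects
    # the single action to enforce, making the outcomes exclusive.
    for p in policies:
        if p["condition"](metrics):
            return p["name"], p["action"]
    return None, "no-op"

print(enforce(policies, {"cpu_util": 0.91})[0])  # scale-out
print(enforce(policies, {"cpu_util": 0.50})[1])  # no-op
```

In the T-NOVA flow, such policies would be defined at the Marketplace, enforced in the NFVO, and in turn delegated to the VNFM for VNF-level actions.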

2.2.1.7. Marketplace-specific Interactions

The Marketplace is the T-NOVA layer that interfaces with external entities, e.g. Customers, SPs and FPs. In the T-NOVA global architecture, it interacts with the Orchestration layer through an interface whose requirements are defined in subsection 2.4.

The deployment and behaviour of the Marketplace impose requirements that the Orchestration layer must fulfil in order to offer those external entities an entire set of functionalities, which are defined in D2.41 [5].


The requests made by the Marketplace for those requirements, as well as the corresponding responses from the Orchestrator, are, most of the time, implicit in the current description of the Orchestrator requirements.

However, for some of those requirements that are based on D2.1 [6], e.g. publishing the outcome of the NS instantiation, publishing NS metrics, or reporting usage metrics, it was decided to create a separate group in order to highlight their processing mechanisms.

For instance, the various kinds of metrics described above may be used by business-oriented processes residing in the Marketplace, namely to start and stop tracking of the usage of a NS for billing purposes.

2.2.2. VNFM Requirements Types

The deployment and operational behaviour requirements of each VNF are captured in a deployment template, the VNFD, and stored during the VNF on-boarding process in the VNF catalogue, as part of a VNF Package, for future use. The deployment template describes the attributes and requirements necessary to realise the VNF and captures, in an abstracted manner, the requirements to manage its lifecycle.

The VNFM performs the lifecycle management of a VNF based on the requirements included in this template. As such, the VNF lifecycle constitutes the most relevant type in the VNFM classification of requirements in relation to the procedures taken in this global process.

As with the NFVO, there is a further type that constitutes, in fact, an area of operation that belongs to the VNF lifecycle: the monitoring process. Once again, the reason behind the creation of a separate group for this (sub)type is to emphasise the importance of its role in the Orchestrator's operation.

2.2.2.1. VNF Lifecycle

The VNFM is responsible for the VNF lifecycle management, where lifecycle management refers to the set of functions required to manage the instantiation, maintenance, scaling and termination of a VNF. The VNF lifecycle is defined in deliverable D2.42 [10].

2.2.2.2. Monitoring Process

During the lifecycle of a VNF, the VNF management functions may monitor Key Performance Indicators (KPIs) of a VNF, if such KPIs were captured in the deployment template. The management functions may use this information for scaling operations. Scaling may include changing the configuration of the virtualised resources (scale up, e.g. add CPU; scale down, e.g. remove CPU), adding new virtualised resources (scale out, e.g. add a new VM), or shutting down and removing VM instances (scale in).
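The four scaling directions just defined can be sketched as a KPI-threshold decision the VNFM could take for one VNF. The thresholds and limits here are illustrative assumptions, standing in for values that a VNFD deployment template would capture.

```python
# A sketch of a KPI-driven scaling decision for a single VNF, assuming
# CPU utilisation as the monitored KPI and template-supplied thresholds.

def scaling_decision(cpu_util, vcpus, vm_count,
                     high=0.85, low=0.20, max_vcpus=8):
    if cpu_util > high:
        # Prefer vertical growth while the VM can still grow...
        if vcpus < max_vcpus:
            return "scale up (add CPU)"
        # ...otherwise grow horizontally with a new VM instance.
        return "scale out (add VM)"
    if cpu_util < low:
        # Shrink horizontally first if there is more than one VM.
        if vm_count > 1:
            return "scale in (remove VM)"
        return "scale down (remove CPU)"
    return "steady"

print(scaling_decision(0.92, vcpus=4, vm_count=1))  # scale up (add CPU)
print(scaling_decision(0.92, vcpus=8, vm_count=2))  # scale out (add VM)
print(scaling_decision(0.10, vcpus=4, vm_count=3))  # scale in (remove VM)
```

The up-before-out / in-before-down preference shown here is only one possible strategy; the appropriate order depends on the function the VNF provides, e.g. whether it can share state across instances.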

Thus, every VNF will usually provide its own usage metrics to the VNFM. These will, in general, be specific to the function the VNF provides, although they might be based on the infrastructure on top of which the VNF has been deployed.

The treatment of the information collected during the VNF monitoring process is very similar to that described for the NS process, and may result in reports being sent to external entities, via the Marketplace, and/or trigger automatic operational management of the VNF instance, e.g. automatic scaling.


2.3. Functional Orchestrator Architecture

This subsection describes the Orchestrator reference architecture, including its functional entities as well as its external interfaces.

2.3.1. Reference Architecture

The Orchestrator reference architecture, as well as the interfaces with the external Functional Entities (FEs), is depicted in Figure 2-2. In detail, the Orchestrator interacts with the Marketplace, which is the T-NOVA domain responsible for accounting, SLA management and business functionalities. Besides the Marketplace, the Orchestrator also interfaces with the IVM, and in particular with the VIM, for managing the data centre network/IT infrastructure resources, as well as with the WICM for WAN connectivity management. Finally, the Orchestrator interacts with the VNF itself, which in the T-NOVA scope is located in the IVM domain, to ensure its lifecycle management.

Internally, the T-NOVA Orchestrator consists of two main components and a set of repositories. One of the core elements is the NFVO, acting as the front-end to the Marketplace and orchestrating all the incoming requests towards the other components of the architecture. Further details relating to the NFVO and the associated incoming requests are available in subsection 2.3.2.1. To support the NFVO operation procedures, a set of repositories is identified in order to store the descriptions of the available VNFs and NSs (VNF Catalogue and NS Catalogue), the instantiated VNFs and NSs (NS & VNF Instances), as well as the available resources in the virtualised infrastructure (Infrastructure Resources Catalogue). Further details about the Orchestrator repositories are provided in subsection 2.3.2.3. Finally, the NFVO also interacts with the other core element, the VNF Manager (VNFM), responsible for the VNF-specific lifecycle management procedures, as described in subsection 2.3.2.2.

Figure 2-2: T-NOVA Orchestrator Reference Architecture


2.3.2. Functional Entities

This subsection describes the functional entities of the Orchestrator architecture.

2.3.2.1. Network Function Virtualisation Orchestrator (NFVO)

The main function of the NFVO is to manage the lifecycle of virtualised NSs and its associated procedures. Since NSs are composed of VNFs, PNFs, VLs and VNFFGs, the NFVO is able to decompose each NS into these constituents. Nevertheless, although the NFVO has knowledge of the VNFs that compose the NS, it delegates their lifecycle management to another dedicated FE of the Orchestrator domain, designated as the VNFM.

A description of the main deployment templates that must be taken into account when determining the best connectivity paths to deliver a service is provided below:

• a VNFFGD is a deployment template that describes a topology of the NS, or a portion of the NS, by referencing VNFs and PNFs as well as the VLs used for interconnection. In addition to the VLs, whose descriptor is described below, a VNFFG can reference other information elements in the NS, such as PNFs and VNFs. A VNFFG also contains a Network Forwarding Path (NFP), i.e. an ordered list of Connection Points forming a chain of NFs, along with the policies associated with the list;

• a VLD is a deployment template which describes the resource requirements that are needed for establishing a link between VNFs, PNFs and endpoints of the NS, which could be met by choosing an option among the various links that are available in the NFVI. However, the NFVO must first consult the VNFFG in order to determine the appropriate NFVI to be used, based on functional needs (e.g., dual separate paths for resilience) and other needs (e.g., geography and regulatory requirements).
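To illustrate how these templates reference each other, the sketch below models a VNFFGD and VLD fragment as plain Python data structures. The field names are simplified, hypothetical stand-ins for the normative ETSI descriptor schema, intended only to show the referencing relationship:

```python
# Hypothetical, simplified descriptor fragments (not the normative schema).
vlds = {
    "vld-1": {"connects": ["vnf-firewall", "vnf-dpi"], "bandwidth_mbps": 100},
}

vnffgd = {
    "constituent_vnfs": ["vnf-firewall", "vnf-dpi"],
    "virtual_links": ["vld-1"],
    # Network Forwarding Path: ordered list of Connection Points.
    "nfp": ["cp-fw-in", "cp-fw-out", "cp-dpi-in"],
}

def links_for_graph(vnffgd: dict, vlds: dict) -> list:
    """Resolve the VLDs referenced by a VNFFGD, as the NFVO would when
    selecting connectivity options for the forwarding path."""
    return [vlds[vl_id] for vl_id in vnffgd["virtual_links"]]
```

The point of the sketch is the consultation order described above: the NFVO reads the VNFFGD first, then resolves the referenced VLDs to choose among the available NFVI links.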

In addition to the orchestration of the virtualised service level operations, which allows the abstraction of service specificities from the business/operational level (in this case the T-NOVA Marketplace), the NFVO also manages the virtualised infrastructure resource level operations, as well as the configuration/allocation of transport connections when two or more distinct DCs are involved. Hence, it coordinates resource reservation/allocation/removal for specific NSs and VNFs according to the availability of the virtualised infrastructures, also known as data centres.

To address the two main functionalities mentioned above, the NFVO is architecturally split into two modules, namely the Network Services Orchestrator (NSO) and the Virtualised Resources Orchestrator (VRO), further described below.

Network Service Orchestrator

The NSO is one of the components of the NFVO, with responsibility for managing the NS lifecycle and its procedures. More precisely, the following tasks fall under the responsibility of the NSO:

• NSs and VNFs on-boarding: management of Network Service deployment templates, also known as NS Descriptors and VNF Packages, as well as of the NS instance topology (e.g., create, update, query, delete VNF Forwarding Graphs). On-boarding of a NS includes its registration in the NS Catalogue, thereby ensuring that all the templates (NSDs) are stored (see the NS on-boarding procedure detailed in subsection 5.2.1);

• NS instantiation: trigger instantiation of NS and VNF instances, according to triggers and actions captured in the on-boarded NS and VNF deployment templates. In addition, management of the instantiation of VNFs, in coordination with VNFMs, as well as validation of NFVI resource requests from VNFMs, as those may impact NSs, e.g. the scaling process (see the NS instantiation procedure detailed in subsection 5.2.2);

• NS update: support NS configuration changes of various complexity, such as changing inter-VNF connectivity or the constituent VNFs;

• NS supervision: monitoring and measurement of the NS performance and correlation of the acquired metrics for each service instance. Data is obtained from the IVM layer (performance metrics related to the virtual network links interconnecting the network functions) and from the VNFM (aggregated performance metrics related to the VNF); see the NS supervision procedure detailed in subsection 5.2.3;

• NS scaling: increase or decrease of the NS capacity according to per-instance and per-service auto-scaling policies. NS scaling can imply increasing/decreasing the capacity of a specific VNF, creating/terminating VNF instances and/or increasing/decreasing the number of connectivity links between the network functions;

• NS termination: release of a specific NS instance by removing the associated VNFs and connectivity links, as well as the virtualised infrastructure resources (see the NS termination procedure detailed in subsection 4.2.5).

In addition to these lifecycle-related procedures, the NSO also performs policy management and evaluation for the NS instances and VNF instances, e.g., policies related to scaling.
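The NSO tasks above can be summarised as transitions of a simple NS lifecycle state machine. The sketch below is a minimal illustration; the state names are our own shorthand, not taken from the specification:

```python
# Hypothetical NS lifecycle transitions derived from the NSO task list:
# on-boarding -> instantiation -> update/scaling -> termination.
TRANSITIONS = {
    ("ON_BOARDED", "instantiate"): "RUNNING",
    ("RUNNING", "update"): "RUNNING",
    ("RUNNING", "scale"): "RUNNING",
    ("RUNNING", "terminate"): "TERMINATED",
}

def apply(state: str, operation: str) -> str:
    """Return the next NS state, rejecting operations the lifecycle
    does not allow (e.g. scaling a terminated NS instance)."""
    try:
        return TRANSITIONS[(state, operation)]
    except KeyError:
        raise ValueError(f"operation '{operation}' not allowed in state {state}")
```

Supervision is deliberately omitted from the table: it is a continuous activity over a RUNNING instance rather than a state transition.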

Figure 2-3 illustrates the NSO interactions within the T-NOVA Orchestrator and with the remaining T-NOVA external entities:

Figure 2-3: NS Orchestrator (Internal & External) Interactions

From the external perspective, it interacts with the Marketplace for operational and business management purposes as follows:

• Exchange provisioning information (e.g., requests, modifications/updates, acknowledgements) about the NSs (through the T-Da-Or interface);


• Provide the Orchestrator with information on each NS instance SLA agreement. In turn, the Orchestrator sends SLA-related metrics to the Marketplace (through the T-Sl-Or interface);

• Deliver to the Marketplace usage accounting information with respect to VNFs and NSs (through the T-Ac-Or interface);

• Provide the Orchestrator with information about the NS composition. The Orchestrator delivers to the Marketplace information about the available VNFs (through the T-Br-Or interface).

Internally, the NSO has the following communication points:

• NS Catalogue: collects information about the NSs (NSD), including the set of constituent VNFs, interconnecting network links (VLD) and network topology information (VNFFGD);

• VNF Catalogue: stores the VNFD during the on-boarding procedures;

• NS and VNF Instances: stores information about the status of NS instances;

• Virtualised Resources Orchestrator (VRO): exchanges management actions related to virtualised resources and/or connections, either within the data centre scope (e.g. compute, storage and network) and/or on the transport network segment;

• Virtual Network Function Manager (VNFM): exchanges lifecycle management actions related to the VNFs.

Virtualised Resources Orchestrator

The Virtualised Resources Orchestrator (VRO) is the resource layer management Functional Entity of the NFVO main block. It is responsible for the following actions:

• Coordinate resource reservation/allocation/removal and establish the placement of each VM that composes the VNF (and the NS);

• Interact with the WAN elements for connectivity management actions;

• Validate NFVI resource requests from VNFMs, as those may impact the way the requested resources are allocated within one NFVI-PoP or across multiple NFVI-PoPs. Whether the resource-related requests come directly from the VNFM or from the NFVO is implementation dependent;

• Manage the relationship between the VNF instances and the NFVI resources allocated to those VNF instances;

• Collect usage information of the NFVI resources;

• Collect performance information about the network links interconnecting the VNFs;

• Collect performance information about the virtualised infrastructure resources supporting NSs.

The following virtualised resources are managed by the VRO:

• Compute: virtual processing CPUs and virtual memory;

• Storage: virtual storage;

• Network: virtual links intra/interconnecting VNFs within the DCN.

Figure 2-4 provides further details about the VRO interactions within the T-NOVA Orchestrator and with the remaining T-NOVA external entities:


Figure 2-4: Virtualised Resources Orchestrator (Internal & External) Interactions

From an external perspective, it interacts with the VIM and the WICM for the following purposes:

• Virtualised Infrastructure Manager: enforces resource reservation/allocation/removal and collects monitoring information about the virtual links interconnecting the VNFs, through the T-Or-Vi interface;

• Transport Network Manager: enforces resource/connectivity allocation/removal decisions and collects monitoring information about the transport network elements, through the T-Or-Tm interface.

Internally, the VRO interacts with the following blocks:

• Network Services Orchestrator: exchanges resource reservation/allocation/removal management actions related to a specific NS, for all the constituent VNFs;

• Infrastructure Resources Catalogue: queries and stores information about the virtualised and non-virtualised infrastructure resources;

• Virtual Network Function Manager: exchanges resource reservation/allocation/removal management actions, in case resource management is handled by the VNFM.

2.3.2.2. Virtual Network Function Manager (VNFM)

The VNFM is responsible for the lifecycle management of the VNF. Each VNF instance is assumed to have an associated VNFM. A VNFM may be assigned the management of a single VNF instance, or the management of multiple VNF instances of the same type or of different types. The Orchestrator uses the VNFD to create instances of the VNF it represents, and to manage the lifecycle of those instances. A VNFD has a one-to-one correspondence with a VNF Package, and it fully describes the attributes and requirements necessary to realise such a VNF. NFVI resources are assigned to a VNF based on the requirements captured in the VNFD (containing resource allocation criteria, among others), but also taking into consideration specific requirements accompanying the request for instantiation.

The following management procedures are within the scope of the VNFM:

• Instantiate: create a VNF on the virtualised infrastructure using the VNF on-boarding descriptor, as well as the VNF feasibility checking procedure (see the VNF instantiation procedure detailed in subsection 4.1.2);


• Configure: configure the instantiated VNF with the information required to start the VNF. The request may already include some customer-specific attributes/parameters;

• Monitor: collect and correlate monitoring information for each instance of the VNF. The collected information is obtained from the IVM layer (virtualised infrastructure performance information) and from the VNF (service-specific performance information); see the VNF monitoring procedure detailed in subsection 4.1.3;

• Scale: increase or decrease the VNF capacity by adding/removing VMs (out/in horizontal scaling); see the VNF scale-out procedure detailed in subsection 4.1.4;

• Update: modify configuration parameters;

• Upgrade: change the software supporting the VNF;

• Terminate: release the infrastructure resources allocated to the VNF (see the VNF termination procedure detailed in subsection 4.1.5).
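Taken together, these procedures define the management surface of a VNFM. A skeleton of such an interface might look as follows; this is an illustrative sketch only, where the method names mirror the procedure list above and everything else (signatures, parameter types) is a hypothetical assumption:

```python
from abc import ABC, abstractmethod

class VNFManager(ABC):
    """Abstract VNFM management surface, mirroring the procedure list above."""

    @abstractmethod
    def instantiate(self, vnfd: dict) -> str: ...        # returns an instance id
    @abstractmethod
    def configure(self, instance_id: str, params: dict) -> None: ...
    @abstractmethod
    def monitor(self, instance_id: str) -> dict: ...     # correlated metrics
    @abstractmethod
    def scale(self, instance_id: str, direction: str) -> None: ...  # "out"/"in"
    @abstractmethod
    def update(self, instance_id: str, params: dict) -> None: ...
    @abstractmethod
    def upgrade(self, instance_id: str, software_version: str) -> None: ...
    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...
```

A concrete VNFM implementation would realise these operations against the T-Ve-Vnfm and T-Vi-Vnfm interfaces described below.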

Figure 2-5 provides further details on the VNFM interactions within the T-NOVA Orchestrator and with the remaining T-NOVA external entities:

Figure 2-5: VNF Manager (Internal & External) Interactions

From the external perspective, it interacts with the VNF and with the VIM for the following purposes:

• Virtual Network Function (VNF): configures VNF-specific information (through the T-Ve-Vnfm interface) and receives VNF-related monitoring information;

• Virtualised Infrastructure Manager (VIM): collects monitoring information about the virtualised infrastructure resources allocated to the VNF (through the T-Vi-Vnfm interface).

Internally, the VNFM interacts with the following components:

• Network Services Orchestrator (NSO): receives VNF instantiation requests for a specific NS and provides VNF monitoring information;

• VNF Catalogue: collects information about the internal composition of VNFs (VNFD), including the VNF Components (VNFCs), software images (VMs) and management scripts;

• Virtualised Resources Orchestrator (VRO): exchanges resource reservation/allocation/removal management actions, in cases where resource management is handled by the VNFM.


2.3.2.3. Repositories and Catalogues

To support the T-NOVA Orchestrator lifecycle management operations, the following catalogues and repositories are defined:

• NSs Catalogue (NS Catalogue);

• VNFs Catalogue (VNF Catalogue);

• NSs and VNFs Instances Repository;

• Infrastructure Resources Repository.

NS Catalogue

Represents the repository of all the on-boarded NSs, in order to support NS lifecycle management:

• NS Descriptor (NSD): contains the service description template, including SLAs, deployment flavours, references to the virtual links (VLDs) and the constituent VNFs (VNFFG);

• Virtual Link Descriptor (VLD): contains the description of the virtual network links that compose the service (interconnecting the VNFs);

• VNF Forwarding Graph Descriptor (VNFFGD): contains the NS constituent VNFs, as well as their deployment in terms of network connectivity.

VNF Catalogue

Represents the repository of all the on-boarded VNFs, in order to support their lifecycle management:

• VNF Descriptor (VNFD): contains the VNF description template, including its internal decomposition into VNFCs, deployment flavours and references to the VLDs;

• Software images of the VMs located in the IVM layer.

NS and VNF Instances

Represents the repository of all the instantiated NSs and VNFs, which can be created/updated/released during the lifecycle management operations.

Infrastructure Resources

Represents the repository of available, reserved and allocated NFVI-PoP resources, also including those related to the WAN segment.

2.3.3. External Interfaces

In this section the external interfaces of the Orchestrator are described. However, within the perspective of the T-NOVA architecture, it is important to understand the context in which the term interface is used, as well as its relationship to reference points, a common architectural locus used within the networking domain.

In network terms, a reference point is an abstract point in a model of a network or protocol. This reference point essentially serves to partition functions or configurations, and so assists in the description of a network model, as well as serving as a point of interoperability between different parts of the network [11]. In a networking context, an interface may or may not be associated with any given reference point. An interface typically represents a protocol-level connection which may or may not be mapped to a reference point.

Maintaining this strict delimitation between the definitions of reference points and interfaces can be challenging in the context of NFV and SDN, given the hybridisation of networking and IT technologies. In the network domain this framework is strictly defined, and it is therefore normal practice to retain the use of the term reference point, while in the IT domain there is a more flexible demarcation between technologies, leading to a degree of hybridisation. As a consequence, in the IT domain the term interface is used in a more flexible manner, encompassing reference points as well.

While, strictly speaking, the separation of the terms should be technically maintained, the approach adopted in this deliverable is to utilise a broader and more flexible definition of interfaces and reference points, given the expected one-to-one mapping of reference points and interfaces in the context of the proposed T-NOVA architecture. Additionally, interfaces in the T-NOVA system may not necessarily be tied specifically to a protocol, but rather act as points of information exchange through APIs. Hence, within the context of this deliverable, interfaces are envisioned to encompass the architectural characteristics of both interfaces and reference points, given the fusion of the IT and networking domains.

Having clarified the use of the term, the description of the Orchestrator's external interfaces begins with a very short note on security, which is a common area that affects all the interfaces. It is followed by an introduction to the interface requirements, which are presented in Annex A.2 in a tabular format.

Regarding this common issue, and considering the most generic scenarios, in which the roles described in D2.1 [6] are played by distinct entities, all the external interfaces described in this section must support at least a minimum degree of security. The decision on the exact degree of security for each implemented interface will be taken later in the project's timeline.

2.3.3.1. Interface between the Orchestrator and the Network Function Store

The interface between the Orchestrator and the Network Function Store (NF Store) serves two purposes:

• For the NF Store, to notify the Orchestrator about new, updated and withdrawn VNFs;

• For the Orchestrator, to retrieve from the NF Store, and store in the VNF Catalogue, the VNFD and the VM images that need to be instantiated to support that VNF.

This "two-phase" interaction between the Orchestrator and the NF Store, instead of a single one in which the NF Store would pass the Orchestrator the VNF Descriptor and VM images, allows for the optimisation of resources on both sides of the interface. On the NF Store side this is just a notification to the Orchestrator, while on the Orchestrator's side the download of the VM images is only carried out when the VNF is instantiated, preventing unnecessary use of resources. Uploading of the VNFD to the VNF Catalogue is executed at the start of the VNF on-boarding process.
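The two-phase pattern can be sketched as follows: on notification the Orchestrator stores only the VNFD, and image download is deferred until the first instantiation. The class and the NF Store client API below are hypothetical illustrations, not the T-NOVA implementation:

```python
class OrchestratorCatalogue:
    """Illustrates the two-phase NF Store interaction: VNFD stored at
    on-boarding time, VM images downloaded lazily at instantiation."""

    def __init__(self, nf_store):
        self.nf_store = nf_store      # hypothetical NF Store client
        self.vnf_catalogue = {}       # vnf_id -> {"vnfd": ..., "images": ...}

    def on_vnf_notification(self, vnf_id):
        # Phase 1: on-boarding -- fetch and store only the descriptor.
        vnfd = self.nf_store.get_vnfd(vnf_id)
        self.vnf_catalogue[vnf_id] = {"vnfd": vnfd, "images": None}

    def instantiate(self, vnf_id):
        # Phase 2: download VM images only when the VNF is instantiated.
        entry = self.vnf_catalogue[vnf_id]
        if entry["images"] is None:
            entry["images"] = self.nf_store.get_images(vnf_id)
        return entry
```

The design choice this illustrates is exactly the one argued above: notifications are cheap for the NF Store, and the Orchestrator avoids downloading large VM images for VNFs that are never instantiated.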

2.3.3.2. Interface between the Orchestrator and the Marketplace

The interface between the Orchestrator and the Marketplace serves the following purposes:


• Provide available VNFs: involves SP browsing and selection of VNFs, as well as composition of market services by the Marketplace, followed by a request to the Orchestrator;

• Publish a new network service: related to the request for the storage of information about a new service by the Marketplace, included in the provision of the NSD (see subsection 2.2). This process is also called NS on-boarding by the Orchestrator FEs;

• Request for a Network Service: after a Customer's or an SP's subscription to a service, a subsequent request from the Marketplace is generated, which includes in the NSD all the VNFs the NS to be deployed needs, as well as, in the VNFD, all the VMs those VNFs need, the available infrastructure and its current usage;

• Change the configuration of a deployed network service, upon a request from the Marketplace;

• Provide network service state transitions: notification provided by the Orchestrator to the Marketplace;

• Provide network service monitoring data: notification provided by the Orchestrator to the Marketplace;

• Terminate a provisioned network service: upon a request from the Marketplace, when there is an explicit solicitation.

2.3.3.3. Interface between the Orchestrator and the VIM

The interface between the Orchestrator and the VIM serves the following purposes:

• Allocate/release/update resources: upon a request from the Orchestrator to (re)instantiate, update the configuration of, or release a resource (VM or connection within the same DC);

• Reserve/release resources: upon an expected future need for the Orchestrator to instantiate or release a reserved resource (VM or connection in the same DC). This requirement makes sense in scenarios where allocating resources from scratch is complex or too time-consuming for the purpose in mind. Reserved resources may have lower prices than effectively allocated ones and become faster to allocate when the time comes;

• Add/update/delete SW image: whenever the addition, update or removal of a VM image is needed in the process of allocating, updating or removing a VNF/VNFC instance;

• Retrieve infrastructure usage data to NSO: information provided by the VIM to the Orchestrator (NSO FE), so that optimal allocation of NS instances is possible and an adequate level of metrics can be reported to the Marketplace, if allowed by information included in the NSD;

• Retrieve infrastructure usage data to VNFM: information provided by the VIM to the Orchestrator (VNFM FE), so that optimal allocation of VNF instances is possible and an adequate level of metrics can be reported, via the NSO, to the Marketplace, if allowed by information included in the VNFD;

• Retrieve infrastructure resources metadata to VRO: information provided by the VIM to the Orchestrator (VRO FE), so that optimal allocation of NS instances is possible, according to the characteristics of the supporting infrastructure (e.g., the availability of specialised components, such as Graphical Processing Units (GPUs) or Digital Signal Processors (DSPs), as well as the maximum number of vCPUs or gigabytes of RAM, which will influence the allocation algorithm when determining the most appropriate resources);

• Manage VM state: information provided by the VIM to the Orchestrator (VNFM FE), so that atomic operations on the allocated VMs, such as redeployment/withdrawal of an entire VNF, are feasible (depending on the implementation approach, these operations can also be performed by the IVM layer).

2.3.3.4. Interface between the Orchestrator and the Transport Network Management

The interface between the Orchestrator and the Transport Network Management serves the following purposes:

• Allocate/release/update transport connection: upon a request from the Orchestrator to (re)instantiate, update the configuration of, or release a transport connection (between two distinct DCs);

• Reserve/release transport connection: upon an expected future need for the Orchestrator to instantiate or release a transport connection (between two distinct DCs). This requirement makes sense in scenarios where allocating connections from scratch is complex or too time-consuming. Reserved connections may have lower prices than effectively allocated ones and become faster to allocate when the time comes;

• Retrieve transport connection usage data to NSO: information provided by the WICM to the Orchestrator (NSO FE), so that optimal allocation of transport connections between two or more distinct DCs is possible and an adequate level of metrics can be reported, together with VNF and NS level metrics, to the Marketplace;

• Retrieve transport connection metadata: information provided by the WICM to the Orchestrator (VRO FE), such that the optimal allocation of transport connections between two or more distinct DCs is made possible, according to the characteristics of the supporting infrastructure, e.g., maximum number of vLinks allowed, maximum bandwidth, etc.;

• Manage transport connection state: information provided by the WICM to the Orchestrator (NSO FE), so that atomic operations, such as redeployment/withdrawal of an entire VNF across different DCs, are feasible (depending on the implementation approach, these operations can also be executed by the IVM layer).

2.3.3.5. Interface between the Orchestrator and the VNF

The interface between the Orchestrator and the VNF serves the following purposes:

• Instantiate/terminate VNF: sent by the VNFM as a request, whenever an instance of a NS of which the VNF is a component is to be launched or removed. Removal of a VNF instance can only be done if there is no NS instance using that VNF;

• Retrieve VNF instance run-time information: sent by the VNFM, so that VNF SLA metrics can be checked and the SLA can be fulfilled;

• Configure a VNF: sent by the VNFM, so that open configuration parameters can be filled in later or changed after the VNF instance is already running;


• Manage VNF state: sent by the VNFM, so that the Orchestrator is able to start, stop and suspend already running VNF instances;

• Scale VNF: sent by the VNFM, so that VNF scaling is feasible. All the VNF scaling information is available in the VNFD. Virtualised resources are available through the VRO.


3. THE T-NOVA IVM LAYER

3.1. INTRODUCTION

T-NOVA's Infrastructure Virtualisation and Management (IVM) layer provides the requisite hosting and execution environment for VNFs. The IVM incorporates a number of key concepts that influence the associated requirements and architecture for the layer. Firstly, the IVM supports separation between control and data planes, and network programmability. The T-NOVA architecture leverages SDN for designing, dimensioning and optimising control- and data-plane operations separately, allowing capabilities from the underlying hardware to be exposed independently. Secondly, the IVM is based around the use of clusters of SHV computing nodes in cloud computing configurations, to support the instantiation of software components in the form of VMs for NFV support, offering resource isolation, optimisation and elasticity. This configuration should support automated deployment of VNFs from the T-NOVA Marketplace and dynamic expansion/resizing of VMs as required by SLAs. Building on physical IT and network resource domains, the IVM provides full abstraction of these resources to VNFs. Finally, the IVM must expose the necessary external and internal interfaces to support appropriate integration. The external interfaces provide connectivity with the T-NOVA Orchestration layer, firstly in order to execute requests from the Orchestrator, and secondly to provide information, such as performance metrics relating to the infrastructure and the hosted VNFs, that enables the Orchestrator to make effective management decisions. The internal interfaces provide connectivity between the internal domains of the IVM, to ensure that requests for the creation, deployment, management and termination of VNF services and their host VMs can be executed appropriately among the constituent infrastructure and control components.

From an architectural perspective, the IVM is comprised of the NFVI, VIM and WICM functional entities. The NFVI in turn is composed of the Compute, Hypervisor and Network Domains. The VIM comprises compute, hypervisor and network control and management capabilities, while the WICM works as a single FE. All the FEs within the IVM implement northbound and southbound interfaces, firstly to provide management, control and monitoring of the composite infrastructure, both physical and virtualised, and secondly to provide the key integration capabilities within the overall T-NOVA system architecture.

The following sections describe the key objectives and characteristics of the IVM and its constituent components, along with their requirements. These requirements were then utilised, together with T-NOVA D2.1 [6] and D2.22 [4], to define the architecture of the IVM in a manner that addresses these requirements and the overall goals of T-NOVA. The architecture is presented as an overall integrated architecture, together with detailed descriptions of the architecture FEs and interfaces of the constituent domains.

3.2. OBJECTIVES AND CHARACTERISTICS OF THE T-NOVA IVM LAYER

The T-NOVA IVM is intended to manage a mixture of physical and virtual nodes and will be used to develop, implement and showcase T-NOVA's services. The IVM will be fully integrated with the T-NOVA Orchestrator, to ensure that requirements for the deployment and lifecycle management of T-NOVA VNF services can be carried out in an appropriate and effective manner. The IVM should be sufficiently flexible to support a variety of use cases beyond those explicitly identified in T-NOVA (see D2.1 [6]). As mentioned previously, infrastructure virtualisation plays a key role in achieving this vision in T-NOVA. Virtualisation and management of the virtualised resources extends beyond compute and storage to include the network infrastructure, in order to fully exploit the capabilities of the T-NOVA architecture. Virtualisation of the DC network infrastructure allows decoupling of the control functions from the physical devices they control. In this regard, T-NOVA implements an SDN control plane for designing, dimensioning and optimising the control- and data-plane operations separately, allowing capabilities from the underlying hardware to be exposed independently.

In summary, the key objectives for the T-NOVA IVM are as follows:

• Support for the separation of control and data planes, and network programmability, at least at critical locations within the network, such as the network access/borders;

• Utilisation of commodity computing nodes in cloud configurations to support the instantiation of software components in the form of VMs, containers or unikernels for NFV support, offering resource isolation, optimisation and elasticity;

• Use of L2 Ethernet switched networks (subnets) to provide physical network connectivity between servers;

• Each server supports virtualisation and hosts a number of VMs (virtual appliances) belonging to their respective vNets. Virtual switch instances or physical SDN-capable switches handle network connectivity among the VMs, either on the same server or among servers co-located in the same DC;

• Interconnection of L2 subnets inside and outside the DC's boundaries via an L3 network (IP routers). This inter-data-centre connectivity is provisioned through appropriate WAN ingress and egress points;

• Virtualisation of compute and network resources allows the T-NOVA system to dynamically scale out VMs. This accommodates sudden spikes in workload traffic; the instantiation of network elements as VMs into clusters of nodes facilitates horizontal scaling¹ (hosting of many VM instances in the same cluster, according to function requirements and traffic load) (see T-NOVA requirements/use cases – D2.1 [6]).

3.3. T-NOVA IVM LAYER REQUIREMENTS

The requirements capture process focused on identifying the desired behaviours of the IVM. The requirements identified focus on the entities within the IVM, the functions that are performed to change states or object characteristics, the monitoring of state, and the key interactions with the T-NOVA Orchestration layer. None of these requirements specifies how the system will be implemented. Implementation details are left to the appropriate tasks in WP3/4, as the implementation-specific descriptions are not considered to be requirements. The goal of the requirements was to develop an understanding of what the IVM needs, how it interacts with the Orchestration layer, and its relationship to the overall T-NOVA architecture described in D2.22 [4]. Additionally, the use cases included in D2.1 [6] were also considered and cross-referenced with IVM requirements where appropriate.

The initial phase of eliciting requirements included:

• Reviewing available documentation, including drafts and final versions of D2.1 and ETSI's specification of the Network Functions Virtualisation (NFV) Architectural Framework;

¹ The T-NOVA infrastructure is cloud-based, which typically only supports horizontal scaling.


• Reviewing the high-level T-NOVA architecture that was developed by Task 2.2, to gather information on how the users and service providers will perform their tasks, such as VNF deployment, scale-out etc., and to better understand the key characteristics of the T-NOVA system that will be required to realise user goals, including those at a system level.

The adopted approach was generally in line with the Institute of Electrical and Electronics Engineers (IEEE) guidelines for requirements specification. A similar process was used in Tasks 2.1 and 2.2. Requirements were primarily anchored to the existing T-NOVA use cases and the interactions with the Orchestrator, both in terms of the actions and requests that the Orchestrator would expect the IVM to execute. Additionally, the data/information that is required by the Orchestrator to successfully deploy and manage VNF services was considered. The identified requirements were primarily functional in nature, since they were related to the behaviour that the IVM is expected to exhibit under specific conditions. In addition, ETSI's NFV Virtualisation Framework requirements were also considered, in order to ensure appropriate scope and coverage for the requirements that have been specified. The following are the key categories of requirements that were considered:

• Portability
• Performance
• Elasticity
• Resiliency
• Security
• Service Continuity
• Service Assurance
• Operations and Management

Using a systems engineering approach, the high-level architecture for the IVM was previously described in [4]. Each component of the overall system was specified as a high-level functional block, and the interactions between the functional blocks are specified as interfaces. This approach identified the following functional blocks:

• Virtualised Infrastructure Management (VIM),
• WAN Infrastructure Connection Manager (WICM),
• Infrastructure Elements, consisting of Computing, Hypervisor and Networking.

The requirements presented in the following section are related to these functional blocks and were developed using the previously described methodology. These requirements were used as a foundational input into the development of the overall IVM architecture and its constituent functional blocks, which is presented in subsection 3.4.

A detailed specification of the requirements for each module within the scope of the T-NOVA IVM architecture can be found in Annex B. A total of 66 final requirements were identified and documented relating to the VIM, the NFVI (compute, hypervisor, DC network) and the WICM. It should be noted that requirements that relate to basic expected behaviours of the various domain components have been excluded, in order to focus on requirements that are specifically needed by the T-NOVA system. Analysis of these requirements has identified the following conclusions for each architectural module.

3.4. Virtual Infrastructure Manager (VIM)

The VIM is required to manage both the IT (compute and hypervisor domains) and network resources by controlling the abstractions provided by the Hypervisor and Infrastructure Network domains. It also implements mechanisms to efficiently utilise the available hardware resources in order to meet the SLAs of NSs. The VIM is also required to play a key role in VNF lifecycle management. Additionally, the VIM is required to collect infrastructure utilisation/performance data and to make this data available to the Orchestrator in order to generate usage/performance statistics, as well as triggering scaling


of VNFs if necessary. The specifics of how the metrics are provisioned and processed at both the VIM and Orchestrator layers can vary and will typically be implementation specific. The details of the T-NOVA implementation are being determined in WP3/4. To accomplish these goals the VIM needs the following capabilities:

• The Network Control capability in the VIM exploits the presence of SDN features to manage the infrastructure network domain within an NFVI-PoP;

• Hardware abstraction in the Compute domain for efficient management of resources; however, the abstraction process should ensure that platform-specific information relevant to the performance of VNFs is available for resource mapping decisions;

• Virtual resource management in the Hypervisor domain to provide appropriate control and management of VMs;

• Strong integration between the three sub-domains above through appropriate interfaces;

• Integration with the Orchestrator via well-defined interfaces to provide infrastructure-related data to the Orchestrator and to receive management and control requests from the Orchestrator for execution by the VIM.
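The capabilities listed above can be sketched as a minimal facade that groups the three sub-domains behind an Orchestrator-facing surface. This is purely illustrative: all class and method names (`VimFacade`, `create_vnet`, `register_node`, etc.) are assumptions for the sketch, not part of the T-NOVA specification.

```python
# Illustrative sketch of a VIM facade exposing the capabilities listed above.
# All names are hypothetical; they are not defined by the T-NOVA specification.

class VimFacade:
    """Toy VIM aggregating the network, compute and hypervisor sub-domains."""

    def __init__(self):
        self.metrics = {}   # infrastructure data exposed to the Orchestrator
        self.vms = {}       # virtual resources managed via the hypervisor domain

    # Network Control capability: manage the NFVI-PoP network via SDN.
    def create_vnet(self, vnet_id, vlan_id):
        self.metrics.setdefault("vnets", []).append((vnet_id, vlan_id))

    # Compute abstraction: retain platform-specific data for mapping decisions.
    def register_node(self, node_id, features):
        self.metrics.setdefault("nodes", {})[node_id] = features

    # Hypervisor domain: virtual resource management.
    def start_vm(self, vm_id, node_id):
        self.vms[vm_id] = {"node": node_id, "state": "running"}

    # Orchestrator-facing interface: expose infrastructure-related data.
    def report(self):
        return {"metrics": self.metrics, "vms": self.vms}

vim = VimFacade()
vim.register_node("node-1", {"sr_iov": True})
vim.create_vnet("vnet-a", vlan_id=101)
vim.start_vm("vm-1", "node-1")
```

The point of the sketch is the grouping, not the logic: each sub-domain keeps its own state, while a single `report()` surface serves the Orchestrator.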

3.4.1. WAN Infrastructure Connection Manager

The WICM is expected to provide the link between WAN connectivity services and the NFVI hosting VNFs, including connectivity to NSs allocated in more than one NFVI-PoP. Connectivity should take the form of VLAN-to-WAN connections through ingress and egress points at each NFVI-PoP involved in the NS. This connectivity has to be provided in a configurable manner (i.e. it supports a high level of customisation). Moreover, to set up this connectivity, cooperation between the WICM and the NFVI Network domain is needed in order to allocate the traffic over the inter-DC and WAN networks in an appropriate manner.
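The VLAN-to-WAN connectivity described above can be sketched as a per-PoP mapping. The record layout, function name and PoP identifiers below are assumptions made for illustration only; they do not reflect the actual WICM data model.

```python
# Hypothetical sketch of the WICM's VLAN-to-WAN mapping: each NFVI-PoP
# involved in a network service gets an ingress and egress attachment point
# tagged with a customer VLAN. All field names are illustrative assumptions.

def build_wan_connectivity(ns_id, pops, vlan_id):
    """Return one connection record per NFVI-PoP involved in the NS."""
    return [
        {
            "ns": ns_id,
            "pop": pop,
            "vlan": vlan_id,
            "ingress": f"{pop}-gw-in",   # ingress point at this NFVI-PoP
            "egress": f"{pop}-gw-out",   # egress point at this NFVI-PoP
        }
        for pop in pops
    ]

# An NS spanning two NFVI-PoPs yields one VLAN-to-WAN record per PoP.
links = build_wan_connectivity("ns-42", ["pop-a", "pop-b"], vlan_id=210)
```

The design choice mirrored here is that the WICM configures attachment points per PoP, while the NFVI Network domain handles traffic inside each PoP.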

3.4.2. NFVI Compute

The NFVI Compute domain should be able to provide an appropriate performance level for the VNFs being deployed, while ensuring optimal utilisation of the available compute resources. Moreover, the compute nodes and the hypervisor should work in an integrated and performant manner. The compute domain should collect metrics on the performance of the physical resources and make them available at the VIM level for subsequent exposure to the Orchestration layer. Finally, the T-NOVA Compute domain should have the capability, if required by a network service, to support heterogeneous compute resources, such as Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Many Integrated Core (MIC) processors, etc.

3.4.3. NFVI Hypervisor

The NFVI Hypervisor domain should be able to implement hardware resource abstraction and virtual resource lifecycle management mechanisms, which are coordinated by the Orchestrator via the VIM, and to provide monitoring information to the VIM while having minimal impact on the VNF workload performance. Additional details on the steps involved in the collection, processing and utilisation of metrics in the T-NOVA system can be found in subsection 4.1.3.


3.4.4. NFVI DC Network

The NFVI DC Network domain should implement an SDN approach to provide network virtualisation capabilities inside an NFVI-PoP (creation of multiple distinct domains over one single physical network using VLANs) and network programmability through the separation of the Control Plane (CP) and Data Plane (DP). Moreover, it should support tunnelling protocols for the transport of L2 packets over L3 networks, to assist the WICM in setting up the communication between different NFVI-PoPs. It should also be able to gather performance data and send it to the VIM Network Control module.

3.5. T-NOVA IVM Architecture

As mentioned above, the T-NOVA IVM layer comprises three key architectural components, namely the NFVI, the VIM and the WICM.

The high-level architecture, which has previously been described in D2.22 [4], was designed to align with the ETSI MANO architecture, featuring components corresponding to the NFVI and the VIM. However, the addition of the WICM is a T-NOVA specific feature.

The approach adopted in the design and elucidation of the IVM focused on the functional characteristics of the overall IVM architecture and its sub-domains. Careful consideration was given to decoupling the functional characteristics from implementation-oriented designs. This allowed us to focus on what the IVM needs to do and to avoid the inclusion of implementation-oriented functionality. A good example of where this approach generated challenges was the VIM architecture, where there was a tendency to gravitate towards technology solutions as a means to easily encapsulate functional needs. However, careful consideration of the key inputs was important in fully decoupling functional needs from implementation details, ensuring that the T-NOVA IVM architecture remains technology-agnostic but at the same time provides appropriate guidance and structure to the activities in WP3/4.

The key inputs that were considered during the architecture design process were the following:

• D2.1 (Use Cases and Requirements) [6],
• D2.22 (Overall System Architecture and Interfaces) [4],
• DGS NFV-INF 001 v1.1.1 - Infrastructure Overview [12],
• DGS NFV-INF 003 v1.1.1 - Architecture of the Compute Domain [13],
• DGS NFV-INF 004 v1.1.1 - Architecture of the Hypervisor Domain [14],
• DGS NFV-INF 005 v1.1.1 - Infrastructure Network Domain [15],
• DGS NFV-INF 007 v1.1.1 - Interfaces and Abstractions [16],
• DGS NFV-MAN 001 v1.1.1 - Management and Orchestration [9],
• DGS NFV-REL 001 v1.1.1 - Resiliency Requirements [17],
• DGS NFV-SWA 001 v1.1.1 - VNF Architecture [18].

The IVM architecture has been defined in accordance with a systems design process, which was used to identify the components, modules, interfaces and data necessary for the IVM in order to satisfy the requirements outlined in the previous subsection and those described in


D2.1 [6] and D2.22 [4]. Figure 3-1 shows the overall architecture of the IVM, as discussed so far.

3.5.1. External Interfaces

The key external interfaces for the IVM are outlined in Table 3-1. These interfaces primarily deal with the connection between the IVM and the T-NOVA Orchestrator; however, there is also an external interface between the WICM and the transport network.

The interfaces between the Orchestrator and the VIM support a number of key functions within T-NOVA. The functions supported by the interfaces can be categorised as either management or control. As shown in the IVM architecture in Figure 3-1, two specific interfaces have been identified, mapping to the interfaces identified in the ETSI MANO architecture.

Figure 3-1: T-NOVA Infrastructure Virtualisation and Management (IVM) high-level architecture

The first interface is the VNFM – VIM Interface (T-Vi-Vnfm), which is responsible for the exchange of infrastructure monitoring information, either through explicit request by the Orchestrator or through periodic reporting initiated by the VIM. The types of data exchanged over this interface include detailed information on the status, performance and utilisation of infrastructural resources (such as CPU, storage, memory, etc.). The data will also encompass networking information relating to a specific VNF, such as NIC-level network traffic from the hosting VM, or inter-VM network traffic if a VNF is deployed across more than one VM.

Finally, VNF performance data will also be exchanged over this interface. Collectively, the data will be used by the VNF Manager within the T-NOVA Orchestrator to track VNF service performance by comparison with specific KPIs, in order to ensure SLA compliance.
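As a sketch, a periodic report over T-Vi-Vnfm might bundle resource, network and VNF-level data as below. The field names and JSON framing are assumptions for illustration; the actual T-NOVA wire format is defined by the implementation in WP3/4.

```python
import json
import time

# Illustrative T-Vi-Vnfm monitoring report: infrastructure status/utilisation
# plus per-VM network counters for one VNF. Field names are assumptions, not
# the T-NOVA wire format.

def build_vnfm_report(vnf_id, vm_stats):
    """Bundle per-VM samples into one report for the VNF Manager."""
    return {
        "vnf_id": vnf_id,
        "timestamp": int(time.time()),
        "resources": {vm: s["cpu_util"] for vm, s in vm_stats.items()},
        "network": {vm: s["nic_rx_bytes"] for vm, s in vm_stats.items()},
    }

report = build_vnfm_report(
    "vnf-7",
    {"vm-1": {"cpu_util": 0.42, "nic_rx_bytes": 10240}},
)
payload = json.dumps(report)  # e.g. sent periodically by the VIM
```

Either party can drive the exchange: the Orchestrator would request such a payload explicitly, or the VIM would emit it on a timer.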


The second interface identified is the NFV Orchestrator – VIM interface (T-Or-Vi). This interface is responsible for handling requests from the NFV Orchestrator with respect to the full lifecycle of an NS. Typical examples of requests sent over this interface would include parts of the on-boarding, scaling and termination procedures of an NS. This interface will be used by the NFV Orchestrator to send resource-related commands/information to the VIM, such as resource reservation/allocation or configuration definitions of VMs (e.g. HEAT templates in an OpenStack cloud environment, or network requirements such as the specification of the interconnections between VNF instances, i.e. the network topology).

Additionally, this interface will also be utilised to exchange specific types of monitoring information, such as data related to the network connections between NS instances, either within a data centre or between data centres that are physically dispersed. This interface will also carry detailed information on the status, performance and utilisation of infrastructural resources, networking information related to a specific VNF, as well as VNF performance data. This interface is also used by the VIM to report back to the NFV Orchestrator the outcome of all received requests.
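The request/outcome pattern on T-Or-Vi can be sketched as a simple dispatcher: the VIM executes a lifecycle operation and reports the result back. Operation names, the request shape and the outcome format are all illustrative assumptions, not the T-NOVA protocol.

```python
# Toy dispatcher for T-Or-Vi style requests: the VIM executes lifecycle
# operations requested by the NFV Orchestrator and reports each outcome back.
# Operation names and message formats are illustrative assumptions.

def handle_or_vi_request(request, inventory):
    """Apply one Orchestrator request to the VIM's resource inventory."""
    op, ns = request["op"], request["ns_id"]
    if op == "reserve":
        inventory[ns] = {"state": "reserved", "resources": request["resources"]}
    elif op == "terminate" and ns in inventory:
        del inventory[ns]
    else:
        # Unknown operation or unknown NS: report the failure upstream.
        return {"ns_id": ns, "outcome": "error", "reason": "unknown op or NS"}
    return {"ns_id": ns, "outcome": "success"}

inventory = {}
ack = handle_or_vi_request(
    {"op": "reserve", "ns_id": "ns-1", "resources": {"vcpus": 4}}, inventory
)
```

The explicit acknowledgement mirrors the text above: the VIM reports the outcome of every received request, success or failure, back to the NFV Orchestrator.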

The WICM provides management capabilities for network connectivity between NFVI-PoPs. The NFV Orchestrator – WAN Infrastructure Connection Manager (T-Or-Tm) interface supports requests from the NFV Orchestrator to provide connectivity to either SDN-controlled or non-SDN-controlled transport networks (such as IP or MPLS based networks), typically for inter-DC connectivity (MAN or WAN). These networks are non-virtualised in nature.

The WICM Interface – External networks (T-Tm-Tr) is the explicit network connection to the transport network (WAN). The implementation of this interface will vary based on the protocol the network is using. More than one interface may also be implemented if connectivity to different types of transport networks is required.

Table 3.1: External Interfaces of the T-NOVA IVM

• Virtual Network Function Management – VIM Interface (T-NOVA reference T-Vi-Vnfm; ETSI ISG NFV framework reference point Vi-Vnfm; Management Interface): This interface is responsible for the exchange of infrastructure monitoring information, either through explicit request by the Orchestrator or through periodic reporting initiated by the VIM. The types of data exchanged over this interface include the status, performance and utilisation of infrastructural resources.

• NFV Orchestrator – VIM Interface (T-NOVA reference T-Or-Vi; ETSI ISG NFV framework reference point Or-Vi; Orchestration Interface): This interface allows the NFV Orchestrator to request/reserve and configure resources via the VIM, and allows the VIM to report the characteristics, availability and status of infrastructure resources.

• NFV Orchestrator – Wide Area Network Interface (T-NOVA reference T-Or-Tm; no corresponding ETSI reference point; Orchestration Interface): This interfaces the WICM with the NFV Orchestrator and is used to manage the set-up, teardown and monitoring of connections in transport networks.

• Wide Area Network Interface – External networks (T-NOVA reference T-Tm-Tr; ETSI ISG NFV framework reference point Ex-Nf; Traffic Interface): This interfaces the WICM with existing Wide Area Networks (SDN-enabled or non-SDN-enabled) and is used to implement requests received from the Orchestrator via the T-Or-Tm interface.

3.5.2. Internal IVM Interfaces

The key internal interfaces of the IVM are outlined in Table 3-2. The interface for NFVI management, including its functional entities, is provisioned via the T-Nf-Vi interface. It is this interface which will be utilised to establish trust and compliance of the underlying infrastructure: specifically, the Hypervisor domain via the T-Nf-Vi/H implementation of the interface, the Compute domain via the T-Nf-Vi/C interface, and the Network domain via the T-Nf-Vi/N interface. A full description of these interfaces is presented in Table 3-2. A possible deployment configuration for the VIM would be to run it within a hosted VM (it can be virtualised). For this specific configuration, the T-Nf-Vi management interface might be abstracted as a SWA-5 interface. However, even though this configuration is possible, it is not desirable, due to security concerns and FE responsibilities. There are also reliability concerns regarding virtualising the VIM on the same infrastructure that it is managing: in a scenario where the hypervisor domain of the NFVI requires a restart, the VIM will lose its ability to operate and continue to manage the NFVI. The VIM should therefore run on a separate hardware platform, either virtualised or not.

One of the interfaces that is internal to the NFVI is the SWA-5 interface², which is used for resources such as a virtual NIC, a virtual disk drive, a virtual CPU, etc. Examining Figure 3-1, this interface involves both T-Vn-Nf/VN and T-Vn-Nf/VM. It is not intended for use as a management interface; instead it is primarily intended to logically fence off responsibilities, but it also serves security considerations. In fact, reasonable steps must be taken to prevent unauthorised access from within a VM attacking the underlying infrastructure and obtaining management privileges, and thus being able to possibly shut down the entire domain, including all other adjacent VMs, redirect traffic, etc.

² SWA-5 corresponds to the VNF-NFVI container interfaces: this is the set of interfaces that exist between each VNF and the underlying NFVI; thus SWA-5 describes the execution environment for a deployable instance of a VNF.

Table 3.2: Internal interfaces of the IVM

• VIM – Network Interface (T-NOVA reference T-Nf-Vi/N; ETSI NFV framework reference point Nf-Vi; INF reference point [Nf-Vi]/N; Management, Orchestration and Monitoring Interface): This interface is used for the management of Infrastructure Network domain resources.

• VIM – Hypervisor Interface (T-NOVA reference T-Nf-Vi/H; ETSI NFV framework reference point Nf-Vi; INF reference point [Nf-Vi]/H; Management, Orchestration and Monitoring Interface): This interface is used for the management of Hypervisor domain resources.

• VIM – Compute Interface (T-NOVA reference T-Nf-Vi/C; ETSI NFV framework reference point Nf-Vi; INF reference point [Nf-Vi]/C; Management and Orchestration Interface): This interface is used for the management of Compute domain resources.

• Hypervisor – Network Interface (T-NOVA reference T-Vl-Ha/Nr; ETSI NFV framework reference point Vl-Ha; INF reference point [Vl-Ha]/Nr; Execution Environment): This interface is used to carry information between the Hypervisor and the Infrastructure Network domains.

• Hypervisor – Compute Interface (T-NOVA reference T-Vl-Ha/CSr; ETSI NFV framework reference point Vl-Ha; INF reference point [Vl-Ha]/CSr; Execution Environment): This interface is used to carry execution information between the Compute and the Hypervisor domains.

• Compute – Network Interface (T-NOVA reference T-Ha/CSr-Ha/Nr; ETSI NFV framework reference point Vl-Ha; INF reference point Ha/CSr-Ha/Nr; Traffic Interface): This interface is used to carry execution information between the Compute and the Network domains.

• Virtual Machine – VNFC Interface (T-NOVA reference T-Vn-Nf/VM; ETSI NFV framework reference point Vn-Nf; INF reference point [Vn-Nf]/VM; VNF Execution Environment): This interface is used to carry execution environment information for each VNFC instance.

• Virtual Network – Virtual Network Interface (T-NOVA reference T-Vn-Nf/VN; ETSI NFV framework reference point Vn-Nf; INF reference point [Vn-Nf]/VN; VNF Execution Environment): This interface is used to carry execution environment information between VNFC instances.

3.6. NFVI and NFVI-PoP

The execution environment for VNFs is provided by the NFVI deployed in various NFVI-PoPs. An NFVI-PoP is a single geographic location, i.e. a DC, where a number of NFVI nodes are located. An NFVI-PoP is responsible for providing the infrastructural building blocks to host and execute VNF services deployed by the T-NOVA system in a particular location. The NFVI comprises IT resources, in the form of the Compute and Hypervisor domains, and network resources, in the form of the Network domain, as shown in Figure 3-1. The NFVI can utilise these domains in a manner that supports extension beyond a single NFVI-PoP to multiple NFVI-PoPs, as required to support the execution of a given NS.

The NFVI-PoP is expected to support the deployment of VNFs in a number of potential configurations. These deployments will range from a single VNF deployed at a single NFVI-PoP, to multiple VNFs from different VNF providers in a multi-tenant model at one NFVI-PoP. Additionally, the NFVI may need to support VNFs deployed at more than one NFVI-PoP to instantiate the required NS. Interconnectivity between these PoPs is, in the case of T-NOVA, provisioned and managed by the WICM module, which is similar in function to the WAN Controller in the ETSI MANO architecture [9] (see subsection 3.8).

The network access capacity required at a particular NFVI-PoP will depend on the network service workload type and the number and capacity of the VNFs instantiated on the NFVI. The management and orchestration of virtualised resources should be able to handle NFVI resources in a single NFVI-PoP as well as when distributed across multiple NFVI-PoPs. Management of the NFVI is provided by the VIM through the domain interfaces (T-Nf-Vi), as shown in Figure 3-1. The VIM also provides the intermediate interfaces between the Orchestrator and the NFVI (T-Or-Vi and T-Vi-Vnfm). The NFVI will execute requests from the Orchestrator via the VIM relating to the lifecycle management of VNFs, such as deployment, scale-in/out and termination.

The following sections describe the architecture and the respective internal components of the NFVI, namely the Compute, Hypervisor and Network domains. The interfaces required to implement an overall functional architecture for the T-NOVA NFVI system are also described.

3.6.1. IT Resources

The T-NOVA IT resources encompass the Compute and Hypervisor domains of the NFVI. These domains have their origins in traditional enterprise IT environments and, more recently, in the deployment of cloud computing environments. In order to support the development of NFV architectural approaches in carrier-grade environments, IT resources and capabilities have been embraced in these environments to support the deployment of VNFs. However, the functionality, the capabilities and how these IT resources are composed


within virtualised network architectures need to be carefully considered. The enterprise origins of these technologies often result in inherent gaps in capability, such as line-rate packet processing performance limitations. These gaps can influence architectural decisions and may require innovative solutions to address any identified gaps for VNF service deployment. The following sections discuss the Compute and Hypervisor domain architecture considerations and the proposed approach in the context of the T-NOVA system architecture.

3.6.1.1. Compute Domain

The Compute domain is one of the three domains constituting the NFVI and consists of the servers, NICs, accelerators, storage, racks and associated physical components within the rack(s) related to the NFVI, including the networking Top of Rack (ToR) switch. The Compute domain may be deployed as a number of physical nodes (e.g. compute and storage nodes) interconnected by network nodes (devices) within an NFVI-PoP.

Traditionally, the compute environment within the telecoms domain has been heterogeneous, based around a variety of microprocessor architectures such as MIPS, PowerPC, SPARC, etc., with tight coupling between the microprocessor and the software implementation. Many traditional telecommunication systems are built in C/C++ with a high interdependence on the underlying processing infrastructure and a specific instruction set.

As the first generation of commercial VNFs has become available, based on the adaptation of software from previous fixed appliances to versions that can run in virtualised x86 environments, some performance difficulties have been encountered. While virtualisation decouples software and hardware from a deployment point of view, it does not do so from a development point of view. Selection of an appropriate cross compiler for the target platform (e.g. x86) may address some of the issues. However, in order to achieve optimal performance, a proper redesign of the software may be required to ensure appropriate use of specific capabilities, for example hyper-threading in x86 processors. The application may also need to use certain software libraries to improve the performance of certain actions such as packet processing. However, VNFs running on standard high-volume (SHV) servers will generally involve some trade-off between flexibility, ease of development and deployment, performance, etc. in comparison to custom hardware platforms.

The compute domain architecture should have the capability to support distributed virtual appliances that can be hosted across multiple compute platforms, as required by the specific SLAs of network services. Moreover, storage technologies and management solutions are included in the domain and show a large degree of variability in terms of different technologies, scalability and performance (see the state of the art review). Depending on the workloads and use cases, the choice of storage technology is likely to be specific to certain workload types.

Another important objective of the compute domain is to expose hardware statistics of the compute node with high temporal resolution. The VIM communicates directly with the compute domain, and through the hypervisor, to access all the hardware metrics, which can be static or dynamic in nature.

Static metrics expose compute node characteristics which do not change, or change slowly (e.g. once a day). These metrics ultimately act as a first-order filter for selecting/provisioning a node for deploying a VNF. Static metrics are obtained from reading OS and ACPI tables. The Advanced Configuration and Power Interface (ACPI) specification provides an open standard for device configuration and power management by the operating system.


Additional metrics may be stored in local structures provisioned by server vendors or system administrators, for example a compute node performance index, an energy efficiency index, geographic location, specific features enabled/supported, security level, etc.

An Orchestrator can identify a candidate platform based on static metrics; however, in order to actually instantiate a VNF, additional dynamic metrics are required, e.g. CPU utilisation, memory and I/O headroom currently available, etc. These metrics could be provided on a per-query basis, or the compute node could proactively update the hypervisor domain at regular intervals.
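The static/dynamic split can be sketched as follows. Note the assumptions: real static metrics would be read from OS and ACPI tables, whereas here the standard library (`platform`, `os`) stands in, and the dynamic sampler is a stub fed with canned values rather than a live CPU/memory probe.

```python
import os
import platform

# Sketch of the static vs. dynamic metrics split described above. Stand-ins:
# platform/os replace OS/ACPI table reads, and the dynamic sampler is a stub.

def static_metrics():
    """Slow-changing characteristics: a first-order filter for node selection."""
    return {"arch": platform.machine(), "cpus": os.cpu_count()}

def dynamic_metrics(sample):
    """Fast-changing headroom figures, e.g. provided on a per-query basis."""
    return {"cpu_util": sample["cpu_util"], "mem_free_mb": sample["mem_free_mb"]}

node = {
    "static": static_metrics(),
    "dynamic": dynamic_metrics({"cpu_util": 0.35, "mem_free_mb": 2048}),
}

# First-order placement filter: the Orchestrator shortlists on static data
# only, then consults dynamic headroom before actually instantiating a VNF.
candidate = node["static"]["cpus"] is not None and node["static"]["cpus"] >= 1
```

The two-step filter mirrors the text: static metrics shortlist a node cheaply; dynamic metrics decide whether the VNF can actually be instantiated there now.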

Heterogeneous Compute Architectures

A VNF developed for a target compute architecture needs to be fully optimised and validated prior to rollout. This process will also need to be repeated on a per-compute-architecture basis. While the initial focus has been on x86 compute architectures, recently there has been interest in the use of co-processors such as GPUs or FPGAs to accelerate certain VNF workloads: some workloads can experience performance benefits by having access to different processing solutions from a variety of silicon vendors, including network processors and general-purpose co-processors. The move towards more heterogeneous cloud computing environments is starting to gain attention, as standard x86 processors may have performance limitations for certain workload tasks, e.g. high-speed packet processing. This has led DC equipment vendors to investigate the use of alternative compute architectures, provisioned either in a standalone manner or coupled with x86 processors, to enhance their offerings. In defining heterogeneous compute domain architectures for T-NOVA, we divided devices into two main categories:

• Devices which can only operate in conjunction with a host CPU, like GPUs, Many Integrated Core (MIC) processors and Micron's Automata processor;

• Devices which can operate in a stand-alone fashion, like FPGAs and FPGA SoCs (although FPGAs and FPGA SoCs can also act as devices attached to a CPU-controlled system).

For the first class of devices we can derive a compute node architecture where the compute node is complemented by a co-processor, which can be any of the technologies mentioned above (in the FPGA and FPGA SoC cases, they will act as slave devices to the processor). The extent to which the accelerator resources themselves are virtualised is left to each specific implementation, though such a solution is known to improve the performance of the accelerator hardware. It must be noted that such a solution is only available for GPUs (e.g. nVidia's GRID), but not for FPGAs or the Automata processor. This general architecture leaves a lot of the implementation choices open; for example, the interconnection of the CPU and the co-processor could be implemented either over PCIe or over a direct link like QuickPath Interconnect (QPI), as could the division of the memory between the CPU and the co-processor. In any scenario, the system must be able to adhere to the requirements for the compute nodes as outlined in Annex B.

The second class of devices is based around the FPGA SoC, which is an FPGA that integrates one or more processor cores in the same silicon die. Devices like these are available from all major FPGA vendors. In this case, the processing system on the FPGA SoC runs a hypervisor on which OSs and applications are executed. To a large extent the same considerations as in the previous scenario apply, both in terms of the interconnection of components and the virtualisation of accelerator resources. The important difference here is the degree of integration, since the whole heterogeneous compute node resides within one physical device.


Key Components of the Compute Domain

The main components of the Compute domain's architecture are:

• CPU and Accelerator: A general-purpose compute architecture is considered, based on commercial x86 server clusters. Additionally, co-processor cards/FPGAs and GPUs are also considered for application-specific workloads or functions such as packet processing. As outlined in the state of the art review, the CPU nodes will incorporate technologies to support virtualisation of the actual CPU, such as VT-x. Connections to I/O devices will use technologies such as VT-d. Specific co-processors include acceleration chips for classification, crypto, DPI/regular expressions, compression/decompression, buffer management, queue management, work scheduling, timer management, traffic management and address translation;

• Network Interfaces: The network interface could either be a NIC which connects to the processor via PCIe, or the network interface capability may be resident on-board the server. Provisioning of virtualised network connectivity and acceleration will use technologies such as VT-c;

• Storage: Storage encompasses large-scale storage and non-volatile storage, such as hard disks and solid-state disks (SSDs), which can be locally attached or networked in configurations such as a SAN. For some purposes, it is necessary to have visibility of the different storage hierarchy levels (cache storage, primary storage, secondary storage, cold storage or archived storage), each one characterised by specific levels of latency, cost, security, resiliency and feature support. However, for many applications, the different forms of storage can be abstracted, especially when one form of storage is used to cache another form of storage. These caches can also be automated to form a tiering function for the storage infrastructure.

Collectively, these technologies enable the hypervisor to abstract the physical resources into virtual resources which can be assembled into VMs for hosting VNFs. It should be noted that a single compute platform can support multiple VNFs.


Figure 3-2: Compute Domain High Level Architecture

Compute Domain Interfaces

The compute domain presents three external interfaces:

• The T-Nf-Vi/C interface is used by the VIM to manage the compute and storage portion of the NFVI. It is the reference point between the management and orchestration functions in the compute domain and the management and orchestration functions in the Virtualised Infrastructure Management (VIM);

• The T-[Vl-Ha]/CSr interface is the interface between the compute domain and the hypervisor domain. It is primarily used by the hypervisor/OS to gain insight into the available physical resources of the compute domain;

• The T-Ha/CSr-Ha/Nr interface is used to carry execution information between the Compute and the Network domains.

Orchestration and management of the NFVI is strictly implemented via the T-Nf-Vi interfaces. The implementation of the interface must match the requirements outlined in Annex B, in addition to having the general characteristics of being dedicated and secure. This interface is utilised to establish trust and compliance between the VIM and the underlying compute infrastructure. The interface is exposed by the management functions of the compute node and allows both control and monitoring of the compute domain. With regard to monitoring, agents are installed both at the host OS (to measure physical resources) as well as, optionally, at the guest OSs (to measure virtualised resources associated with a specific VNFC). The latter metrics are of particular interest to T-NOVA operations, since they provide an indication of the resources consumed by a VNFC instance and are directly used for service monitoring. Heterogeneous compute resources need to provide additional methods to extract the appropriate metrics from the devices and to send them via the T-Nf-Vi interfaces. Application performance, such as SLA compliance, depends on a variety of metrics, such as


resource- and system-level metrics. The ability to measure application performance and the consumption of resources plays a critical role in operational activities such as customer billing. Furthermore, statistical processing of VNFC metrics will be exploited to indicate a particular malfunction. Metrics to be collected at both the physical compute node and the VM include CPU utilisation, memory utilisation, network interface utilisation, processed traffic bandwidth, number of processes, etc.

3.6.1.2. Hypervisor Domain

The hypervisor domain is the part of the T-NOVA architecture that provides the virtualised compute environment to VNFs. Since the hypervisor domain embraces different types of hosts, with different guest OSs and/or hypervisors, it is important to manage interoperability issues appropriately. Issues relating to the virtualisation of VNFs on technologies from different vendors need to be carefully considered.

The primary goal of the hypervisor domain is therefore to manage the heterogeneity of technologies from different vendors, thus providing an interoperable cloud environment to the VNFs. In that sense, the hypervisor domain provides an abstraction layer between the VIM (which controls, monitors and administers the cloud) and the VNF resources. The high-level architecture of the hypervisor domain is shown in Figure 3-3.

Looking at a single host, the hypervisor module provides virtual resources by emulating different components, like CPUs, NICs, memory and storage. This emulation can be extended to include the complete translation of CPU instruction sets, so that the VM believes it is running on a completely different hardware architecture to the one it is actually running on. The environment provided by the hypervisor is functionally equivalent to the original machine environment. This emulation is carried out in cooperation with the Compute domain. The hypervisor module manages the slicing and allocation of the local hardware resources to the hosted VMs.

It also provides the NICs to the VMs and connects them to a virtual switch in order to support both internal and external VM communication. A suitable memory access driver is integrated into the virtual switch that interconnects the VMs with each other.

Thevirtualswitches(vSwitches)arealsomanagedbythehypervisor,whichcan instantiateand configure one or more vSwitches for each host. They are logical switching fabricsreproducedinsoftware.

TheintegrationofvSwitchesandhypervisorsisanareaofspecificfocusduetoitssignificantinfluenceontheperformanceandonthereliabilityofVMs,especially inthecaseofVNFs.Thevariousaspectsandissuesrelatedtothe integrationofvSwitcheswithhypervisorsarediscussedinSOTAreviewinsection4.2.4.

T-NOVA | Deliverable D2.32 Specification of the Infrastructure Virtualisation, Management and Orchestration - Final

© T-NOVA Consortium

Figure 3-3: Hypervisor domain architecture

The Control, Administration and Monitoring module is essentially responsible for the control functionality for all the hosts within the cloud environment, abstracting the whole cloud. The main goal of this module is to provide a common and stable layer that can be used by the VIM to monitor, control and administer the VNFs over the T-NOVA cloud. This supports various operations on the VMs (or VNFs) such as provisioning, creation, modification of state, monitoring, migration and termination.

Since different hypervisors provide different interfaces, the Control, Administration and Monitoring (CAM) module needs to support heterogeneous hypervisors, managing virtual resources on different vendors' hypervisors at the same time. This particular task is accomplished by the hypervisor manager.
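A minimal sketch of how such a hypervisor manager could hide vendor differences behind a common interface (all class and method names here are hypothetical illustrations, not T-NOVA APIs):

```python
from abc import ABC, abstractmethod

class HypervisorDriver(ABC):
    """Vendor-specific adapter hiding one hypervisor's native interface."""
    @abstractmethod
    def start_vm(self, vm_id: str) -> str: ...
    @abstractmethod
    def stop_vm(self, vm_id: str) -> str: ...

class KvmDriver(HypervisorDriver):
    def start_vm(self, vm_id): return f"kvm: started {vm_id}"
    def stop_vm(self, vm_id): return f"kvm: stopped {vm_id}"

class XenDriver(HypervisorDriver):
    def start_vm(self, vm_id): return f"xen: started {vm_id}"
    def stop_vm(self, vm_id): return f"xen: stopped {vm_id}"

class HypervisorManager:
    """CAM-side entry point: dispatches requests to the right driver."""
    def __init__(self):
        self._drivers = {}
    def register(self, host: str, driver: HypervisorDriver):
        self._drivers[host] = driver
    def start_vm(self, host: str, vm_id: str) -> str:
        return self._drivers[host].start_vm(vm_id)

mgr = HypervisorManager()
mgr.register("host-a", KvmDriver())
mgr.register("host-b", XenDriver())
```

The CAM module would then talk only to the manager object, regardless of which vendor's hypervisor runs on each host.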

3.6.2. Infrastructure Network Domain

The T-NOVA network infrastructure comprises the networking domain of the NFVI, i.e. the different virtualised network resources populating the NFVI, as shown in Figure 3-4. The network domain within the NFVI considers the virtual resources composing virtual networks as the functional entity. Those virtual networking resources are devoted to providing connectivity to the different virtualised compute resources, which were presented in the previous section.

The T-NOVA architecture, and thus the network domain controlled by the VIM, leverages SDN to optimise network operations. The network domain builds a full abstraction of the actual resources included in the physical substrate and then creates vLinks between VMs composing vNets, which are provisioned in order to satisfy the requirements of the different VNFs.


The main objective of the network resources is to provide connectivity between VMs which can run on the same server, on different servers within the same DC, or across DC boundaries (in the latter case, connectivity with the WICM module is required). Within the T-NOVA IVM, this translates directly into the creation of virtual networks that interconnect a set of compute resources providing the execution substrate to VNFs.

Virtual networks must be dynamically programmed in order to ensure network slicing, isolation, and ultimately connectivity in the T-NOVA multi-tenant scenario. Each vNet is dedicated to a specific VNF service, and provides connectivity between the different hosts serving the VNF service. Elasticity of the vNet is strictly required in order to guarantee that the corresponding VNF services can be properly scaled in or out. A virtualised control plane (leveraging SDN concepts) will be responsible for controlling each of these virtual networks.

The network domain exposes two basic interfaces: one to the VNF (T-Vn-Nf/VN), and the other to the basic VIM controller (T-Nf-Vi/N). A full description of the interfaces can be found in Table 3.2.

Figure 3-4: High level architecture of the Infrastructure Network

3.7. Virtualised Infrastructure Management

Within the T-NOVA IVM architecture, the VIM is the functional entity responsible for controlling and managing the compute, storage and network resources within the NFVI. The VIM is generally expected to operate in one operator's infrastructure domain (e.g., an NFVI-PoP). However, as many operators now have data centres distributed on a global basis, scenarios will arise where the VIM may operate in more than one NFVI-PoP to support either the architecture requirements for VNF services and/or operator business needs. Alternatively, multiple VIMs may operate across operator data centres, providing multiple NFVI-PoPs that can operate independently or cooperatively as required under the control of an Orchestrator.

While a VIM in general can potentially offer specialisation in handling certain NFVI resources, in the specific context of the T-NOVA system the VIM will handle multiple resource types, as shown in Figure 3-5. The VIM acts as the interface between the T-NOVA


Orchestrator and the available IT infrastructure abstracted by the NFVI. The control components are at the heart of the VIM, encompassing the following elements:

• The algorithms and logic for control, monitoring and configuration of their related domain;

• An interface or API server to offer the implemented logic and collected information in an abstracted way to other components;

• An interface to control and access the virtualised infrastructure.

The interfaces from the control components in the VIM are aggregated in the Orchestrator Agent and the VNF Manager Agent functions to deliver a unified interface to the upper layers of the T-NOVA components via the T-Vi-Vnfm and T-Or-Vi interfaces. This architecture allows interaction with the upper layers at a higher level of abstraction, giving the T-NOVA Orchestrator the flexibility to configure a particular part of the infrastructure or to collect infrastructure-related data. The VIM also exposes southbound interfaces (as shown in Figure 3-5) to the infrastructure resources (Hypervisor/Compute/Network) of the NFVI, which enable control and management of these resources. A summary of the key northbound VIM interfaces can be found in Table 3.2.

Figure 3-5: T-NOVA VIM high level architecture

The following are the key set functions identified that must performed by T-NOVA VIMbasedongeneral requirements identifiedbyETSI foraVIMwithin theMANOarchitecture[9].

• Resourcemanagement,

• Orchestrating the allocation/upgrade/release/reclamation of NFVI resources, andmanaging the association of the virtualised resources to the physical compute,storage,networkingresources,


• Supporting the management of VNF Forwarding Graphs (create, query, update, delete), e.g., by creating and maintaining Virtual Links, virtual networks, sub-nets, and ports;

• Management of the NFVI capacity/inventory of virtualised hardware resources (compute, storage, networking) and software resources (e.g., hypervisors);

• Management of VM software images (add, delete, update, query, copy) as requested by other T-NOVA functional blocks (e.g., the NFVO);

• Collection and forwarding of performance measurements and fault/event information relative to virtualised resources via the northbound interface to the Orchestrator (T-Or-Vi);

• Management of catalogues of virtualised resources that can be consumed from the NFVI. The elements in the catalogue may be in the form of virtualised resource configurations (virtual CPU configurations, types of network connectivity (e.g., L2, L3), etc.) and/or templates (e.g., a virtual machine with 2 virtual CPUs and 2 GB of virtual memory).

The high level architecture for the T-NOVA VIM reflects these key functions.
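For illustration, such a catalogue of templates might be modelled as follows (a hedged sketch; the template names and fields are invented, with the "medium" entry mirroring the 2 vCPU / 2 GB example above):

```python
# Hypothetical catalogue of virtualised resource templates ("flavours").
CATALOGUE = {
    "small":  {"vcpus": 1, "ram_gb": 1, "connectivity": "L2"},
    "medium": {"vcpus": 2, "ram_gb": 2, "connectivity": "L3"},
}

def lookup_template(name: str) -> dict:
    """Return the resource configuration a consumer may request from the NFVI."""
    if name not in CATALOGUE:
        raise KeyError(f"unknown template: {name}")
    return CATALOGUE[name]
```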

3.7.1. IT Resource Management and Monitoring

In the T-NOVA system we distinguish between IT and virtualised network resources. From a management and control perspective, this categorisation is also clearly visible in the architectural design of the VIM. On the one hand, the VIM provides the Orchestration layer with a unified access point to all infrastructure resources at an abstracted level; on the other hand, internally the control modules are explicitly split into the Compute and Hypervisor Control modules, which manage the IT resources, and the Network Control module, which manages the network resources. This architectural choice allows the T-NOVA system to manage them according to different requirements and needs (in terms of performance, time constraints, and so forth). The following subsections discuss the respective management and control needs of the hypervisor and compute resources within the VIM and their relationship to the NFVI.

3.7.1.1. Hypervisor Management

The hypervisor control function is responsible for providing a high level control and configuration capability that is independent of the technology implementation within the hypervisor domain. The interface to the hypervisor domain (T-Nf-Vi/H) provides the necessary level of abstraction from the specifics of the underlying hypervisor. The hypervisor control component provides basic commands such as start, stop and reboot, and offers these commands through an API server to the VIM. Specific commands related to a particular hypervisor implementation may also be supported on a case-by-case basis through the same API server, allowing finer performance tuning. Additionally, the hypervisor controller can implement a query API that can be used to provide detailed information such as configuration details, version numbers, etc.

The network configuration of a newly created VM is usually applied by a script that runs at boot time, or can be done manually via a CLI after the VM has booted. The hypervisor control component will offer the capability to implement a network configuration during VM instantiation, thus enabling a higher degree of automation, which is important for service provider operations. The hypervisor controller is able to configure the VM with the desired network settings before the first boot of the OS, provided the hypervisor supports the action.


The reliability capabilities inside the hypervisor controller play an important role, as custom configurations can be requested by the T-NOVA Orchestrator. Their primary role is to prevent the user of the API from placing the hypervisor in an inconsistent or error state, leaving it unable to manage or respond to the VMs under its control. The module may also play a role in managing issues such as misconfiguration, compromised commands or hypervisor options that do not work well with the allocated hardware.

Additionally, the hypervisor provides VM resource metrics to be used for service monitoring, in addition to the metrics collected in the compute domain (see next section).

3.7.1.2. Computing Resources Management

A key functionality of the compute domain management will be the collection and processing of the monitoring metrics gathered from both the physical compute nodes as well as the VMs (VNFCs). A dedicated monitoring manager is envisaged, to which monitoring agents will connect in order to communicate the measured values of the compute domain resource metrics. At the monitoring manager, these metrics will undergo statistical processing in order to extract additional information such as fluctuation, distribution and correlation. This processing will provide useful information about the behaviour of each VNF component and will contribute towards the early detection of possible VNFC malfunctions or performance issues. This will be the case, e.g., if some measurements fall outside the normal VNF load curve (e.g. if CPU utilisation rises abnormally even though the processed network traffic volume does not).
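The load-curve check described above can be sketched as a simple statistical test (a deliberately crude model with an assumed z-score threshold; a real monitoring manager would use richer statistics):

```python
from statistics import mean, stdev

def deviation_flags(cpu: list, traffic: list, z_threshold: float = 2.0):
    """Flag samples where CPU utilisation deviates abnormally from its mean
    while processed traffic stays near its own mean - a crude proxy for the
    'outside the normal VNF load curve' condition described in the text."""
    cpu_m, cpu_s = mean(cpu), stdev(cpu)
    tr_m, tr_s = mean(traffic), stdev(traffic)
    flags = []
    for c, t in zip(cpu, traffic):
        cpu_anomalous = abs(c - cpu_m) > z_threshold * cpu_s
        traffic_normal = abs(t - tr_m) <= z_threshold * tr_s
        flags.append(cpu_anomalous and traffic_normal)
    return flags
```

For example, a CPU spike in the last sample while traffic stays flat would be flagged, matching the malfunction symptom given above.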

The placement of a VM on a compute resource is a critical task, and the compute controller carries out this function using a placement scheduler. If no further options are specified, this scheduler will make a decision based on the requirements of the infrastructure provider and place the VM in an automated fashion. To influence this decision-making process, the scheduler will expose a filter API. Details related to the desired SLA or specific hardware and software requirements can be passed to the filter. The main filter categories that influence the scheduler's decision process are:

• Specific hardware requirements, such as CPU speed or type of disk;

• Specific software requirements, such as the host OS or the hypervisor;

• Current load and performance metrics of the compute node, such as average CPU utilisation rate;

• Financial considerations, such as the newest hardware or the most energy-efficient hardware.
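A filter-based scheduler of this kind can be sketched as a chain of predicates that prune candidate hosts (the host attributes and filter names are illustrative assumptions, not a T-NOVA API):

```python
# Illustrative candidate hosts with the attribute categories listed above.
HOSTS = [
    {"name": "node1", "cpu_ghz": 2.4, "disk": "ssd", "host_os": "linux", "load": 0.9},
    {"name": "node2", "cpu_ghz": 3.2, "disk": "ssd", "host_os": "linux", "load": 0.3},
    {"name": "node3", "cpu_ghz": 3.0, "disk": "hdd", "host_os": "linux", "load": 0.2},
]

def hw_filter(h, req):
    return (h["cpu_ghz"] >= req.get("min_cpu_ghz", 0)
            and h["disk"] == req.get("disk", h["disk"]))

def sw_filter(h, req):
    return h["host_os"] == req.get("host_os", h["host_os"])

def load_filter(h, req):
    return h["load"] <= req.get("max_load", 1.0)

def schedule(hosts, req, filters=(hw_filter, sw_filter, load_filter)):
    """Keep hosts passing every filter, then pick the least-loaded survivor."""
    candidates = [h for h in hosts if all(f(h, req) for f in filters)]
    return min(candidates, key=lambda h: h["load"])["name"] if candidates else None
```

The requirements dictionary plays the role of the details passed through the filter API.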

A further core task of the compute controller is to provide specific management capabilities over the VMs for the Orchestrator, especially where the operations overlap with hypervisor and network controller actions. Such tasks include:

• Creation and deletion of a VM;

• Rebuilding, suspending and pausing a VM;

• Migrating a VM from one compute node to another;

• Resizing a VM.

Creation of a VM requires close interaction with the base image repository that contains the basic unmodified OS images.

The capability to enable infrastructure administrators to carry out routine maintenance and administration tasks on compute nodes is required. For example, the administrators may


need to migrate the VMs on a physical compute node to another one as part of an infrastructure upgrade activity. Other potential actions may include interaction with the scheduler and filters in order to modify the configuration of a VM or set of VMs to maintain an associated SLA for a VNF service running on the VMs. These types of functionalities can also be implemented in an automated manner in order to support not only management activities but also automatic SLA fulfilment.

3.7.2. Infrastructure Network Resources Management and Monitoring

The Network Control functional block within the VIM is responsible for configuring and managing the SDN-compatible network elements to provide an abstracted platform for running SDN applications (i.e. network virtualisation, load balancing, access control, etc.).

In order to meet the requirements of specific VNF services, the network elements, physical and virtual, need to be properly programmed by the Network Control function to ensure appropriate network slicing, isolation and connectivity in a multi-tenant environment. In this way, the network will be properly partitioned and shared among several vNets, each dedicated to a specific VNF service. In the T-NOVA architecture there is an explicit distinction between the virtual and physical network control domains: the virtual network control domain is managed by the VIM, whereas the physical network control domain is managed by the WICM entity (discussed in the next section).

The virtualisation of the Network Control is intended to address the scalability and centralisation issues affecting the SDN control plane in large network infrastructures. The proposed approach considers a cluster of controllers to manage the network elements, using a distributed data store to maintain a global view of the network. By exploiting cloud computing capabilities, the cluster can easily scale through the deployment of new CP instances on VMs as the demands on the SDN control plane grow. In this regard, the SDN Control Plane can offer elasticity, auto-scaling and computational load balancing.

In order to guarantee efficient CP virtualisation, an SDN Control Plane Coordinator is expected to manage and monitor each CP instance, balancing the distribution of the overall workload across multiple CP instances.
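One plausible coordination policy, sketched below under the assumption of a simple least-loaded strategy (class and method names are hypothetical, not T-NOVA interfaces):

```python
class CPCoordinator:
    """Hypothetical SDN Control Plane Coordinator: tracks the load of each
    controller (CP) instance and assigns new network elements to the
    least-loaded one."""
    def __init__(self):
        self.load = {}  # CP instance id -> number of managed elements

    def add_instance(self, cp_id: str):
        """Register a newly deployed CP instance (e.g. after auto-scaling)."""
        self.load.setdefault(cp_id, 0)

    def assign(self, element_id: str) -> str:
        """Hand a network element to the least-loaded CP instance."""
        cp_id = min(self.load, key=self.load.get)
        self.load[cp_id] += 1
        return cp_id

coord = CPCoordinator()
coord.add_instance("cp-1")
coord.add_instance("cp-2")
```

As demand grows, `add_instance` would be driven by the auto-scaling logic described above.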

Figure 3-6: VIM Network Control Architecture

The Network Control component interfaces internally with the VIM Hypervisor and Compute components, and externally with the Orchestrator layer, by accepting requests to deploy vNets based on certain topology and QoS requirements.


3.8. WAN Infrastructure Connection Manager

By definition, a VNF is supposed to replicate the behaviour of a network function hosted in a physical appliance (e.g. firewall, DPI, etc.). In most practical scenarios, a VNF will be a component of a wider network environment, often including a Wide Area Network (WAN) connectivity service, which may range from simple Internet access to some form of enterprise connectivity service, such as an IP VPN or an L2 connectivity service variant, e.g. E-Line or E-LAN³. Thus, the T-NOVA service can be decomposed into two basic parts:

• VNF-as-a-Service (VNFaaS): a set of associated VNFs hosted in one or multiple NFVI-PoPs.

• A connectivity service, in most cases including one or multiple WAN domains.

With regard to the latter component, although the WAN connectivity service itself is beyond the scope of T-NOVA, the integration of the WAN connectivity service with the NFVI-PoPs that host the VNFs cannot be neglected and therefore must be an integral part of the T-NOVA architecture. The WAN Infrastructure Connection Manager (WICM⁴) is the architectural component that is supposed to cope with this requirement.

Figure 3-7 shows the transition from a pure WAN connectivity service to a VNFaaS-enabled scenario, in which the basic connectivity service is enriched with one or multiple VNFs that become integrated in the end-to-end traffic path.

Figure 3-7: End-to-end customer data path without VNFs and with VNFs

An important aspect to be taken into account in this context is the concept of VNF location. In fact, two types of VNF location can be considered⁵:

³ In T-NOVA, the role played by the Service Provider (SP) is supposed to include the provision of VNFs, as well as the underlying cloud and network infrastructure. Therefore, for a given customer, when the first VNF is deployed, the connectivity service (e.g. VPN) is expected to be already up and running. In the cases where the SP role is played by a network operator providing VPN services, VNFaaS may be seen as an add-on to a traditional VPN service offering.
⁴ ETSI NFV defines a fairly similar component called WIM (WAN Infrastructure Manager), but so far very little has been specified about its functionality and interfaces. Whether the T-NOVA WICM and the ETSI NFV WIM will be functionally equivalent is not clear at this stage.
⁵ In principle, physical and logical locations should be as close as possible to minimize the impact of VNF deployment on existing WAN network services – ideally, one NFVI-PoP close to each service provider edge node; however this may not be possible in practice. Ultimately, there will be a trade-off between the instantiation of an NFVI-PoP at every service provider edge point (simple but costly) and


• The logical VNF location, which identifies the point in the customer network where the service is installed (typically corresponding to a customer attachment point to the service provider network);

• The physical VNF location, which identifies the actual placement of the VMs that support the VNF (NFVI-PoP ID and hosts).

A composed T-NOVA service instance includes multiple VNFs that may have one or multiple physical locations (i.e. different NFVI-PoPs), but should be seen by the customer as a single logical location.

In Figure 3-8, the WAN connectivity service is represented by the central cloud and is delimited by edge nodes. With regard to the connectivity between customer sites and NFVI-PoPs, two cases should be distinguished:

• Case A: The connection between the VNF logical and physical locations does not cross any WAN segment – e.g. Customer site A – NFVI-PoP 1. This means that the insertion of the VNF in the end-to-end path has no impact on the existing WAN service.

• Case B: The connection between the VNF logical and physical locations crosses at least one WAN segment. In this case, the creation or removal of a VNF may require the modification of the traffic path in the WAN.

The NFVI GW represents the demarcation point between the NFVI-PoP and the external network infrastructure, and plays a key role in this scenario – interworking between the WAN and the internal NFVI protocols. Both the WAN and the NFVI-PoP are multi-tenant domains by definition, but they are managed and controlled independently of each other, even if they are administered by the same entity. This means, for example, that mapping between internal and external VLAN IDs (or whatever aggregation methodology is used on either side of the NFVI GW) is needed to guarantee end-to-end connectivity. At the NFVI GW, traffic coming from the WAN is de-aggregated and mapped to the respective virtual network in the NFVI-PoP.
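The internal/external VLAN ID mapping performed at the NFVI GW can be sketched as a per-tenant lookup table (a simplified assumption; a real gateway would also handle aggregation methods other than VLANs):

```python
class NfviGateway:
    """Hypothetical demarcation-point mapper between WAN-side VLAN IDs and
    internal NFVI-PoP virtual-network segments."""
    def __init__(self):
        self._wan_to_pop = {}  # external VLAN ID -> internal segment ID

    def map_tenant(self, wan_vlan: int, pop_segment: int):
        self._wan_to_pop[wan_vlan] = pop_segment

    def deaggregate(self, wan_vlan: int) -> int:
        """Map traffic arriving from the WAN onto its virtual network."""
        try:
            return self._wan_to_pop[wan_vlan]
        except KeyError:
            raise ValueError(f"no virtual network mapped to VLAN {wan_vlan}")

gw = NfviGateway()
gw.map_tenant(wan_vlan=101, pop_segment=2001)
```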

3.9. WICM: Connecting VNFs to the WAN

Figure 3-8 illustrates the positioning of the WICM in the T-NOVA architecture and its relationship with the other T-NOVA architectural components. The WICM is responsible for the integration of VNFs and WAN connectivity services, and takes any actions required by events related to the customer subscription status (e.g. a new VNF subscribed by the customer modifies the end-to-end traffic path).

the centralization in a small number of NFVI-PoPs (economical but with a strong impact on existing WAN services). This discussion is beyond the scope of T-NOVA.


Figure 3-8: WICM overall vision

The role played by the WICM depends on which of the cases A or B, defined above, applies. In case A, the WICM is in charge of controlling network nodes at the access/aggregation network segment, whereas in case B the WICM should also control the network nodes (e.g. MPLS PEs) located deeper in the service provider network. In T-NOVA, case A should be considered as preferred for implementation and demonstration purposes.

3.9.1.1. SDN-enabled Network Elements

Today's WANs are becoming more complex, and the introduction of SDN-aligned approaches holds great potential for managing and monitoring network devices and the traffic which flows over them. The benefits are mainly related to automated network provisioning and to the flexibility in link deployment between different data centres.

From a T-NOVA perspective, the WICM has responsibility for controlling and monitoring the physical network devices which are SDN-enabled, with the purpose of providing WAN connectivity between different NFVI-PoPs while considering both the SLAs required by VNFs and the efficient management of cloud infrastructure resources. The WICM and VIM Network Control modules will need to be coordinated by the Orchestrator in an appropriate manner, in order to set up VLANs among different virtual and physical SDN-enabled devices.

Although the deployment of SDN technologies is becoming more commonplace for intra-DC networking, the adoption of SDN at the transport level, and especially at Layers 0 to 1, is still very much in its infancy [19] [20]. When considering the transport layers, we encompass technologies at layer 0 (DWDM, photonics) and layer 1 (SONET/SDH and OTN). A key reason for the delay in SDN adoption in the transport network is the on-going transition from the analog to the digital domain. The mechanisms for dealing with analog attributes in optical networks are vendor specific, and it is not possible for a generic controller to deal with the current myriad of vendor-specific implementations. Nor is it possible for network operators to remove all of their transport equipment from their networks and replace it with standardised optical hardware based around open standards. However, at higher layers (i.e. the WAN) the adoption path is potentially more expeditious. Companies like Google, the Carrier Ethernet community (MEF) and the cloud business units of network operators have already adopted SDN solutions for their WANs [21].

For the purpose of the T-NOVA proof-of-concept demonstration, the WICM development of SDN-compatible devices is out of scope, as it is not part of the objectives of the project, which


are mostly focused on the NFVI-PoP network management and control. However, at an architectural level it is fully supported, to ensure appropriate future-proofing.

3.9.1.2. Legacy Network Elements

Managing and monitoring legacy network domains is a well understood and mature capability. A variety of standardised and proprietary frameworks have been proposed and implemented in the form of commercial and open source solutions.

The WICM is the T-NOVA architectural component in charge of enabling interoperability between NFVI-PoP and WAN connectivity services, including legacy service models such as MPLS-based IP-VPN, or Carrier Ethernet E-LAN and E-Line.

As noted above, the integration of NFVI-PoPs in legacy WAN services can be more or less complex, depending on factors such as the logical location of the NFVI-PoP, namely whether the connection between the logical and physical locations of the VNFs crosses any WAN segment.

As outlined in the SOTA review (Section 4.3), the standard method for allowing network overlays over legacy network domains is the exploitation of L2-to-L3 tunnelling mechanisms. These mechanisms introduce the use of tunnelling protocols that allow the management of each tunnel on an end-to-end basis. The most interesting tunnelling protocols, from a T-NOVA architecture perspective, are VxLAN and Generic Routing Encapsulation (GRE). In short, these protocols allow the interconnection of NFVI-PoPs over an L3 legacy network by encapsulating the NFVI network traffic (actually DC traffic) end to end. They require the setup of an end-point for each connected DC, which is responsible for the encapsulation and decapsulation of packets.
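For reference, the VxLAN encapsulation step at such an end-point prepends the 8-byte header defined in RFC 7348, in which a 24-bit VNI identifies the tenant segment; the sketch below shows only the header handling, not a complete tunnel implementation:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08000000  # 'I' flag set, per RFC 7348

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header prepended by the encapsulating
    end-point (the VNI occupies bits 8..31 of the second 32-bit word)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack(">II", VXLAN_FLAG_VNI_VALID, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Recover the VNI at the decapsulating end-point."""
    _, word2 = struct.unpack(">II", header)
    return word2 >> 8
```

The round trip of encapsulation and decapsulation is how two DC end-points keep each tenant's traffic separated over the legacy L3 network.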

At the WAN level, the architecture design of T-NOVA supports any type of legacy WAN technology (i.e. MPLS, optical, Carrier Ethernet, etc.) or SDN-compatible technology, provided that the appropriate interfaces are developed. In this context, and for the sake of demonstrating the UC as discussed in D2.1, T-NOVA will exploit an IP/MPLS transport network and provide a simple implementation of the WICM in order to support the provisioning of vNets in an end-to-end manner.

From a monitoring perspective, there are a number of frameworks, ranging from passive to active and hybrid, that allow the monitoring of legacy networks [22] [23]. T-NOVA will employ such mechanisms in order to adequately monitor the status of the network and, more importantly, the status and resource usage of the established tunnels. Monitoring of the transport network resources will also be part of the WICM functionalities that will be implemented. Moreover, the monitoring information will be conveyed to the NFVO as an input to the mapping and path computation mechanism. Historical monitoring data will also be collected to facilitate the tracking of SLA breaches.


4 T-NOVA VNF AND NS PROCEDURES

This section illustrates the set of most common procedures associated with the deployment and management of VNFs and NSs (e.g. on-boarding, instantiation, monitoring, scaling, etc.) in order to illustrate the interactions between the T-NOVA Orchestrator and IVM architecture FEs that have been described in Sections 2 and 3. The flow diagrams presented in this section serve as a means to validate the architectures for the T-NOVA Orchestrator and IVM layers and their constituent architectural components. In addition, the flow diagrams also illustrate the interactions at the FE level and validate the purpose and capabilities of the interfaces that have been identified in Sections 2 and 3. As stated above, the sequence diagrams are intended to illustrate specifically the interaction of the entities of the Orchestration and IVM layers; however, some of the details of the internal actions of each module are not illustrated, as these are implementation specific and depend on the technologies utilised.

While a conceptual exercise, the description and illustration of the key VNF and NS deployment and management workflows provide a means to stress-test the architectural decisions taken and to capture necessary refinements prior to the implementation-related activities in WP3/4. Subsection 4.1 focuses on VNF related procedures, whereas subsection 4.2 is centred on NS related procedures.

4.1 VNF related procedures

To describe the VNF related procedures, the following assumptions are made:

• The VNF is composed of one or more VNFCs;

• Each VNFC has a dedicated VM;

• VNFCs are interconnected through Virtual Network Links;

• The VNF, as well as its constituent VNFCs, is instantiated within a single data centre.

Depending on the kind of network service the VNF provides (e.g., a firewall must be connected to specific network addresses), the deployment of a VNF instance may also imply an interaction with the WICM.

The VNF details (e.g. deployment rules, scaling policies, and performance metrics) are described in the VNF Descriptor.
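A much-simplified, hypothetical VNF Descriptor consistent with the assumptions above might look as follows (the field names are illustrative, not the normative VNFD schema):

```python
# Hypothetical, much-simplified VNF Descriptor.
VNFD = {
    "vnf_id": "vFirewall-001",
    "vnfcs": [                       # one dedicated VM per VNFC
        {"id": "vnfc-a", "vm_flavour": "medium"},
        {"id": "vnfc-b", "vm_flavour": "small"},
    ],
    "virtual_links": [{"id": "vl-1", "endpoints": ["vnfc-a", "vnfc-b"]}],
    "deployment_rules": {"single_datacentre": True},
    "scaling_policies": [{"metric": "cpu_util", "threshold": 0.8,
                          "action": "scale_out"}],
    "performance_metrics": ["cpu_util", "traffic_mbps"],
}

def validate_links(vnfd: dict) -> bool:
    """Check that every virtual link endpoint refers to a declared VNFC."""
    vnfc_ids = {c["id"] for c in vnfd["vnfcs"]}
    return all(ep in vnfc_ids
               for vl in vnfd["virtual_links"] for ep in vl["endpoints"])
```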

4.1.1 On-boarding

VNF on-boarding (Figure 4-1) refers to the process of making the T-NOVA Orchestrator aware that a new VNF is available on the NF Store.


Figure 4-1: VNF On-boarding Procedure

Steps:

1. Upload the VNF metadata.

2. The NFVO processes the VNFD to check if the mandatory elements are provided.

3. The NFVO uploads the VNFD to the VNF Catalogue.
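Steps 2 and 3 can be sketched as a mandatory-element check followed by a catalogue insert (the mandatory field names below are assumptions for illustration):

```python
# Sketch of the NFVO's on-boarding check before cataloguing a VNFD.
MANDATORY = ("vnf_id", "vnfcs", "virtual_links")

def onboard(vnfd: dict, catalogue: dict) -> bool:
    """Reject a VNFD missing mandatory elements; otherwise store it."""
    missing = [f for f in MANDATORY if f not in vnfd]
    if missing:
        raise ValueError(f"VNFD rejected, missing: {missing}")
    catalogue[vnfd["vnf_id"]] = vnfd  # step 3: upload to the VNF Catalogue
    return True

catalogue = {}
```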

4.1.2 Instantiation

VNF instantiation (Figure 4-2 and Figure 4-3) refers to the process of creating and provisioning a VNF instance. Figure 4-2 shows the instantiation process from the perspective of the Orchestration layer, whereas Figure 4-3 shows the instantiation process from the IVM layer point of view.


Figure 4-2: VNF Instantiation Procedure (Orchestrator's View)

Steps (VNF Instantiation – Orchestrator's View):

1. The NFVO calls the VNFM to instantiate the VNF, with the instantiation data.

2. The VNFM validates the request and processes it.

3. The NFVO performs resource mapping.

4. The NFVO requests from the VIM the allocation of the resources (compute, storage and network) needed for the VNF instance.

5. The VIM requests the VNF image from the NF Store.

6. The NF Store returns the requested image to the VIM.

7. The VIM instantiates the required compute and storage resources from the infrastructure; for further details see the VNF Instantiation Procedure – IVM's View.

8. The VIM instantiates the internal connectivity network – a VNF may require dedicated virtual networks to interconnect its VNFCs (networks that are only used internally to the VNF instance); for further details see the VNF Instantiation Procedure – IVM's View.

9. The VIM interconnects the instantiated internal connectivity network with the VNFCs; for further details see the VNF Instantiation Procedure – IVM's View.


10. Acknowledgement of the completion of the resource allocation back to the NFVO.

11. The NFVO acknowledges the completion of the resource allocation back to the VNFM, returning the appropriate configuration information.

12. After the VNF is instantiated, the VNFM configures the VNF with any VNF-specific lifecycle parameters (deployment parameters).

13. The VNF sends an acknowledgement to the VNFM that the configuration process is completed.

14. The VNFM acknowledges the completion of the VNF instantiation back to the NFVO.

The diagram corresponding to steps 8-10 is shown below.

Figure 4-3: VNF Instantiation Procedure (IVM's View)

The specific details of steps 8-10 are as follows:

8.1. The VIM Orchestrator Service submits a request to the VIM Compute Control module to create new VMs, according to the VNF requirements.

8.2. The VIM Compute Control module:


• Processes the request;

• Analyses the required configuration;

• Selects one or more suitable compute nodes to host the VMs;

• Sends a request to allocate the VMs to the selected nodes.

8.3. The selected NFVI compute node(s) allocate the VMs.

8.4. The NFVI compute nodes send back an acknowledgment to the VIM Compute Control module when they successfully boot.

8.5. The VIM Compute Control module sends back an acknowledgment to the VIM Orchestrator Agent.

9.1. The VIM Orchestrator Agent submits a request to the VIM Network Control module for the allocation of network resources.

9.2. The VIM Network Control module:

• Processes the request;

• Analyses the required configuration;

• Sends a request for the allocation of virtual network resources on the NFVI network.

9.3. The NFVI network:

• Allocates the resources;

• Sets up the virtual networks, by configuring the required interconnectivity between the virtual switches.

9.4. The NFVI network sends back an acknowledgment to the VIM Network Control module.

9.5. The VIM Network Control module sends back an acknowledgment to the VIM Orchestrator Agent.

10.1. The VIM Orchestrator Agent submits a request to the VIM Hypervisor Control module to attach the new VMs to the required virtual networks.

10.2. The VIM Hypervisor Control module sends the request to the hypervisors controlling the nodes hosting the new VMs.

10.3. The hypervisors of those nodes:

• Configure the vSwitches in order to manage the VLAN connectivity necessary to support the VNF requirements;

• Set up the interconnections among VMs and vSwitches.

10.4. The hypervisors send back acknowledgments to the VIM Hypervisor Control module.

10.5. The VIM Hypervisor Control module sends back an acknowledgment to the VIM Orchestrator Agent.


4.1.3 Monitoring

VNF monitoring from the Orchestration layer's point of view (Figure 4-4) refers to the process of checking whether the values of a set of predefined VNF parameters are within defined limits, including the infrastructure-specific parameters collected and reported by the IVM, as well as the VNF application/service-specific parameters.

VNF monitoring from the IVM layer's point of view (Figure 4-5) refers to the monitoring process of the VIM, including the VM performance metrics, the VL performance metrics, and the physical machine performance metrics. From an Orchestration layer perspective the sequence is as follows:

Figure 4-4: VNF Monitoring Procedure (Orchestrator's View)

Steps (VNF Monitoring – Orchestrator's View):

1. The VNF collects/calculates metrics related to the VNF application/service.

2. The VNF notifies the VNFM with the VNF application-specific metrics.

3. The VIM collects performance metrics related to the infrastructure allocated for the VNF; for further details see the VNF Supervision Procedure – IVM's View.

4. The VIM notifies the VNFM with the VNF infrastructure-related performance metrics.

As far as the monitoring procedure from the IVM layer's point of view is concerned, the sequence is as follows:


Figure 4-5: VNF Supervision Procedure (IVM's View)

Steps (VNF Supervision – IVM's View):

1. The NFVI Hypervisors:

a. Collect VM performance metrics data;

b. Send the data to the VIM Hypervisor Control module.

2. The VIM Hypervisor Control module sends the data to the VIM VNF Manager Agent.

3. The NFVI Network devices:

a. Collect performance metrics data from the virtual network links (VLs);

b. Send the data to the VIM Network Control module.

4. The VIM Network Control module sends the collected data to the VIM VNF Manager Agent.

5. The NFVI Compute nodes:

a. Collect the physical performance metrics data;

b. Send the data to the VIM Compute Control module.

6. The VIM Compute Control module sends the data to the VIM Manager.
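The supervision flow above follows one pattern three times: an NFVI domain pushes its metrics to the corresponding VIM control module, which forwards them upward. A minimal sketch of that fan-in, with hypothetical names and metric values chosen purely for illustration:

```python
# Illustrative sketch of the IVM supervision flow: each NFVI domain's
# metrics end up at the VIM VNF Manager Agent. Names and values are
# assumptions, not the actual T-NOVA data model.

class VNFManagerAgent:
    """Collects the metrics forwarded by the three VIM control modules."""
    def __init__(self):
        self.received = []

    def push(self, domain, metrics):
        self.received.append((domain, metrics))

def supervise(agent):
    # Steps 1-2: hypervisors -> VIM Hypervisor Control -> VNF Manager Agent
    agent.push("hypervisor", {"vm-1": {"cpu_util": 0.42}})
    # Steps 3-4: network devices -> VIM Network Control -> VNF Manager Agent
    agent.push("network", {"vl-1": {"bitrate_mbps": 95.0}})
    # Steps 5-6: compute nodes -> VIM Compute Control -> VIM Manager
    agent.push("compute", {"node-1": {"ram_used_gb": 12.5}})

agent = VNFManagerAgent()
supervise(agent)
```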

4.1.4 Scale-out

VNF scale-out refers to the process of creating a new VM to increase the VNF capacity with additional compute, storage, memory and network resources.

The scaling policies that indicate which/how/when VMs/VNFCs should be scaled are identified in the "VNF Deployment Flavour" attribute, which is part of the VNF Descriptor (VNFD).

Figure 4-6 illustrates this case for scaling out: VNFC A is instantiated and a new VL is created to connect this new instance to the existing VNFC B instance. VNF scaling is further detailed in another deliverable of this project, D2.42 [5].
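A deployment-flavour scaling policy of the kind described above can be sketched as a small data structure plus a threshold check. The field names below are assumptions for illustration and do not reflect the actual T-NOVA VNFD schema.

```python
# Hypothetical sketch of a "VNF Deployment Flavour" scaling policy.
# Field names and the 80% threshold are illustrative assumptions.

flavour = {
    "flavour_id": "gold",
    "vnfcs": [{"id": "VNFC-A", "min_instances": 1, "max_instances": 4}],
    "scaling_policy": {
        "metric": "cpu_util",
        "scale_out_threshold": 0.80,   # add a VNFC instance above 80% CPU
    },
}

def needs_scale_out(flavour, current_metric, current_instances):
    """Return True when the policy threshold is exceeded and the flavour
    still allows more VNFC instances."""
    policy = flavour["scaling_policy"]
    limit = flavour["vnfcs"][0]["max_instances"]
    return (current_metric > policy["scale_out_threshold"]
            and current_instances < limit)
```

The check combines the monitored metric with the instance ceiling, so a saturated flavour never scales past its declared maximum.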


Figure 4-6: Scaling out a VNF

Figure 4-6 illustrates the VNF scale-out process. Note that VNF scaling (out or in) must always be triggered at the NFVO level, since the need to scale is determined by monitoring of the NS, and not only of a particular VNF. Scaling an NS is a complex decision that may or may not involve scaling the VNF instances that are part of the particular NS instance that needs to be scaled (e.g., the bottleneck might be in the network, and not in any of the VNF instances).

4.1.5 Termination

VNF termination (Figure 4-7 and Figure 4-8) refers to the process of releasing a VNF instance, including the network and VM resources allocated to it. Figure 4-7 shows the termination process from the perspective of the Orchestrator layer, whereas Figure 4-8 shows the termination process from the IVM-internal point of view.


Figure 4-7: VNF Termination Procedure – Orchestrator's View

Steps (VNF Termination – Orchestrator's View):

1. The NFVO calls the VNFM to terminate the VNF service. The VNF termination procedure can be triggered, for example, by the following actions:

• Termination of the NS in which the VNF is instantiated;

• Scale-in of the NS, requesting a specific VNF instance to be terminated;

• Explicit request from the SP or the FP to remove the VNF.

2. The VNFM gracefully shuts down the VNF, i.e. without interrupting the NS that is being delivered, if necessary in coordination with other management entities. The VNF image(s) will be maintained in the NF Store (in order to be instantiated again in the future). The VNF catalogue is not affected by the VNF termination.

3. The VNFM acknowledges the completion of the VNF termination back to the NFVO.

4. The NFVO requests deletion of the VNF resources by the VIM.

5. Virtual network links (VLs) interconnecting the VMs are released; for further details see the VNF Termination Procedure – IVM's View.

6. VM resources (compute, storage and memory) used by the VNF are released; for further details see the VNF Termination Procedure – IVM's View.

7. An acknowledgement indicating the success or failure of the resource release is sent back to the NFVO.

• The infrastructure resources repository is updated automatically.


Figure 4-8: VNF Termination Procedure – IVM's View

Steps (VNF Termination – IVM's View):

5.1. The VIM Orchestration Service submits a request to the VIM Hypervisor Controller for detaching the VM(s) from the virtual network;

5.2. The VIM Hypervisor Control module forwards the request to the NFVI Hypervisors involved;

5.3. The NFVI Hypervisors detach the VM(s) from the network;

5.4. The NFVI Hypervisors send back an acknowledgement to the VIM Hypervisor Control;

5.5. The VIM Hypervisor Control sends back an acknowledgement to the VIM Orchestrator Agent;

6.1. The VIM Orchestrator Agent submits a request to the VIM Network Controller for releasing virtual network resources;

6.2. The VIM Network Control module forwards the request to the NFVI Network domain;

6.3. The NFVI Network Infrastructure releases the network resources and sends back an Ack;

6.4. The VIM Network Control sends back an Ack to the VIM Orchestrator Agent;

6.5. The VIM Orchestrator Agent submits a request to the VIM Compute Controller for releasing virtual compute resources;

6.6. The VIM Compute Control module forwards the request to the involved NFVI Compute nodes;

6.7. The NFVI Compute nodes release the compute resources and send back an Ack;

6.8. The VIM Compute Control sends back an Ack to the VIM Orchestrator Agent.
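The teardown above is order-sensitive: VMs are detached from their networks before the network resources are released, and compute resources are released last. A minimal sketch of that sequence, with hypothetical names:

```python
# Illustrative sketch of the IVM-internal termination ordering
# (steps 5.1-6.8). Function and label names are assumptions.

def terminate_vnf(vms, networks):
    """Release a VNF's resources in the order mandated by the workflow:
    detach VMs first, then release networks, then release compute."""
    log = []
    for vm in vms:
        log.append(("detach", vm))            # steps 5.1-5.5
    for net in networks:
        log.append(("release_network", net))  # steps 6.1-6.4
    for vm in vms:
        log.append(("release_compute", vm))   # steps 6.5-6.8
    return log

log = terminate_vnf(["vm-1"], ["vl-1"])
```

Releasing a virtual link while a VM is still attached to it could leave the vSwitch in an inconsistent state, which is why the detach phase comes first.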


4.2 NS-related procedures

To describe the NS procedures, the following assumptions are made:

• The NS is composed of one or more VNFs (in the following procedures, two VNFs – VNF1 and VNF2 – compose the NS);

• The VNFs composing the NS are interconnected through Virtual Network Links (if the VNFs run in the same DC) or through non-virtual/legacy network links (if the VNFs run in different DCs) (in the following procedures, VNF1 runs in DC1 and VNF2 runs in DC2);

• The NS constituent VNFs can be implemented in a single DC or spread across several DCs;

• Besides VNFs, PNFs can also be part of the NS (in the following procedures, PNF1 is interconnected with VNF1).

The NS details (e.g. deployment rules, scaling policies, performance metrics, etc.) are described in the NSD, e.g. the VNF Forwarding Graph detailing the VNF interconnections.

4.2.1 On-boarding

NS on-boarding (Figure 4-9) refers to the process of submitting an NSD to the NFV Orchestrator in order for it to be included in the catalogue.

Figure 4-9: NS On-boarding Procedure

Steps:

1. The Marketplace submits the NSD to the NFVO for on-boarding the NS.

2. The NFVO processes the NSD to check if the mandatory elements are provided.

3. The NFVO notifies the catalogue for insertion of the NSD.

4. The NFVO acknowledges the NS on-boarding.
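Step 2 of the procedure above (checking for mandatory NSD elements before catalogue insertion) can be sketched as a simple validation gate. The mandatory-field list below is an assumption for illustration; it is not the actual T-NOVA NSD schema.

```python
# Hypothetical sketch of NSD on-boarding validation (steps 2-4).
# MANDATORY_NSD_FIELDS is an illustrative assumption, not the real schema.

MANDATORY_NSD_FIELDS = {"ns_id", "vnfds", "vnffg", "deployment_flavours"}

def on_board(nsd, catalogue):
    """Validate the NSD and, if complete, insert it into the catalogue."""
    missing = MANDATORY_NSD_FIELDS - nsd.keys()
    if missing:
        return {"status": "rejected", "missing": sorted(missing)}
    catalogue[nsd["ns_id"]] = nsd       # step 3: insert into the catalogue
    return {"status": "on-boarded"}     # step 4: acknowledge

catalogue = {}
ok = on_board({"ns_id": "ns1", "vnfds": [], "vnffg": {},
               "deployment_flavours": []}, catalogue)
bad = on_board({"ns_id": "ns2"}, catalogue)
```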

4.2.2 Instantiation

NS instantiation refers to the instantiation of a new NS: Figure 4-10 shows the Orchestrator's view, and Figure 4-11 the IVM's view.

As stated above, the next sequence diagram depicts a situation where the VNFs composing the NS run in different DCs and are interconnected through non-virtual/legacy network links.


Figure 4-10: NS Instantiation Procedure (Orchestrator's View)

Steps (Orchestrator's View):

1. The NFVO receives a request to instantiate a new NS.

2. The NFVO validates the request, both checking its validity (including validating that the sender is authorised to issue the request) and confirming that the parameters passed are technically correct.

3. The Orchestrator checks the NS composition (e.g. VNFFG) in the NS catalogue.

4. The NFVO executes any required pre-allocation processing work, e.g. VNF location selection, resource pool selection, dependency checking. For further details see the VNF Instantiation Procedure – Orchestrator's View.

5. The NFVO requests the WICM to set up the WAN resources required for interconnecting the VNFs across the DCs (resource establishment phase).

6. The WICM configures the WAN resources between DC1 and DC2.

7. The WICM sends an acknowledgment to the NFVO reporting that the WAN has been configured as requested.

8. The NFVO sends a request to the VIM to interconnect the WAN ingress and egress routers to the DC VLANs (connectivity establishment phase).

9. The VIM interconnects the configured WAN resources with VNF1 and VNF2 in DC1 and DC2, respectively.


10. The VIM acknowledges completion of the WAN/VLAN configuration:

• If necessary, the NFVO requests the Network Manager to connect the VNF external interfaces to PNF interfaces:

1. The Network Manager can be an OSS, an NMS or an EM;

2. Connection to PNFs is assumed to be done by the NFVO.

11. The NFVO acknowledges completion of the NS instantiation.
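The two-phase WAN establishment in the sequence above (resource phase via the WICM, then the connectivity phase via each DC's VIM) can be sketched as follows. The stub classes and the "instantiated"/"failed" return values are illustrative assumptions.

```python
# Hypothetical sketch of the two-phase inter-DC NS instantiation.
# FakeWICM/FakeVIM are stand-ins; real components would talk to the
# transport network and to the per-DC infrastructure.

class FakeWICM:
    """Stand-in for the WICM; always reports a successful WAN setup."""
    def setup_wan(self, src_dc, dst_dc):
        return True

class FakeVIM:
    """Stand-in for a per-DC VIM; always stitches its VLAN successfully."""
    def connect_vlan_to_wan(self):
        return True

def instantiate_ns(wicm, vims):
    # Resource establishment phase (steps 5-7): WAN segment between DCs.
    if not wicm.setup_wan("DC1", "DC2"):
        return "failed: wan"
    # Connectivity establishment phase (steps 8-10): per-DC VLAN stitching.
    for vim in vims:
        if not vim.connect_vlan_to_wan():
            return "failed: vlan"
    return "instantiated"

result = instantiate_ns(FakeWICM(), [FakeVIM(), FakeVIM()])
```

The design point is that VLAN stitching is only attempted once the WAN segment exists; a failure in the first phase aborts the second.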

Figure 4-11: NS Instantiation Procedure (IVM's View)

Steps (IVM's View):

1. The WICM Orchestrator Agent sends the request to the WICM Network Control to configure the WAN endpoints.

2. The WICM Network Control configures the endpoints.

3. The WICM Network Control sends an acknowledgement indicating success or failure of the configuration setup to the WICM Orchestrator Agent.

4. When the acknowledgement of a successful WAN configuration is received, the NFVO sends a request to the VIM Orchestrator Agent to connect a VLAN to the WAN endpoints. The VIM Orchestrator Agent sends the request to connect a VLAN to the WAN endpoints to the VIM Network Controller.

5. The VIM Network Controller sends the request to connect a VLAN to the WAN endpoints to the NFVI Network.

6. The NFVI Network connects the VLAN to the WAN endpoints.


7. The NFVI endpoint sends an acknowledgement of a successful or failed connection configuration to the VIM Network Controller.

8. The VIM Network Controller sends an acknowledgement of a successful or failed connection to the VIM Orchestration Agent.

4.2.3 WAN Connections

The set-up of a WAN connection, triggered by a customer subscribing to a new service that requires WICM intervention, is shown in Figure 4-12. The sequence of steps that require WICM intervention is as follows:

Steps:

1. The Customer purchases a VNF service through the Marketplace dashboard, possibly indicating the logical location of the service (e.g. a specific edge point of his network).

2. The T-NOVA Orchestrator forwards the request to the WICM in order to allocate the WAN/NFVI-PoP network connectivity.

3. If step 2 is successful, the WICM returns the VLAN ID to be used in the configuration of the VNF instances.

4. The VIM instantiates the resources required to support the new VNFs and configures the NFVI-GW to appropriately map external and internal VLAN IDs.

5. If step 4 is completed successfully, the Orchestrator notifies the WICM and the WICM enforces customer traffic to be diverted towards the newly instantiated VNFs.
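Step 4 above hinges on the NFVI-GW mapping the WICM-assigned external VLAN ID to the VLAN used inside the NFVI-PoP. A minimal sketch of such a mapping table, with invented VLAN IDs purely for illustration:

```python
# Hypothetical sketch of the NFVI-GW external/internal VLAN ID mapping
# described in step 4. The table layout and IDs are illustrative.

class NfviGateway:
    def __init__(self):
        self.vlan_map = {}          # external VLAN ID -> internal VLAN ID

    def map_vlan(self, external_id, internal_id):
        """Record the mapping for a newly provisioned service."""
        self.vlan_map[external_id] = internal_id

    def translate(self, external_id):
        """Translate the external ID of an incoming frame's VLAN tag."""
        return self.vlan_map[external_id]

gw = NfviGateway()
gw.map_vlan(3001, 101)   # e.g. WICM-assigned VLAN 3001 -> internal VLAN 101
```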

Figure 4-12: WAN Connection Procedure

4.2.4 Supervision

NS supervision (Figure 4-13) refers to the monitoring of the NS performance metrics, including:

• VNF infrastructure and service-specific information;

• Network links interconnecting the VNFs (across multiple DCs).

Again, as stated above, the next sequence diagram depicts a situation where the VNFs composing the NS run in different DCs and are interconnected through non-virtual/legacy network links.


Figure 4-13: NS Monitoring Procedure

Steps:

1. The VNFM sends performance metrics related to a VNF to the NFVO. This includes both metrics from the infrastructure supporting the VNF (VMs, Virtual Links), obtained directly from the VIM, and application/service-specific metrics from the VNF, obtained directly from the VNF or from the EM. For further details about the VNF performance metrics retrieval, please check the "VNF Supervision Procedure", where the option to forward aggregated information to the NFVO has been taken by the VNFM.

2. The WICM fetches and delivers the WAN segment metrics to the NFVO.

3. The VIM delivers VM-related metrics to the NFVO:

• Based on the metrics received and on the defined scaling policies included in the NSD, the NFVO may decide to trigger a scaling procedure. Furthermore, if configured, the NFVO will also deliver aggregated NS-related performance metrics to the Marketplace.

4.2.5 Scale-out

NS scale-out refers to the process of increasing the capacity of the service, either to meet an SLA that is changing to a new NS deployment flavour, or to maintain an existing SLA.

The scale-out policies are triggered based on the following information:

• NS issues (retrieved from the VNFM);

• VNF issues (retrieved from the VNFM);

• WAN segment issues, i.e. in the links connecting the VNFs (retrieved from the WICM).

Figure 4-14 illustrates the NS scaling case, where a second instance of VNF B is created and the existing VNF A instance is reconfigured so that it becomes able to communicate with the two VNF B instances.

Figure 4-14: Scaling out an NS


Figure 4-15 refers to a situation where, according to the metrics received, the required performance of the SLA cannot be achieved. As such, the NFVO decides to scale out to a new deployment flavour, also taking into account the auto-scaling policies included in the NSD. This implies:

• Scaling out a specific VNF to a new deployment flavour (included in this workflow);

• Creation of a new VNF instance (included in this workflow);

• Changing the VNF location to another DC (not included in this workflow);

• Increasing the WAN segment network link capacity (included in this workflow).

Note: Scaling of NSs within the T-NOVA system is still a work in progress. The specifics will be finalised during the course of WP7, where the final T-NOVA solution including scaling will be implemented.


Figure 4-15: NS Scale-out

Steps:

1. The NFVO collects and checks monitoring information from the VNF, VIM and the WAN.

2. Based on the retrieved performance metrics and on the auto-scaling policies defined in the NSD, the NFVO detects the need to scale out the NS.

3. The NFVO decides to change to another NS deployment flavour, which comprises the VNF1 scale-out, a new instantiation of VNF2#2 and an increase in the WAN link capacity:

• The other procedures (VNF scale-out, VNF instantiation and WAN interconnection) have already been described in the previous workflows.

4.2.6 Termination

NS termination (Figure 4-16) refers to the process of releasing the NS instance, including the constituent VNFs (VNF1, VNF2#1 – instance 1 of VNF2 – and VNF2#2 – instance 2 of VNF2), as well as the WAN segment.

Figure 4-16: NS Termination Procedure

4.3 NS, VNF and Infrastructure Monitoring

Proper NS/VNF as well as infrastructure monitoring is crucial for the implementation of many of the use cases foreseen for the T-NOVA system; see D2.1 [6]. In particular, UC2 (Provision NFV services), UC3 (Reconfigure/Rescale NFV services) and UC5 (Bill NFV services) use the monitoring metrics, which are collected during UC4 (Monitoring). Specifically, the latter presents an overview of the T-NOVA monitoring procedures, focusing on both:

1. VNF and Service Monitoring – related to the status and resources of the provisioned services, as well as

2. Infrastructure Monitoring – related to the status and resources of the physical infrastructure.

Monitoring metrics are mostly collected at the various domains of the Infrastructure layer and communicated to the upper layers. Figure 4-17 shows a high-level view of the flow of the monitoring information across the T-NOVA system.


Figure 4-17: Communication of monitoring information across the T-NOVA system

The first step is the collection of both infrastructure and NS/VNF monitoring metrics from different domains within the T-NOVA Infrastructure layer. These metrics are typically offered by a dedicated monitoring agent at each physical or virtual infrastructure element. In addition, some compute node and VNF/VM metrics can be collected directly via the hypervisor, which also provides host and guest machine resource information.

Table 4.1 presents an indicative list of monitoring metrics to be collected at each infrastructure domain. It must be noted that this list is tentative and is expected to be modified/updated throughout the course of the project, as the specific metrics actually needed for each step of the T-NOVA service lifecycle are precisely defined.

Table 4.1: Monitoring metrics per infrastructure domain

Domain: VNF (VM, guest machine)
VNF App/Service Metrics: CPU utilisation; CPU time used; No. of vCPUs; RAM allocated; Disk read/write bitrate; Network interface in/out bitrate; No. of processes
Infrastructure Metrics: -

Domain: Compute (per compute node, host machine of the DC domain)
VNF App/Service Metrics: -
Infrastructure Metrics: CPU utilisation; RAM allocated; Disk read/write bitrate; Network interface in/out bitrate

Domain: Storage (object or volume storage of the DC domain)
VNF App/Service Metrics: Read/write bitrate; Volume usage (volume storage); No. of objects (object storage); Total objects size (object storage)
Infrastructure Metrics: Total read/write bitrate; Total volume usage (volume storage); Total no. of objects (object storage); Total objects size (object storage)

Domain: Infrastructure Network (per network element of the DC domain)
VNF App/Service Metrics: Per-flow packets, cumulative and per second; Per-flow bytes, cumulative and per second; Flow duration
Infrastructure Metrics: Per-port packets, cumulative and per second; Per-port bytes, cumulative and per second; Per-port receive drops; Per-port transmit drops; Per-port link state and speed; CPU utilisation

Domain: Transport Network (per network element)
VNF App/Service Metrics: Per-flow packets, cumulative and per second; Per-flow bytes, cumulative and per second; Flow duration
Infrastructure Metrics: Per-port packets, cumulative and per second; Per-port bytes, cumulative and per second; Per-port receive drops; Per-port transmit drops; Per-port link state and speed; CPU utilisation

Infrastructure metrics, as well as events/alerts, are aggregated by the VIM and the WICM and communicated to the Orchestrator FEs, the NFVO and the VNFM, which are in charge of associating each group of metrics with specific VNFs and Network Services. Those parameters are then compared against the NS and VNF templates, composed by a.o. the NSD and the VNFD, which denote the expected NS and VNF behaviour. If any mismatch is detected, appropriate actions are selected and executed, e.g. scaling.

This procedure needs to be carried out at the Orchestrator level, since only the latter has the entire view of the end-to-end service as well as of the resources allocated to it. In this context, and in addition to the double check (NS and VNF) mentioned above, the monitoring process at the Orchestrator, as part of the NFVO and VNFM, performs the following operations:

• Aggregates infrastructure-related metrics and updates the Infrastructure Resources records;

• Associates metrics (e.g. VM and flow metrics) with specific services, produces an integrated picture of the deployed service and updates the NS/VNF Instances records;

• Checks monitored parameters against the NS and VNF templates, composed by the NSD and the VNFD, which denote the expected NS and VNF behaviour; if any mismatch is detected, appropriate actions are selected and executed, e.g. scaling;

• Communicates service metrics to the Marketplace via the Orchestrator northbound interface.
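The template check described above (monitored values compared against expected bounds from the NSD/VNFD, with a corrective action on mismatch) can be sketched as follows. The field names, bounds and action labels are assumptions for illustration, not the actual descriptor schema.

```python
# Hypothetical sketch of the Orchestrator's template mismatch check.
# Template field names, bounds and actions are illustrative assumptions.

def check_against_template(template, monitored):
    """Return (metric, action) pairs for every out-of-bounds value."""
    actions = []
    for metric, bounds in template.items():
        value = monitored.get(metric)
        if value is not None and not (bounds["min"] <= value <= bounds["max"]):
            actions.append((metric, bounds["action"]))
    return actions

# Example bounds that a VNFD-derived template might declare.
vnfd_template = {
    "cpu_util": {"min": 0.0, "max": 0.85, "action": "scale_out"},
    "latency_ms": {"min": 0.0, "max": 50.0, "action": "migrate"},
}
```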

At the Marketplace level, service metrics are exploited for accounting (especially in pay-as-you-go billing models) as well as for SLA Management, in order to compare the status of the service against the contracted one. They are also presented via the Dashboard to the Customer, so that he/she can have a consolidated overall view of the status of the service and the resources consumed.


5 GAP ANALYSIS

Considering both existing industry-oriented initiatives and currently available technologies that are being used commercially (or are at a development stage), a focused gap analysis was carried out to determine what steps need to be taken in order to move NFV/SDN from its current state to a position where it can fully realise the needs of the key areas that must be addressed during the project activities. In the following, the key results of this gap analysis are described for all the domains relating to the T-NOVA Orchestrator and T-NOVA IVM architectures. Where possible, the gaps have been aligned with the T-NOVA tasks in which they could be either elucidated or progressed towards a resolution.

5.1 Compute

The Compute domain of the T-NOVA architecture provides the basic building blocks for VNF/NS execution. The gap analysis for this domain identified two key areas that need to be addressed during the project activities:

Table 5.1: Gap analysis in the compute domain

Gap: Virtualisation infrastructure for telecommunication workloads
Description: Features requested by workloads need to be investigated in greater detail, and special-purpose processors (or co-processors) have to be integrated in compute domain infrastructures. Those enhanced features also have to be exposed to the upper layer of the architecture, in order to make them available from an orchestration perspective.
T-NOVA Task Alignment: Task 3.2, Task 4.1

Gap: Interoperability between different hardware architectures
Description: Compute architecture heterogeneity needs to be improved in current compute domain infrastructures to provide a greater diversity of options for certain NFV workload types. Support for heterogeneity should not only apply in terms of multi-vendor technologies, but also in terms of the different hardware architectures required by different VNFs.
T-NOVA Task Alignment: Task 4.1

5.2 Hypervisor

The Hypervisor domain is responsible for abstracting VNFs from the underlying physical resources. The main gaps that have to be addressed by the T-NOVA system for this domain are:

Table 5.2: Gap analysis in the Hypervisor domain

Gap: Integration of vSwitches with hypervisors and vNICs
Description: The hypervisors are responsible for the integration between vSwitches and vNICs. However, this integration currently needs further performance improvements. In order for the T-NOVA platform to provide the required level of performance, it is necessary to address these performance features.
T-NOVA Task Alignment: Task 4.1

Gap: Portability of virtual resources across different platforms
Description: In order to support the live migration of VNFs, all the vSwitch solutions need to be described using the same syntax, providing the T-NOVA system with a common interface to allow portability on different platforms and support live migration by the hypervisor.
T-NOVA Task Alignment: Task 3.2

Gap: Processor pinning
Description: Some VNF vendors use a dedicated CPU placement policy, which strictly 'pins' the vCPUs to a set of host physical CPUs. This is normally done to maximise performance, such as L3 cache hit rates. However, this can make migration by the hypervisor challenging, since the same pinning allocation of vCPUs to physical cores must be guaranteed on the destination system.
T-NOVA Task Alignment: Task 4.1

Gap: Lightweight virtualisation
Description: Deployment of VNFs using VMs is currently the default approach. While this approach has a number of advantages, such as security, deployment times, deployment density and scalability are challenging, particularly for scaling actions. Alternative approaches based on containers and cloud OSs, delivering lightweight and fast (<1 sec) deployments, need to be demonstrated and fully evaluated as viable alternatives. Solutions for the management of deployments of containers or unikernels, such as DCOS, Brooklyn, etc., need to be investigated.
T-NOVA Task Alignment: Task 4.1

5.3 SDN Controllers

The SDN Controller domain is responsible for the control of the SDN-enabled network elements regarding the deployment and management of the vNets. The main issues that need to be addressed include:

Table 5.3: Gap analysis regarding SDN Controllers

Gap: Control-plane workload balance
Description: Research activities are required to mitigate the various scalability and availability issues due to SDN control plane centralisation, where a cluster-based approach would also have to be considered.
T-NOVA Task Alignment: Task 3.3, Task 4.2

Gap: Bottleneck avoidance
Description: Research activities are required to address large-scale, high-load deployment scenarios in order to avoid bottlenecks, where a cluster-based approach would necessitate consideration.
T-NOVA Task Alignment: Task 3.3, Task 4.2

Gap: Uniform northbound interface
Description: Currently a uniform interface for all SDN controllers does not exist, which increases the complexity of the development process. Work is required on the abstraction of northbound interfaces to support application developers.
T-NOVA Task Alignment: Task 4.3

5.4 Cloud Controllers

The Cloud Controller represents the central management system for cloud deployments. Ranging from basic to more advanced, the T-NOVA project will investigate gaps in the current solutions within this domain, which fall short with respect to the following aspects:

Table 5.4: Gap analysis regarding Cloud Controllers

Gap: Interoperability between different cloud computing platforms
Description: From a T-NOVA Orchestrator perspective, a unified southbound interface is needed, in order to make the APIs of different IaaS providers accessible and to enable communication with different cloud computing platforms. Even if most cloud platforms have widely adopted open standards, there are still inconsistencies with respect to the different versions and APIs; inconsistencies that need to be concealed under a single interface.
T-NOVA Task Alignment: Task 3.1

Gap: Resource allocation and configuration
Description: From a T-NOVA VNF perspective, cloud controllers need to support the allocation and configuration of network resources and allow enhanced resource management. Existing cloud management solutions need to be extended to provide an interface that would allow, for instance, enhanced configuration of network parameters.
T-NOVA Task Alignment: Task 3.2, Task 3.3, Task 3.4

Gap: Providing infrastructure utilisation data
Description: Monitoring and collecting information with respect to the current load and availability of resources is a crucial capability for the T-NOVA system, as it will be a cloud management framework that needs to perform in real time and support very high-volume traffic. Although cloud platforms already offer monitoring capabilities, the challenge with respect to T-NOVA is to collect the information that is relevant for VNFs and to represent it such that the Orchestration layer can best utilise it.
T-NOVA Task Alignment: Task 3.2, Task 3.4, Task 4.4

Gap: Platform awareness
Description: Cloud environments need to become more aware of the features and capabilities of their constituent resources and to expose these enhanced platform features to the Orchestration layer to improve the placement decisions for workloads such as VNFs. A common data model to describe resources is needed, which could be used to identify specific features like DPDK enablement or SR-IOV capable devices.
T-NOVA Task Alignment: Task 3.2, Task 4.1

5.5 Network Virtualisation

Network virtualisation introduces several gaps and open issues that need to be addressed by the T-NOVA project:

Table 5.5: Gap analysis regarding Network Virtualisation

Gap: Network resource isolation
Description: VNs are by definition based on shared resources, and this brings up the isolation problem, especially when the number of VNs sharing the same infrastructure is very high. On the other hand, the strictness of isolation varies according to the specific use case. In a Network-as-a-Service scenario, isolation will obviously be a fundamental requirement. Isolation is required between the VNs running separate telco services. Within the T-NOVA platform, a VN might require complete isolation while another one might share its resources when it is idle, depending on the business model and the type of the specific service.
T-NOVA Task Alignment: Task 4.2

Gap: Multiple DC interconnection
Description: There are limitations in supporting distributed cloud service provisioning and the requirement for VNs to span multiple computing farms; seamless networking handover technologies are still immature or inefficient, potentially complicating service delivery processes or business models. The T-NOVA platform needs to manage this complexity, since one of its main features is to support service deployment over different DCs.
T-NOVA Task Alignment: Task 4.2, Task 4.5

Gap: Reliability of a virtual network (VN)
Description: Reliability is ultimately determined by the dependability of the underlying infrastructure. Virtualisation introduces an additional level of complexity and represents a potential extra source of failure. VNs must be reliable, at least as reliable as a physical network counterpart. Today most of the available products with network virtualisation capabilities are mainly targeted at the high-end segment of the market. On the other hand, very promising, flexible and adaptable technologies such as OpenFlow are perceived as research tools and have not yet reached a point of maturity that enables large-scale deployment.
T-NOVA Task Alignment: Task 4.2, Task 4.5

Gap: Interoperability between different heterogeneous domains
Description: Standardisation activities are required, especially for the interconnection of non-contiguous network domains. Interoperability is a crucial requirement to enable widespread deployment of network virtualisation. Standardisation will be required to enable interoperability between VNs, as well as interoperability between virtualised and non-virtualised networks.
T-NOVA Task Alignment: Task 4.5

Gap: Scalability and dynamic resource allocation
Description: Dealing with the increasing number of services, as well as subscribers for each service, will be challenging. The importance of scalability as a network virtualisation requirement is particularly relevant when the number of VNs is expected to grow. New solutions are required to help the T-NOVA system scale with respect to the network size (for instance, reducing the size of OpenFlow tables).
T-NOVA Task Alignment: Task 4.1

5.6 NFV Orchestrator

The NFV Orchestrator is responsible for the lifecycle management of NSs and VNFs. Several open issues are still to be addressed:

Table 5.6: Gap analysis regarding Orchestration

Gap: VNF placement
Description: Standardisation bodies should address the definition and implementation of algorithms for optimal infrastructure allocation according to the virtualised service characteristics and SLA agreements.
T-NOVA Task Alignment: Task 3.3

Gap: Interoperability between heterogeneous virtualised domains
Description: Standardisation activities are required in order to deliver end-to-end services that have virtualised components distributed across multiple domains, owned and operated by different virtual service and/or infrastructure providers.
T-NOVA Task Alignment: Task 3.1

Gap: Virtual and physical network function orchestration
Description: Enhancements are required in standardisation bodies, such as ETSI MANO, to address data centre WAN link interconnectivity configuration and orchestration issues.
T-NOVA Task Alignment: Task 3.2, Task 3.4

Gap: Network service scaling
Description: Enhancements are required in the NS Descriptor to address the scaling of Network Services (composed of more than one interconnected VNF). This is still a work in progress in T-NOVA, which will eventually generate one or more proposed changes in standardisation bodies, such as ETSI MANO.
T-NOVA Task Alignment: Task 3.4, WP6


6 CONCLUSIONS

Virtualisation is a foundational technology for the T-NOVA system in particular for theinfrastructurevirtualisationandmanagementlayer.Originatinginthecomputedomain,useof this approach now finds application in a variety of domains including storage andnetworking.Theapproachisbasedonabstractingphysicalresourcesintoaformthathidesits physical characteristics fromeither users or applications. It brings a variety of benefitsincluding better utilisation of resources, greater flexibility, improved scalability, etc.Virtualisation encompasses a number of technology approaches, at both a hardware andsoftwarelevel.

In the courseof reviewing the various virtualisation technologies and considering them inboththecontextofthetelecomsserviceprovidersandthepotentialneedsoftheT-NOVAsystem a variety of gaps in the capabilities of the currently available technologies wereidentified (see Tables 5-1 to 5-6). While the use of virtualisation technologies in the ITdomain is well established, adoption of this approach in carrier grade environments tosupport NFV and SDN proliferation brings a unique set of challenges that do not exist inenterprise ITenvironments.A varietyof furtherdevelopmentswill be required toaddressspecific issues in the currently available compute, hypervisor, SDN Controller, Cloud OSs,networkvirtualisationandorchestrationrelatedtechnologies.Whereappropriate,wehavemappedT-NOVA taskswhoseactivitieswill be related to these technologies challengesorlimitations.ItisexpectedthatwewillfurtherrefinethesegapstospecificissuesidentifiedinimplementationoftheT-NOVAsystem,andhighlightprogressthathasbeenmadeinfurtherelucidating the characteristics of these problems, as well as the work that T-NOVA hascarriedoutinordertocontributetowardsasolution.

From an architectural point of view, the T-NOVA Orchestrator is composed of two main building blocks: the NFVO and the VNFM. The NFVO has two main responsibilities, which are accomplished by its two FEs, designated within the T-NOVA project as the NSO and the VRO. The NSO orchestrates the subset of NFVO functions responsible for the lifecycle management of Network Services (NSs), while the VRO performs the orchestration and management of the virtualised infrastructure resources distributed across multiple DCs. In particular, it maps incoming NS requests to the available virtualised infrastructure resources, and coordinates resource allocation and placement for each VM that composes a VNF (and hence an NS). The VRO does this by interacting with the VIM, as well as with the WICM, which interacts with the WAN elements for connectivity management purposes.

Since NSs are composed of VNFs, the NFVO is able to decompose each NS into its constituent VNFs. Although the NFVO has knowledge of the VNFs that compose an NS, it delegates their lifecycle management to a dedicated FE, designated the VNFM.
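This NS-to-VNF decomposition and delegation pattern can be sketched in a few lines. The class and method names below are illustrative assumptions for the sketch, not T-NOVA interfaces:

```python
from dataclasses import dataclass


@dataclass
class VNF:
    name: str


@dataclass
class NetworkService:
    name: str
    vnfs: list  # constituent VNFs listed in the NS descriptor


class VNFM:
    """Dedicated FE that manages the lifecycle of individual VNFs."""

    def instantiate(self, vnf: VNF) -> str:
        return f"{vnf.name}: instantiated"


class NFVO:
    """Decomposes an NS into its VNFs and delegates their lifecycle."""

    def __init__(self, vnfm: VNFM):
        self.vnfm = vnfm

    def instantiate_ns(self, ns: NetworkService) -> list:
        # The NFVO knows the constituent VNFs but does not manage their
        # lifecycle itself; each VNF is delegated to the VNFM.
        return [self.vnfm.instantiate(vnf) for vnf in ns.vnfs]


ns = NetworkService("vSecurity", [VNF("vFirewall"), VNF("vDPI")])
results = NFVO(VNFM()).instantiate_ns(ns)
```

The point of the sketch is the separation of concerns: the NFVO holds the NS-level view, while per-VNF lifecycle operations live behind the VNFM.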

In Section 2, the architecture of the T-NOVA Orchestrator is presented, taking into account the list of Orchestrator requirements identified from various sources, such as the use cases defined in D2.1, the ETSI ISG NFV requirements, the ITU-T requirements for network virtualisation, as well as excerpts from the relevant parts of the ETSI ISG MANO WG architecture. The reference architecture and the functionalities outlined in Section 2.3 are now being implemented by the various tasks in WP3.

The T-NOVA IVM is responsible for providing the hosting and execution environment for network services, in the form of virtualised resources that are abstracted from the physical resources in the compute and infrastructure network domains. A system engineering approach was adopted to define the key functional blocks of the IVM and their interfaces. In addition, the key objectives for the IVM were defined. This information, together with previous T-NOVA deliverables, contributed to a requirements capture process focused on identifying the desired behaviours of the IVM. Requirements were identified for each of the functional entities within the IVM, and these requirements were then used in the design of the IVM.

From an architectural perspective, the T-NOVA IVM includes three key functional blocks, namely the NFVI, the VIM and the WICM, which are defined and discussed in Section 3. These functional blocks comprise various domains that provide the specific technology capabilities required for the delivery and management of virtualised resources. For example, the NFVI comprises the Compute, Hypervisor and Network domains. A number of specific interfaces provide both the internal and external connectivity that integrates the various technology components into a functional IVM. From a T-NOVA system perspective, the key external interfaces of the IVM are those to the T-NOVA Orchestrator, which are implemented in the VIM. These interfaces enable the T-NOVA Orchestrator to send requests to the VIM to create and connect VMs in order to support the deployment of VNF services, and to manage the virtual resources allocated to the VNFs in order to meet SLAs. Additionally, these interfaces allow the IVM to send infrastructure utilisation and performance metrics to the T-NOVA Orchestrator, so that this entity can make placement decisions and manage existing deployed services. Another important interface is the one provided by the WICM to the Orchestrator, which allows it to manage the network resources related to external networks, i.e. WAN transport between DCs.

Collectively, these reference architectures and FEs instantiate the requirements identified for the T-NOVA Orchestrator and the T-NOVA IVM, together with their goals and objectives. The reference architectures were interrogated and validated at the functional level through the development of sequence diagrams, as outlined in Section 4, which describe the key actions and interactions within the T-NOVA system during standard operational activities related to the deployment and management of NSs and VNF services.


ANNEX A - ORCHESTRATOR REQUIREMENTS

This annex contains a set of tables covering the requirements identified in Section 2, i.e. the Orchestrator internal requirements and the interface requirements.

Each requirement has an associated set of attributes related to its identification (Req. ID and Req. Name), its supporting text (Alignment) and its description (Requirement Description and complementary Comments).


A.1 Internal requirements

A.1.1 NFVO Requirements

A.1.1.1 NS Lifecycle requirements

Table 6.1: Orchestrator Requirements - NFVO - NS Lifecycle

NFVO.1 - On-boarding NS (Alignment: ETSI MANO §C.2.1)
Requirement: The NFVO SHALL be able to accept new or updated NSs to be on-boarded, upon request.
Comments: The NS lifecycle is initiated by an NS on-boarding request from the SP, and includes providing the NS Descriptor (NSD) to the NFVO and storing it in the NS Catalogue. The NSD, which must be validated, includes the NS deployment flavours, as well as references to the VNFs used, the VNF Forwarding Graphs (VNFFGs) and the Virtual Link Descriptors (VLDs) needed to implement it.

NFVO.2 - NS Instantiation request (Alignment: ETSI MANO §C.3)
Requirement: The NFVO SHALL be able to instantiate an already on-boarded NS, upon request.
Comments: In order to start a T-NOVA service, a new instance from the NS Catalogue has to be deployed, usually at the request of the SP (in some scenarios, a Customer or even an OSS may also trigger that request) through the Marketplace. During the instantiation process, the NFVO validates the request, e.g. by authorising the requester and validating the technical content.

NFVO.3 - Extraction of specific deployment NS information (Alignment: T-NOVA)
Requirement: The NFVO SHALL be able to extract from the incoming NS instantiation request the set of information required to proceed with the instantiation of that NS.
Comments: After receiving the NS instantiation request, the NFVO extracts the information about the NS instantiation (deployment flavours, VNFFG, VLDs, etc.).

NFVO.4 - Configure NS (Alignment: T-NOVA)
Requirement: The NFVO SHALL be able to configure, or update the configuration of, an instantiated NS upon request.
Comments: T-NOVA NSs must be configurable upon external request, e.g. from the Customer or the SP. As an NS is the result of the composition of atomic VNF instances and their interconnections, the configuration of an NS may imply the configuration of the entire set of VNFs.

NFVO.5 - NS Termination (Alignment: UC1)
Requirement: The NFVO SHALL be able to de-allocate all the resources that have been allocated to a specific NS instance when the NS SLA terminates, when the NS instance is terminated by internal triggers, or when the NS instance is terminated upon request, e.g. by the Customer or the SP.
Comments: The duration of the NS will be specified in the SLA. Alternatively, the SLA or the NS instance can be terminated on demand, e.g. by the Customer or the SP. When the NS instance is no longer needed, the system should be able to de-allocate all the resources initially allocated to that specific NS instance and cancel the SLA. Unless the SP asks to remove the NS, it remains on-boarded so that other customers can buy and request new instances of it. NS instances cannot be paused.

NFVO.6 - Scale-out NS (Alignment: ETSI MANO §C.4.1)
Requirement: The NFVO SHALL be able to scale out the NS, either upon request or automatically.
Comments: Automatic scaling out depends on an algorithm and on the alternative architectures or deployment flavours provided by the SP when composing the NS. Scaling out an NS might imply increasing the number of VMs supporting its VNF(s).

NFVO.7 - Scale-in NS (Alignment: ETSI MANO §C.4.2)
Requirement: The NFVO SHALL be able to scale in the NS, either upon request or automatically.
Comments: Automatic scaling implies the use of an algorithm and the alternative architectures or deployment flavours provided by the SP when composing the NS. Scaling in an NS might imply decreasing the number of VMs supporting its VNF(s).
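The automatic scale-out/scale-in behaviour required by NFVO.6 and NFVO.7 amounts to a monitored metric being compared against thresholds. A minimal, illustrative sketch of such a trigger follows; the metric, threshold values and action names are assumptions for the sketch, not T-NOVA APIs:

```python
# Illustrative automatic-scaling trigger in the spirit of NFVO.6/NFVO.7.
# Thresholds and action labels are hypothetical.

def ns_scaling_action(avg_vm_load: float,
                      scale_out_at: float = 0.80,
                      scale_in_at: float = 0.30) -> str:
    """Decide whether an NS should add or release supporting VMs."""
    if avg_vm_load >= scale_out_at:
        return "scale-out"   # add VMs supporting the NS's VNF(s)
    if avg_vm_load <= scale_in_at:
        return "scale-in"    # release under-used VMs
    return "steady"          # no scaling action required


assert ns_scaling_action(0.9) == "scale-out"
assert ns_scaling_action(0.2) == "scale-in"
assert ns_scaling_action(0.5) == "steady"
```

In the T-NOVA design the actual decision would additionally consult the deployment flavours and alternative architectures supplied by the SP in the NSD.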

A.1.1.2 VNF Lifecycle requirements

Table 6.2: Orchestrator Requirements - NFVO - VNF Lifecycle

NFVO.8 - On-boarding VNF Package Request (Alignment: §B.2.1)
Requirement: The NFVO SHALL be able to receive new or updated versions of existing VNF packages from the NF Store and store them in the VNF Catalogue.
Comments: A VNF package includes the VNF Descriptor (VNFD), the VNF software image(s) and the VNF version.

NFVO.9 - VNF Instantiation Request by the NFVO (Alignment: ETSI MANO §B.3.1.2)
Requirement: The NFVO SHALL be able to instantiate a VNF, upon request.
Comments: VNF instantiations occur in the context of NS instantiation requests. When a request to instantiate a VNF is received, the NFVO validates the request. The NFVO acknowledges the completion of the VNF instantiation after configuring the VNF through the VNFM.

NFVO.10 - VNF Configuration Request by the NFVO (Alignment: UC2, UC3)
Requirement: The NFVO SHALL be able to request the VNFM to configure an instantiated VNF.
Comments: T-NOVA VNFs must be configurable by external request, e.g. from the Customer or the SP, or automatically upon the completion of an instantiation process. This includes the notification of successful configuration.

NFVO.11 - VNF Scale Out (Alignment: UC3.2)
Requirement: By monitoring the NS data, the NFVO SHALL recognise an SLA that is about to be breached and act by requesting the VNFM to scale out the NS's VNFs (allocating more VMs to those VNFs).
Comments: The T-NOVA system must provide the ability to request additional VMs in order to meet business needs.

NFVO.12 - VNF Scale In (Alignment: UC3.2)
Requirement: By monitoring the NS data, the NFVO SHALL recognise an SLA that is about to be breached and act by requesting the VNFM to scale in the NS's VNFs (releasing VMs used by instances of the VNF).
Comments: The T-NOVA system must provide the ability to reduce the number of VMs, or to completely remove a VNF, as required by changing business needs.

NFVO.13 - VNF Scale Up (Alignment: UC3.1)
Requirement: By monitoring the NS data, the NFVO SHALL recognise an SLA that is about to be breached and act by requesting the VNFM to scale up the NS's VNFs (increasing the specified amounts of VM resources for the VMs used by instances of the VNF).
Comments: The T-NOVA system must provide the ability to increase the resources of a VNF in order to meet business needs.

NFVO.14 - VNF Scale Down (Alignment: UC3.1)
Requirement: By monitoring the NS data, the NFVO SHALL recognise an SLA that is about to be breached and act by requesting the VNFM to scale down the NS's VNFs (decreasing the specified amount of allocated resources, such as memory and storage, of the VMs used by instances of the VNF).
Comments: The T-NOVA system must provide the ability to decrease the resources of a VNF in order to meet business needs.

NFVO.15 - Manage VM state (Alignment: T_NOVA_24, T_NOVA_25, T_NOVA_35, T_NOVA_27)
Requirement: The NFVO SHALL be able to request the VNFM to manage the VMs allocated to a given VNF.
Comments: We can assume a finite and small number of possible VM states, e.g. 'Being configured', 'Not running', 'Running', 'Being re-scaled', 'Being stopped'. It is assumed that when in the 'Running' state the VM is ready to be (re-)configured.
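The small, finite VM state set assumed in NFVO.15 lends itself to a transition-table sketch. The transitions and event names below are illustrative assumptions; only the state names come from the requirement text:

```python
# Illustrative VM state machine for the states assumed in NFVO.15.
# Events and allowed transitions are hypothetical.

TRANSITIONS = {
    ("Not running", "start"): "Being configured",
    ("Being configured", "ready"): "Running",
    ("Running", "re-scale"): "Being re-scaled",
    ("Being re-scaled", "ready"): "Running",
    ("Running", "stop"): "Being stopped",
    ("Being stopped", "done"): "Not running",
}


def next_state(state: str, event: str) -> str:
    """Apply an event to a VM state, rejecting undefined transitions."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} + {event!r}")


assert next_state("Running", "re-scale") == "Being re-scaled"
assert next_state("Being stopped", "done") == "Not running"
```

An explicit transition table keeps the VNFM's view of each VM unambiguous and makes illegal state changes fail fast.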


A.1.1.3 Resource Handling Requirements

Table 6.3: Orchestrator Requirements - NFVO - Resource Handling

NFVO.16 - Mapping of resources (Alignment: ETSI MANO §C.3)
Requirement: The NFVO SHALL be able to optimally map the VNFs that are part of an NS to the existing infrastructure, according to the agreed NS SLA.
Comments: Based on the current infrastructure status, the requested VNF and the SLA, the NFVO, using resource mapping algorithms, must be able to find the resources it should allocate, in terms of PoPs and their connections, or to declare that the NS cannot be instantiated. The SLA still needs to be detailed. Resource mapping can consider link latency or link delay, and look for connections among PoPs which guarantee that latency or delay thresholds are respected.

NFVO.17 - Resources instantiation/releasing
Requirement: The NFVO SHALL be able to request the VIM to instantiate or release the VMs that compose each VNF of the NS, and the connections between them.
Comments: During the NS instantiation/scaling procedures, for each VNF and its corresponding mapped PoP, the NFVO uses the OpenStack scheduler to decide the best location for the VMs, and it requests the VIM to allocate the required virtualised resources (i.e. VMs and their interconnections).

NFVO.18 - Management of VM images (Alignment: ETSI MANO §5.4.1, §5.4.3)
Requirement: The NFVO SHALL be able to manage the VM images related to the VMs supporting a given VNF.
Comments: The NFVO is the FE in charge of handling VMs and VM resources in the T-NOVA system. As such, it must be able to manage VM images, e.g. by providing the VIM with VM images for VNF on-boarding or updating, or by removing images upon VNF removal.

NFVO.19 - Resources Inventory Tracking (Alignment: UC3.1)
Requirement: The NFVO SHALL update its inventory of allocated/available resources when resources are allocated or released.
Comments: The T-NOVA system must maintain the Infrastructure Catalogue and accurately track resource consumption and the details of the services consuming those resources.
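The latency-constrained mapping idea in NFVO.16 can be sketched as a feasibility filter followed by a selection step. The PoP data, capacity model and greedy choice below are illustrative assumptions, not the T-NOVA resource mapping algorithm:

```python
# Hypothetical resource-mapping step in the spirit of NFVO.16: pick,
# for each VNF, a PoP with enough capacity whose link latency stays
# within the SLA threshold. All names and figures are illustrative.

pops = {
    "PoP-A": {"free_cpu": 16, "latency_ms": 4.0},
    "PoP-B": {"free_cpu": 4, "latency_ms": 1.5},
}


def map_vnf(cpu_needed: int, max_latency_ms: float):
    """Return the lowest-latency feasible PoP, or None when the NS
    cannot be instantiated under the given constraints."""
    feasible = [
        (p["latency_ms"], name)
        for name, p in pops.items()
        if p["free_cpu"] >= cpu_needed and p["latency_ms"] <= max_latency_ms
    ]
    if not feasible:
        return None  # NFVO.16: declare the NS cannot be instantiated
    return min(feasible)[1]


assert map_vnf(cpu_needed=8, max_latency_ms=5.0) == "PoP-A"
assert map_vnf(cpu_needed=32, max_latency_ms=5.0) is None
```

Returning `None` mirrors the requirement that the NFVO must be able to declare an NS non-instantiable rather than silently over-commit resources.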

A.1.1.4 Monitoring Process requirements

Table 6.4: Orchestrator Requirements - NFVO - Monitoring Process

NFVO.20 - NS-specific resource monitoring by the NFVO (Alignment: UC2, UC3, UC4)
Requirement: The NFVO SHALL be able to monitor NS-related resources on a real-time basis.
Comments: NS-specific monitoring relates to the monitoring of the virtual network links that interconnect the VNFs (retrieved from the VIM), as well as the monitoring of VNF-specific details that can be used to assure that the NS is fulfilling the SLA established with the customer.

NFVO.21 - Monitoring metrics consolidation by the NFVO (Alignment: UC4)
Requirement: The NFVO SHALL be able to aggregate and consolidate all monitoring metrics associated with a service.
Comments: A consolidated operational picture of the service via the dashboard is considered a mandatory customer requirement. The gathered metrics should be presented to the Customer, the SP or the FP, with an integrated status of the provisioned service, in one of the above layers.


A.1.1.5 Connectivity Handling requirements

Table 6.5: Orchestrator Requirements - NFVO - Connectivity Handling

NFVO.22 - Connectivity management (Alignment: ETSI MANO §C.3.6)
Requirement: The NFVO SHALL be able to request the VIM to instantiate or release the interconnections of the required VMs in the IT compute domain, in the network domain and in the infrastructure network.
Comments: During the NS instantiation/scaling procedures, after installing the VMs for the VNF/NS, the NFVO requests the VIM/WICM to allocate or release the required virtual network resources (i.e. virtual network links). This requirement includes notification of the VNFM of a removed VNF. VNFs with instances participating in NS instances cannot be removed until the NS instance stops and is requested to be removed. The interface is also used when re-instantiating VNF infrastructure (e.g. for performance or malfunction reasons).


A.1.1.7 Marketplace-specific interactions requirements

Table 6.6: Orchestrator Requirements - NFVO - Marketplace specific

NFVO.23 - Publish NS instantiation (Alignment: T-NOVA)
Requirement: The NFVO SHALL be able to notify that a requested (new or updated) NS instantiation is ready to be used.
Comments: The T-NOVA system must notify the relevant external entities upon successful instantiation of every VNF of a requested NS instance, and of the connections between them. In the case of the Marketplace, this notification may be used for accounting/billing purposes.

NFVO.24 - Publish NS metrics (Alignment: T-NOVA)
Requirement: The NFVO SHALL be able to publish the NS metrics, if considered in the SLA.
Comments: The T-NOVA system must provide NS metrics to external entities in order to enable service control.

NFVO.25 - VNF mapping of SLA data (Alignment: UC1.1)
Requirement: The NFVO SHALL be able to map the VNF monitoring parameters to the SLA attributes of each NS instance.
Comments: The results of selected offerings, materialised in SLAs, need to be translated into VNF attributes in order to be processed by the T-NOVA system, according to the contents of the VNF descriptor, where the amount of needed resources is indicated.

NFVO.26 - SLA enforcement request (Alignment: T-NOVA)
Requirement: The NFVO SHALL be able to take the required actions (e.g. scaling out, with the instantiation of more VMs and connections between them) upon a request to enforce an SLA.
Comments: It is assumed that the SLA provides all the information about the metrics and thresholds to be compared against, together with the NS descriptor providing alternative architectures or deployment flavours; e.g. scaling in when metrics show under-used resources should be automatic.

NFVO.27 - NS usage accounting and billing (Alignment: UC4, UC5)
Requirement: The NFVO SHALL store all the information about resource usage per service, and SHALL provide it to external entities to bill on a pay-per-use basis.
Comments: Pay-as-you-go may be considered attractive for some Customers as an option, as opposed to a flat rate. The NFVO should notify the relevant FEs so that the T-NOVA system becomes able to deploy this type of billing/charging.

A.1.2 VNFM Requirements

A.1.2.1 VNF Lifecycle requirements

Table 6.7: Orchestrator Requirements - VNFM - VNF Lifecycle

VNFM.1 - VNF Instantiation Request by the VNFM (Alignment: ETSI MANO §B.3.1.2)
Requirement: The VNFM SHALL be able to accept a request from the NFVO to instantiate a VNF.
Comments: After receiving a VNF instantiation request from the NFVO, the VNFM starts coordinating the VNF instantiation procedure. Nevertheless, the allocation of virtualised resources is under the scope of the NFVO, and the VNFM therefore has to request the latter to allocate the required resources for the VNF.

VNFM.2 - VNF Configuration Request by the VNFM (Alignment: UC2, UC3)
Requirement: The VNFM SHALL be able to accept an NFVO request to configure an instantiated VNF.
Comments: T-NOVA VNFs must be configured upon external request or following an external instantiation request. This includes the backwards notification of successful configuration.

VNFM.3 - Check of the VNFs part of an NS by the VNFM (Alignment: ETSI MANO §C.3)
Requirement: The VNFM SHALL be able to accept a request from the NFVO to check the VNFs associated with an NS, according to the NS descriptor.
Comments: For each VNF indicated in the NS descriptor, the VNFM checks whether a matching VNF exists in the VNF Catalogue.

VNFM.4 - VNF lifecycle automation by the VNFM (Alignment: UC2)
Requirement: The VNFM SHALL be able to automate the instantiation of VNFs and associated VM resources by triggering scaling procedures.
Comments: Automation of the VNF lifecycle is an essential characteristic of the T-NOVA system. The triggering of scaling procedures is based on the monitoring process maintained over VNFs, as well as on policy management and other internal algorithm criteria. The exact resources on which the VNF will run have to be requested from the NFVO.

A.1.2.2 Monitoring Process requirements

Table 6.8: Orchestrator Requirements - VNFM - Monitoring Process

VNFM.05 - VNF-specific resource monitoring by the VNFM (Alignment: UC2, UC3, UC4)
Requirement: The VNFM SHALL be able to monitor VNF-related resources on a real-time basis.
Comments: VNF-specific monitoring relates to the monitoring of information retrieved from the VIM about the virtualised infrastructure resources allocated to the VNF (i.e. the compute/storage/memory of the VMs and the virtual network links that interconnect the VMs), as well as the monitoring of VNF-specific metrics that can be used to assure that the VNF is behaving as it should.

VNFM.06 - Monitoring metrics consolidation by the VNFM (Alignment: UC4)
Requirement: The VNFM SHALL be able to aggregate and consolidate all VNF monitoring metrics associated with a service.
Comments: A consolidated operational picture of the service via the dashboard is considered a mandatory customer requirement. The collected metrics should be presented by the VNFM to the NFVO, and from this FE to the dashboard, with an integrated status of the provisioned service.
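The consolidation described in NFVO.21 and VNFM.06 is essentially a two-level aggregation: per-VNF samples are reduced first, then combined into one service-level figure. A minimal sketch, in which the metric name, sample values and choice of the mean as the aggregate are all illustrative assumptions:

```python
# Minimal sketch of metric consolidation (per-VNF, then service-wide).
# Metric names, values and the aggregation function are hypothetical.

from statistics import mean

samples = {
    "vFirewall": [{"cpu": 0.25}, {"cpu": 0.75}],
    "vDPI": [{"cpu": 0.5}, {"cpu": 1.0}],
}


def consolidate(per_vnf: dict) -> dict:
    """Average each VNF's samples, then report a service-wide mean."""
    per_vnf_cpu = {name: mean(s["cpu"] for s in vals)
                   for name, vals in per_vnf.items()}
    return {"per_vnf": per_vnf_cpu,
            "service_cpu": mean(per_vnf_cpu.values())}


view = consolidate(samples)
```

The resulting `view` is the kind of integrated, service-level status that the dashboard requirement calls for, rather than a flat list of raw samples.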


A.2 Interface requirements

A.2.1 Interface with VIM

The requirements identified for the interface with the VIM module are as follows:

Table 6.9: Requirements between the Orchestrator and VIM

Or-Vi.01 - Reserve/release resources (Domains: Orchestrator, VIM)
Requirement: The Orchestrator SHALL use this interface to request the VIM to reserve or release the entire set of infrastructure resources needed to provision a given VNF.
Comments: Care must be taken not to leave resources reserved but unprovisioned for long periods of time, which would impact the optimisation of resource usage. Resource reservation will not be implemented, due to the current lack of support in the chosen VIM (OpenStack).

Or-Vi.02 - Allocate/release/update resources (Alignment: T_NOVA_03, T_NOVA_21, T_NOVA_22, T_NOVA_26, T_NOVA_31, T_NOVA_33, T_NOVA_34, T_NOVA_36, T_NOVA_37, T_NOVA_38, T_NOVA_39, T_NOVA_40, T_NOVA_42, T_NOVA_43, T_NOVA_44, T_NOVA_45, T_NOVA_58 (6); Domains: Orchestrator, VIM)
Requirement: The Orchestrator SHALL use this interface to request the VIM to allocate, update or release the infrastructure and network resources needed to provision a given VNF.
Comments: It is assumed that configuration information is a resource update. A resource update might imply a stop and restart, with a migration in between. Exclusive use of this interface guarantees consistency between the actual resource status and the Orchestrator's view of it. This interface is also used to provision NFGs and inter-VNFC networks.

Or-Vi.03 - Add/update/delete SW image (Domains: Orchestrator, VIM)
Requirement: The Orchestrator SHALL use this interface to add, update or delete a SW image (usually for a VNF Component).
Comments: Performance will probably demand having these images ready to be deployed on the Orchestrator's side. These software images will be per VDU.

Or-Vi.04 - Retrieve infrastructure usage data (Alignment: UC4, T_NOVA_46; Domains: Orchestrator, VIM)
Requirement: The Orchestrator SHALL use an interface to collect/receive infrastructure utilisation data (network, compute and storage) and any required NFVI metric from the VIM.
Comments: Some of this data is used to determine the performance of the infrastructure (including failure notifications) and to inform decisions on where to provision newly requested services, or where to migrate an already provisioned NS that is predicted to break its SLA. This interface will very likely have to support very high volume traffic.

Or-Vi.05 - Retrieve infrastructure resources metadata (Alignment: UC4, T_NOVA_20; Domains: Orchestrator, VIM)
Requirement: The Orchestrator SHALL use an interface to request the infrastructure's metadata from the VIM.
Comments: Due to high performance needs, this data will most probably have to be cached on the Orchestrator's side.

Or-Vi.06 - Secure interfaces (Alignment: T_NOVA_02; Domains: Orchestrator, VIM)
Requirement: The interfaces between the Orchestrator and the VIM SHALL be secure, in order to avoid eavesdropping (and other security threats).
Comments: We should keep in mind that encrypting all the communication between these two entities will probably make a high-performance solution too costly.

Or-Vi.07 - VNF initialisation flows (Domains: Orchestrator, VIM)
Requirement: The interfaces between the Orchestrator and the VIM SHALL allow initialisation flows to be passed to the VIM when new VNFs are instantiated.
Comments: The interfaces between the Orchestrator and the VIM must allow not only the provisioning/deprovisioning of resources according to the "static" VNF description, but also the sending of initialisation commands for the VNF which can only be defined at instantiation time (e.g. IP addresses).

(6) Refers to the T-NOVA requirements described in deliverable D2.1 [6], "System Use Cases and Requirements", 2014. Available:


A.2.2 Interface with VNF

The requirements identified for the interface with the VNF module are as follows:

Table 6.10: Requirements between the Orchestrator and VNF

Vnfm-Vnf.01 - Secure interfaces (Alignment: T_NOVA_02; Domains: VNFM, VNF)
Requirement: All the interfaces between the VNFM and the VNF SHALL be secure, in order to avoid eavesdropping (and other security threats).
Justification: Required to avoid eavesdropping on the connection between the VNFM and each VNF. We should keep in mind that encrypting all the communication between these two entities will probably make a high-performance solution too costly.

Vnfm-Vnf.02 - Instantiate/terminate VNF (Domains: VNFM, VNF)
Requirement: The VNFM SHALL use this interface to instantiate a new VNF, or terminate one that has already been instantiated.
Justification: Required to create and remove VNFs during the VNF lifecycle.

Vnfm-Vnf.03 - Accept VNF instance run-time information (Alignment: T_NOVA_46, T_NOVA_48; Domains: VNFM, VNF)
Requirement: The VNFM SHALL use this interface to accept VNF instance run-time information (including performance metrics).
Justification: VNF instance run-time information is crucial both for VNF scaling (requested by the NFVO) and for showing Network Service metrics in the Marketplace's Dashboard.

Vnfm-Vnf.04 - Configure a VNF (Alignment: T_NOVA_23, T_NOVA_33; Domains: VNFM, VNF)
Requirement: The VNFM SHALL use this interface to (re-)configure a VNF instance.
Justification: In the general case, the Customer should be able to (re-)configure a VNF instance. This includes scaling.

Vnfm-Vnf.05 - Manage VNF state (Alignment: T_NOVA_24, T_NOVA_35, T_NOVA_58; Domains: VNFM, VNF)
Requirement: The VNFM SHALL use this interface to collect the state of a given VNF from the VNFs, or to request a state change (e.g. start, stop, etc.).
Justification: This interface includes collecting the state of the VNF (as well as changing it). The VNF instance should include a state like 'Ready to be used' when it is registered in the repository.

Vnfm-Vnf.06 - Scale VNF (Alignment: T_NOVA_36, T_NOVA_37, T_NOVA_38, T_NOVA_39, T_NOVA_42, T_NOVA_43, T_NOVA_44, T_NOVA_45; Domains: VNFM, VNF)
Requirement: The VNFM SHALL use this interface to request the appropriate scaling (in/out/up/down) metadata from the VNF.
Justification: VNF scaling depends on the (mostly architectural) options the FP provided when registering the VNF. The VNF scaling metadata is then used by the NFVO to request the VIM to allocate the required infrastructure.


A.2.3 Interface with Marketplace

The requirements identified for the interface with the Marketplace are as follows:

Table 6.11: Requirements between the Orchestrator and the Marketplace

NFVO-MKT.01 - Provide available VNFs (Alignment: UC1, T_NOVA_10, T_NOVA_15; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface with the Orchestrator to provide the Service Provider with a list of the VNFs, so that it can select and parameterise them, or use them in the composition of a new network service.
Justification: It is assumed that this VNF metadata includes a URL/repository name from which to fetch the actual VNF software and install it on the previously allocated infrastructure (see NFVO.10). Note that, although this information will most certainly have to be cached on the Orchestrator's side for performance reasons, the available VNFs will be dynamic, so updates to this cached information will be rather frequent.

NFVO-MKT.02 - Provision a new NS (Alignment: UC2, T_NOVA_04, T_NOVA_08, T_NOVA_20; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface to request the Orchestrator to accept a new NS, after the SP has composed it using the VNFs available in the VNF Catalogue.
Justification: The NS must have an NS Descriptor, with the list of composing VNFs, the connections between them, etc. Each NS can be composed of one or more VNFs. It is assumed that information about scaling (up/down/in/out) is included in the NSD, or that at least reasonable values can be inferred.

NFVO-MKT.03 - Provision a new NS instance (Alignment: UC2, T_NOVA_04, T_NOVA_08, T_NOVA_20; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface to request the Orchestrator to provision a new NS instance, after the Customer has selected and parameterised the network service.
Justification: The Orchestrator SHALL read the NSD and allocate the needed resources. The Marketplace must be notified of the service readiness.

NFVO-MKT.04 - Change configuration of an NS instance (Alignment: UC3, T_NOVA_31, T_NOVA_32, T_NOVA_33, T_NOVA_36, T_NOVA_42, T_NOVA_44; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface to change the configuration of an NS instance on the Orchestrator.

NFVO-MKT.05 - Provide NS instance state transitions information (Alignment: UC5, UC6, T_NOVA_03, T_NOVA_28, T_NOVA_29, T_NOVA_34, T_NOVA_41, T_NOVA_48, T_NOVA_56, T_NOVA_57, T_NOVA_58; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface to make a given NS instance change its state, and to determine which state transitions of a given NS instance have occurred, e.g. to facilitate starting and stopping billing for the service.
Justification: It is assumed that each NS has a pre-defined state diagram, with states like 'Ready to run', 'Running', 'Stopped', etc., that is also known to the Marketplace. It is assumed that the impact on dependent modules, such as billing, is taken care of by the Marketplace (see NFVO.04). SLA Management is part of the Marketplace. Either following a customer's request, or once the pre-defined end date has been reached, the SLA Management notifies the Orchestrator of the end of the SLA.

NFVO-MKT.06 - Provide NS monitoring data (Alignment: UC4, T_NOVA_28, T_NOVA_29, T_NOVA_30, T_NOVA_46, T_NOVA_52; Domains: Orchestrator, Marketplace)
Requirement: The Marketplace SHALL use this interface to show the Customer how the subscribed NS instance is behaving and how it compares to the agreed SLA, and to bill the service usage.
Justification: This interface will very likely have to support very high volume traffic.

NFVO-MKT.07 - Secure communication (Alignment: T_NOVA_02; Domains: Orchestrator, Marketplace)
Requirement: Interfaces between the Marketplace and the Orchestrator SHOULD be secured.
Justification: Encryption should be used in order to prevent eavesdropping, even between the Marketplace and the Orchestrator, since the Marketplace is really a set of distributed applications.


ANNEX B - VIRTUALISED INFRASTRUCTURE MANAGEMENT REQUIREMENTS

B.1 Virtual Infrastructure Management Requirements

Table 6.12: IVM Requirements - VIM

Req.id T-NOVAUseCaseAlignment

Domain RequirementName

RequirementDescription JustificationofRequirement

VIM.1 UC1,UC2.1 VIM Abilitytohandleheterogeneousphysicalresources

TheT-NOVAVIMSHALLhavetheabilitytohandleandcontrolbothITandnetworkheterogeneousphysicalinfrastructureresources.

BasicfunctionalrequirementoftheVIM.

VIM.2 UC1,UC2.1 VIM Abilitytoprovisionvirtualinstancesoftheinfrastructureresources

TheT-NOVAVIMSHALLbeabletocreatevirtualresourceinstancesfromphysicalinfrastructureresourcesuponrequestfromtheOrchestrator

RequiredtosupportVIMintegrationwiththeT-NOVAOrchestrator.

VIM.3 UC3/3.1/3.2 VIM APIExposure TheT-NOVAVIMSHALLprovideasetofAPI’stosupportintegrationofitscontrolfunctionswiththeT-NOVAOrchestrationlayer.

RequiredtosupportVIMintegrationwiththeT-NOVAOrchestrator

VIM.4 UC2.1 VIM Resourceabstraction

TheT-NOVAIVMlayerSHALLprovideresourceinformationattheVIMlevelfortherepresentationofphysicalresources.

RequiredtosupportVIMintegrationwiththeT-NOVAOrchestrator.

VIM.5 UC1.1UC1.3UC2.1UC4

VIM Abilitytosupportdifferentservicelevels

TheVIMcontrollerSHOULDprovidetheabilitytorequestdifferentservicelevelswithmeasurablereliabilityandavailabilitymetrics.

RequiredtosupportSLAsagreementswhenaserviceispurchasedintheT-NOVAMarketplace.

VIM.6 UC1 VIM Translationofreferencesbetweenvirtualandphysicalresourceidentifiers

TheVIMcontrollerSHOULDbeabletoassignIDstovirtualcomponents(e.g.,NFs,virtuallinks)andprovidethetranslationofreferencesbetweenlogicalandphysicalresourceidentifiers.

Needtotracktheuseofphysicalnetworkresources.

T-NOVA|DeliverableD2.32 SpecificationoftheInfrastructureVirtualisation,ManagementandOrchestration-Final

©T-NOVAConsortium

102

VIM.7 UC2

UC3

VIM Multi-tenancycompliance”

TheVIMcontrollerSHALLguaranteeisolationamongthedifferentvirtualnetworkresourcescreatedtoprovisiontherequestedservices.

T-NOVAsystemwillprovideservicesfordifferentcustomersthroughthecompositionanddeploymentofVNFs.Theseserviceswillsharethesamephysicalnetworkinfrastructure,atthesametimethus,isolationatthephysicallevelmustbeguaranteedforeachcustomer.

VIM.8 UC3.1,UC3.2,UC3.3UC4

VIM ControlandMonitoring

TheVIMSHALLbeabletoprovidetherequiredinformationtoallowthehigherlevelsystemlayerstovisualisethereal-timestatus,historicalperformanceandresourceutilisationofboththeunderlay(physical)andoverlay(virtual)networks.

T-NOVAsystemsmustbeabletovisualisethereal-timestatus,historicalstatusandresourceutilisationoftheunderlayandoverlaynetworkstosupportstandardoperationsactivitiessuchasbandwidthadjustments

VIM.9 UC3 VIM Scalability TheVIMcontrollerSHOULDscaleinaccordancetothenumberofvirtualresourceinstancesandphysicalnetworkdomains.

TheT-NOVAsystemshouldbeabletomanagealargenetworkinfrastructure.

VIM.10 UC1 VIM Networkserviceandresourcediscovery

TheVIMcontrollerSHOULDprovidemechanismstodiscoverphysicalnetworkresources.

TheOrchestratormustbeawareoftheavailablephysicalnetworkresources.

VIM.11 UC1.1UC2.1UC3.1UC3.2UC3.3UC4

VIM Specificationofperformanceparameters

TheVIMcontrollerSHOULDallowtheinfrastructureconnectivityservicestospecifyperformancerelatedparameters.

T-NOVAsystemshouldsupportahighlevelofcustomisationforthenetworkservice.

VIM.12 VIM Flowentrygeneration

TheVIMcontrollerSHOULDbeabletogenerateandinstallrequiredflowentriesforSDNswitchesforpacketforwardingandNFpolicyenforcement(i.e.,ensuretrafficwilltraverseasetofNFsinthecorrectorder).

RequiredtosupporttheNetworkServicedefinition.

VIM.13 VIM Virtualaddressspaceallocation

TheVIMcontrollerSHOULDbeabletoallocatevirtualaddressspaceforNFgraphs(virtualaddressesofNFsbelongingtodifferentgraphscouldoverlap).

Requiredtosupportisolationamongdifferentvirtualnetworkdomains.

VIM.14 UC4 VIM QoSsupport TheVIMcontrollerSHALLprovidemechanismstosupportQoScontrolovertheITinfrastructure.

Requiredtosupportthespecificperformanceneededbyanetworkservice.

VIM.15 VIM SDNControllerperformance

TheVIMcontrollerSHOULDminimisetheflowsetuptimemaximisingthenumberofflowspersecondthatitcansetup.

Requiredtoprovidearesponsiveconfigurationoftheunderlyinginfrastructure.

VIM.16 VIM VIM Controller Robustness

The VIM controller SHALL be able to contend with control plane failures (e.g., via redundancy) in a robust manner, and in particular to ensure persistence of the network configuration in the case of CP breakdown/resume cycles.

Required for resiliency in the T-NOVA system.

T-NOVA | Deliverable D2.32 Specification of the Infrastructure Virtualisation, Management and Orchestration - Final

© T-NOVA Consortium


VIM.17 VIM Hypervisor Abstraction and API

The VIM Controller SHALL abstract a basic subset of hypervisor commands in a unified interface and in a plug-in fashion. This includes commands such as start, stop and reboot.

Different hypervisors have various advantages, such as full hardware emulation or paravirtualisation. In order to get the best functionality and the best performance for a VM, different hypervisors must be supported.

VIM.18 VIM Query API and Monitoring

The VIM Controller SHALL have a Query API that allows other T-NOVA components to retrieve metrics, configuration and the hypervisor technology used per compute node.

The Orchestrator must be able to make the best decision regarding performance, functionality and the SLA for the creation of VMs. The Orchestrator requires information from the hypervisor and the compute infrastructure under its control to make placement and management decisions.

VIM.19 VIM VM Placement Filters

The VIM Controller SHOULD offer a set of mechanisms to achieve appropriate platform placement decisions for VNF deployments.

Some requirements set by the Orchestrator need a more specific placement of the VM. For example, a CPU core filter can be applied to the scheduler so that the VM is only placed on a compute node if more than 3 CPU cores are available.
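The CPU core filter mentioned in the justification can be sketched as a predicate applied to candidate hosts, in the spirit of filter-based schedulers. `cpu_core_filter` and the host dictionaries are hypothetical, not a VIM interface.

```python
# Hypothetical sketch of a placement filter: admit only compute nodes
# with more than 3 free CPU cores, per the VIM.19 justification example.
def cpu_core_filter(host, min_free_cores=4):
    return host["free_cores"] >= min_free_cores

hosts = [
    {"name": "node-a", "free_cores": 2},
    {"name": "node-b", "free_cores": 8},
]
candidates = [h["name"] for h in hosts if cpu_core_filter(h)]
```

In a filter-scheduler design, several such predicates (cores, memory, accelerator availability) would be chained, and a weigher would then rank the surviving hosts.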

VIM.20 VIM Base-Image Repository integration

The VIM Controller SHALL have an integration module to interact directly with the non-configured VM images that need to be deployed.

The compute controller must have access to the repository with the basic VM images. Those base images will be deployed with the flavour requested by the Orchestrator regarding configuration, disk space, CPU and memory.

VIM.21 UC4 VIM Virtualised Infrastructure Metrics

The VIM SHALL collect performance and utilisation metrics from the virtualised resources in the NFVI and report this data in raw or processed formats via a northbound interface to the Orchestrator.

The Orchestrator needs data to make scaling decisions on the VNFs making up a given service, based on the active SLA criteria.

B.2 WAN Infrastructure Connection Manager Requirements

Table 6.13: IVM Requirements – WICM

Req. Id. | T-NOVA Use Case Alignment | Domain | Requirement Name | Requirement Description | Justification of Requirement

WICM.1 UC2, UC3, UC6 WAN Management Customer traffic path steering

The WICM SHOULD be able to steer the customer traffic across the network infrastructure on an end-to-end basis, taking appropriate actions as a result of specific events (e.g. subscription/unsubscription of VNF services). Selected metrics and performance targets (e.g. latency minimisation, network resource consumption minimisation, load balancing) may also be taken into account in the determination of the end-to-end path.

The connectivity component of the T-NOVA service may have to be reconfigured as a result of an event related to the T-NOVA service. Events like subscription, unsubscription or reconfiguration of VNF services may imply modifications in the end-to-end path of the customer traffic.

WICM.2 UC2, UC3, UC6 WAN Management Compatibility with WAN technologies

The WICM SHOULD support current WAN connectivity services.

It is expected that current WAN connectivity services such as L2/L3 VPNs will dominate for the foreseeable future.

WICM.3 UC2, UC3 WAN Management Rapid service reconfiguration

The WICM SHOULD enable reconfiguration of WAN connectivity services in compliance with the expected performance envelope of the infrastructure.

Subscription/unsubscription of services is expected to take effect in near real time.

WICM.4 UC2, UC3 WAN Management QoS control

The WICM SHOULD control enforcement of QoS policies associated with a given customer profile.

The T-NOVA service is supposed to provide differentiated SLA levels.

WICM.5 UC2, UC3 WAN Management Customer ID mapping

The WICM SHALL be able to map the customer's WAN virtual circuit ID (e.g. VLAN ID, MPLS label) into the corresponding NFVI-PoP internal virtual network ID.

Consistent identification of customers/tenants across different network domains is required to guarantee end-to-end connectivity.
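At its core, the WICM.5 mapping is a lookup from a (customer, WAN circuit ID) pair to an internal virtual network ID. The table contents and `map_circuit` below are hypothetical, purely to illustrate the shape of the mapping.

```python
# Hypothetical sketch: (customer, WAN VLAN ID) -> NFVI-PoP internal
# virtual network ID. All identifiers here are invented for illustration.
wan_to_pop = {
    ("customer-42", 101): "net-7f3a",
    ("customer-42", 102): "net-9c1b",
}

def map_circuit(customer, vlan_id):
    """Resolve a customer's WAN circuit to the PoP-internal network ID."""
    try:
        return wan_to_pop[(customer, vlan_id)]
    except KeyError:
        raise LookupError(f"no internal network mapped for {customer}/VLAN {vlan_id}")
```

In practice this table would be populated by the WICM at service subscription time and consulted at the NFVI-PoP edge when customer traffic arrives.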

WICM.6 UC2, UC3 WAN Management Northbound interface

The WICM SHOULD expose a northbound interface to the T-NOVA orchestration layer.

Control of connectivity at WAN level should be consistent with the whole NFV resource control process.

WICM.7 WAN Management WAN Metrics

The WICM SHOULD collect metrics such as allocated bandwidth, available bandwidth etc. for each WAN link under its control and SHALL expose these metrics to the T-NOVA Orchestration layer via its northbound interface.

These metrics are required to support visualisation of NFVI-PoP interconnects within the Orchestrator Management UI. These metrics are also required as an input into the resource mapping algorithm.

B.3 NFV Infrastructure Requirements

B.3.1 Computing

Table 6.14: IVM Requirements – Computing

Req. Id. | T-NOVA Use Case Alignment | Domain | Requirement Name | Requirement Description | Justification of Requirement

C.1 UC2 Compute Nested/Extended

Hardware page virtualisation SHALL be utilised to improve performance.

Performance benefits from hardware page virtualisation are tied to the prevalence of VM exit transitions. CPUs should have large TLBs.


C.2 UC2, UC3 Compute Central Storage

A central storage subsystem (SAN) SHALL be available in order to enable advanced functionalities like VM migration and server clustering.

Required to support use cases 3/3.1/3.2. Also required for system resilience.

C.3 UC4 Compute No SPOF

All hardware components SHALL be deployed with proper redundancy mechanisms (e.g. redundant SAN switches and network switches) in order to avoid single points of failure.

Required to support use cases 3/3.1/3.2. Also required for system resilience.

C.4 UC4.1 Compute Performance

All hardware components SHALL satisfy specified performance parameters (e.g. IOPS and R/W operation ratio in the case of storage resources) in order to provide the required performance levels.

Required to guarantee proper SLAs.

C.5 UC2 Compute Hypervisor compatibility

Servers and storage SHALL be compatible with the chosen hypervisor(s).

Required to ensure basic system functionality and reliability.

C.6 UC2, UC3 Compute Central Storage – efficiency

Central storage SHALL support functionalities like Automatic Storage Tiering (AST), thin provisioning and deduplication, in order to reduce costs and improve efficiency and performance.

Required to support SLAs associated with VNF services.

C.7 UC4 Compute Compute Domain Metrics

The compute domain SHALL provide metrics and statistics relating to the capacity, capability and utilisation of hardware resources:

• CPU cores
• Memory
• IO (including accelerators)
• Storage subsystem

These metrics shall include both static and dynamic metrics.

This information is required at the Orchestration layer to make decisions about the placement of new VNF services, to manage existing services to ensure SLA compliance and to ensure system reliability.

C.8 UC3 Compute Power Management

The compute domain SHOULD provide power management functions that enable the VIM to remotely control the power state of the domain.

This capability may be required to meet SLA requirements on energy utilisation, service costs or time-of-day service settings. (Non-functional requirement)

C.9 UC2, UC3 Compute Hardware Accelerators

The compute domain SHALL support discovery of hardware (HW)/functional accelerators.

Certain VNF functions may require or experience performance benefits from the availability of co-processor cards such as FPGAs, MICs (e.g. Xeon Phi) or GPUs (e.g. Nvidia). The Orchestrator should be aware of these capabilities to ensure correct placement decisions.

C.10 UC2, UC3 Compute Hardware Accelerator Visibility

All HW accelerators SHOULD be able to expose their resources to the VIM Controllers.

The Orchestrator should be aware of accelerator capabilities available within an NFVI-PoP for placement of VNFs that can utilise these capabilities to improve their performance.


C.11 UC2, UC3 Compute Hardware Accelerator Virtualisation

HW accelerator resources MAY be virtualisable and this feature SHOULD be made available to the host processor.

Typically accelerator HW is not virtualisable, with the exception of GPUs. Virtualising the accelerator can provide allocation and scheduling flexibility.

C.12 UC4 Compute Hardware Accelerator Metrics

HW accelerators SHALL provide performance metrics to the VIM.

Necessary to measure performance, guarantee SLAs and determine limits for scaling the service up and down if necessary.


B.3.2 Hypervisor

Table 6.15: IVM Requirements – Hypervisor

Req. Id. | T-NOVA Use Case Alignment | Domain | Requirement Name | Requirement Description | Justification of Requirement

H.1 UC3, UC4 Hypervisor Compute Domain Metrics

The hypervisor SHALL provide metrics including status information on virtual compute resources.

The Orchestrator requires the information from the compute domain to make decisions regarding the instantiation of new VNFs and the placement of new virtualisation containers for new instances of VNF components. Metrics are also required to adjust existing services in order to maintain SLAs. Measurement collection is also performed within the Performance Measurement function provided by the Management and Orchestration functional block.

H.2 UC3, UC4 Hypervisor Network Domain Metrics

The hypervisor SHALL provide metrics and statistics on networking, arranged by port for each virtual network device configured.

The Orchestrator requires the information from the network domain to make decisions regarding the placement of new VNF services and adjust existing services to maintain SLAs (see above).

H.3 UC3 Hypervisor VM Portability

The T-NOVA hypervisor SHALL be able to unbind the VM from the hardware in order to allow the VM to be migrated to a different physical resource.

Required to ensure that VNFs are fully portable in the T-NOVA system, supporting various conditions such as scaling, resilience and maintenance.

H.4 UC3 Hypervisor VM Lifecycle Management

The T-NOVA hypervisor SHALL support instructions to provision, rescale, suspend or terminate currently deployed VMs, received via a Hypervisor-VIM interface.

This is a fundamental requirement in order to perform lifecycle management of network services and VNFs.
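The lifecycle instructions that H.4 names can be sketched as a tiny state machine; the states and transitions below are assumptions for illustration, not a defined hypervisor interface.

```python
# Hypothetical sketch of the H.4 VM lifecycle as a state machine.
# States and transition names are illustrative assumptions.
TRANSITIONS = {
    ("defined",   "provision"): "running",
    ("running",   "rescale"):   "running",     # resize in place
    ("running",   "suspend"):   "suspended",
    ("suspended", "provision"): "running",     # resume
    ("running",   "terminate"): "terminated",
    ("suspended", "terminate"): "terminated",
}

def apply_instruction(state, instruction):
    """Return the VM state after an instruction, or raise if invalid."""
    try:
        return TRANSITIONS[(state, instruction)]
    except KeyError:
        raise ValueError(f"{instruction!r} is not valid in state {state!r}")
```

Making invalid transitions fail loudly is one way a hypervisor can report request outcomes back to the VIM, which ties into the status reporting of H.7.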

H.5 UC2, UC3 Hypervisor Platform Features Awareness/Exposure

The hypervisor SHOULD be able to discover the existence of features and functionality provided by resources such as compute, accelerators, storage and networking, and to expose these features to the Orchestrator via the VIM.

Enhanced platform awareness by the hypervisor, and making this information available to the Orchestrator, will allow the Orchestrator to make appropriate placement decisions based on the required capabilities for the instantiation of VNFC service instances.

H.6 UC2 Hypervisor VM Low Power State

The hypervisor SHALL have the ability to put resources into a lower power state based on utilisation/SLA requirements and to expose the lower power state to the Orchestrator.

This capability may be required to meet SLA requirements on energy utilisation, service costs or time-of-day service settings.

H.7 UC2, UC6 Hypervisor Request Results Information

The hypervisor SHALL make available to the VIM the status of executed requests.

Required so the VIM and Orchestrator can maintain a consistent view of the infrastructure resources.


H.8 UC2, UC4 Hypervisor Performance – Resource overcommit

The hypervisor SHALL be able to provide mechanisms to avoid resource overcommit.

Required to improve performance and guarantee the requested SLAs.
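An overcommit-avoidance mechanism can be sketched as an admission check that rejects any placement whose aggregate demand would exceed physical capacity. `admits` and the host record are hypothetical illustrations; a ratio of 1.0 means no overcommit is tolerated.

```python
# Hypothetical admission check to avoid resource overcommit (H.8).
# Field names and the overcommit ratio are illustrative assumptions.
def admits(host, req_vcpus, req_mem_mb, overcommit_ratio=1.0):
    cpu_ok = host["used_vcpus"] + req_vcpus <= host["vcpus"] * overcommit_ratio
    mem_ok = host["used_mem_mb"] + req_mem_mb <= host["mem_mb"] * overcommit_ratio
    return cpu_ok and mem_ok

host = {"vcpus": 16, "used_vcpus": 14, "mem_mb": 65536, "used_mem_mb": 60000}
```

Raising the ratio above 1.0 would deliberately allow overcommit, trading SLA headroom for density; the requirement here asks for the mechanism to prevent it.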

H.9 UC4 Hypervisor Alarm/Error Publishing

The hypervisor SHALL publish alarms or error events via the VIM.

The Orchestrator requires this information to act on specific events. This is part of the Fault Management function that shall be provided by the Management and Orchestration functional block.

H.10 UC2, UC3 Hypervisor Security

The hypervisor SHALL be able to guarantee resource (instruction, memory, device access, network, storage) isolation in order to guarantee the performance of the VMs using these resources.

Necessary to ensure that network services do not interfere with each other and do not impact the performance or reliability of other services.

H.11 UC2, UC3 Hypervisor Network

The hypervisor SHALL be able to control network resources within the VM host and provide basic inter-VM traffic switching.

This is required to allow proper VNF communication for VNFs that are instantiated within the same compute node.

B.3.3 Networking

Table 6.16: IVM Requirements – Networking

Req. Id. | Use Case Alignment | Domain | Requirement Name | Requirement Description | Justification of Requirement

N.1 UC1-UC6 Networking Switching

The physical and virtual networking components of the T-NOVA NFVI-PoP SHALL support L2 and L3 connectivity.

Mandatory requirement.

N.2 UC1-UC6 Networking Virtualisation

Networking devices of the T-NOVA IVM SHOULD have the ability to support virtualised network overlays.

Required to support scalability within the T-NOVA system.

N.3 UC1-UC6 Networking QoS configuration

Networking components MAY allow the configuration of specific quality of service (QoS) parameters such as throughput, service differentiation and packet loss.

Allows connectivity between VNFCs with specific network QoS attributes.

N.4 UC1-UC6 Networking Tunnelling

The networking devices of the T-NOVA NFVI-PoP MAY support the creation of multiple distinct broadcast domains (VLANs) through one or more tunnelling protocols (e.g. STT, NVGRE, VXLAN) to allow the creation of virtual L2 networks interconnected within L3 networks (L2 over L3).

This is a requirement for the deployment of tenant networks across multiple PoPs and their isolation.

N.5 UC1-UC6 Networking Usage monitoring

The networking devices of the T-NOVA NFVI-PoP SHOULD provide monitoring mechanisms through commonly used APIs (e.g. SNMP).

Required for auditing, monitoring, troubleshooting, event/alarm detection and live optimisation.

N.6 UC1-UC6 Networking Configuration

The networking devices of the T-NOVA NFVI-PoP SHOULD allow configuration through common technologies and protocols such as NETCONF.

Required for (remote) configuration access.

N.7 UC1-UC6 Networking SDN

L2 physical and virtual networking devices of the T-NOVA NFVI-PoP SHOULD support the dynamic reconfiguration of the network.

Required to allow the dynamic configuration of the network at runtime. This requirement could potentially be fulfilled using an SDN approach.

N.8 UC1-UC6 Networking SDN Management

L2 networking devices of the T-NOVA NFVI-PoP SHOULD be managed by the VIM.

Required to ensure the control of the network components.

N.9 UC1-UC6 Networking Network slicing

Networking devices in the T-NOVA NFVI-PoP SHOULD allow programmability of their forwarding tables. Each flow SHOULD be handled and configured separately to enable network slicing.

The VIM should be able to create network slices composed of different networking devices, which are then configured independently.

N.10 UC1-UC6 Networking Scalability

The NFVI-PoP SHOULD be able to support a large number of connected servers, which in turn SHOULD be able to host a large number of concurrent VMs.

Required to support scalability and multi-tenancy.

N.11 UC1-UC6 Networking L2 Address space and traffic isolation

For L2 services, the infrastructure network of the T-NOVA NFVI-PoP MUST provide traffic and address space isolation between virtual networks.

Required to ensure the correct function of multi-tenancy in the T-NOVA system.

N.12 UC1-UC6 Networking L3 Address space and traffic isolation

For L3 services, the infrastructure network of the T-NOVA NFVI-PoP MUST provide traffic isolation between virtual networks. If address isolation is also required it can be achieved using various techniques:

• An encapsulation method to provide overlay networks (L2 or L3 service).
• The use of forwarding table partitioning mechanisms (L2 service).
• Applying policy control within the infrastructure network (L3 service).

Required to ensure the correct function of multi-tenancy in the T-NOVA system.

N.13 UC1-UC6 Networking Traffic steering

The network traffic between the VMs SHOULD be forwarded through the required path.

Required to support Service Function Chaining in T-NOVA.


N.14 UC2, UC3 Networking Data plane encapsulation

The NFVI-GW SHOULD handle traffic transiting to/coming from the WAN according to transport technologies commonly deployed in WAN domains (e.g. IEEE 802.1q, MPLS, Q-in-Q).

Encapsulation methods used to aggregate multi-tenant traffic should be supported by the NFVI-GW to guarantee interoperability with the WAN.

N.15 UC2, UC3 Networking QoS control

The NFVI-GW SHOULD be able to support QoS differentiation in the data plane.

The T-NOVA service should provide different SLA levels.


ANNEX C - TERMINOLOGY

This annex contains general terms used throughout the deliverable in association with all main T-NOVA architectural entities.

The terms marked with an asterisk (*) have been aligned with ETSI NFV ISG terminology [22].

C.1 General Terms

Table 6.17: General Terms

Name | Description

Virtualised Network Function (VNF)*: A virtualised (pure software-based) version of a network function.

Virtualised Network Function Component (VNFC)*: An independently manageable and virtualised component (e.g. a separate VM) of the VNF.

T-NOVA Network Service (NS): A network connectivity service enriched with in-network VNFs, as provided by the T-NOVA architecture.

NFV Infrastructure (NFVI)*: The totality of all hardware and software components which build up the environment in which VNFs are deployed.

C.2 Orchestration Domain

Table 6.18: Orchestration Domain Terminology

Name | Description

Orchestrator*: The highest-level infrastructure management entity which orchestrates network and IT management entities in order to compose and provision an end-to-end T-NOVA service.

Resources Orchestrator*: The Orchestrator functional entity which interacts with the infrastructure management plane in order to manage and monitor the IT and network resources assigned to a T-NOVA service.

NS Orchestrator*: The Orchestrator functional entity in charge of the NS lifecycle management (i.e. on-boarding, instantiation, scaling, update, termination), which coordinates all other entities in order to establish and manage a T-NOVA service.

VNF Manager*: The Orchestrator functional entity in charge of VNF lifecycle management (i.e. installation, instantiation, allocation and relocation of resources, scaling, termination).

NS Catalogue*: The Orchestrator entity which provides a repository of all the descriptors related to available T-NOVA services.


VNF Catalogue*: The Orchestrator entity which provides a repository with the descriptors of all available VNF Packages.

NS & VNF Instances Record*: The Orchestrator entity which provides a repository with information on all established T-NOVA services in terms of VNF instances (i.e. VNF records) and NS instances (i.e. NS records).

NF Store: The T-NOVA repository holding the images and the metadata of all available VNFs/VNFCs.

C.3 IVM Domain

Table 6.19: IVM Domain Terminology

Name | Description

Virtualised Infrastructure Management (VIM)*: The management entity which manages the virtualised (intra-NFVI-PoP) infrastructure based on instructions received from the Orchestrator.

WAN Infrastructure Connection Manager (WICM): The management entity which manages the transport network for interconnecting service endpoints and NFVI-PoPs, e.g. geographically dispersed DCs.

VNF Manager Agent*: The VIM functional entity which interfaces with the Orchestrator to expose VNF management capabilities.

Orchestrator Agent*: The VIM/WICM functional entity which interfaces with the Orchestrator to expose resource management capabilities.

Hypervisor Controller*: The VIM functional entity which controls the VIM hypervisors for VM instantiation and management.

Compute Controller*: The VIM functional entity which manages both physical resources and virtualised compute nodes.

Network Controller*: The VIM functional entity which instantiates and manages the virtual networks within the NFVI-PoP, as well as traffic steering.


LIST OF ACRONYMS

Acronym | Description

ACPI Advanced Configuration and Power Interface
API Application Programming Interface
AST Automatic Storage Tiering
BSS Business Support System
CAM Control, Administration and Monitoring
CAPEX Capital Expenditure
CLC Cloud Controller
CLI Command Line Interface
CP Control Plane
CPU Central Processing Unit
D2.1 Deliverable D2.1
D2.22 Deliverable D2.22
D2.42 Deliverable D2.42
DC Data Centre
DCN Data Centre Network
DMC DOVE Management Console
DOVE Distributed Overlay Virtual Ethernet
DP Data Plane
DPDK Data Plane Development Kit
DPI Deep Packet Inspection
E2E End-to-End
EG Experts Group
EM Element Manager
EN European Norm
EPT Extended Page Tables
ETSI European Telecommunications Standards Institute
EU End User
EVB Edge Virtual Bridge
FE Functional Entity
FN Future Networks


FP Function Provider
FPGA Field Programmable Gate Array
GS Global Standard
GPU Graphical Processing Unit
GW Gateway
HG Home Gateway
HW Hardware
I/O Input/Output
IaaS Infrastructure as a Service
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
INF Infrastructure
IP Internet Protocol
IP Infrastructure Provider
IPAM IP Address Management
IPsec IP Security
ISG Industry Specification Group
ISO International Organisation for Standardisation
IT Information Technology
ITU International Telecommunication Union
ITU-T ITU Telecommunication Standardization Sector
IVM Infrastructure Virtualisation and Management
KPI Key Performance Indicator
L2 Layer 2
L3 Layer 3
LAN Local Area Network
LINP Logically Isolated Network Partition
MAC Media Access Control
MAN Metro Area Network
MANO Management and Orchestration
MEF Metro Ethernet Forum
MIC Many Integrated Core
MPLS Multiprotocol Label Switching


NaaS Network as a Service
NC Network Controller
NETCONF Network Configuration Protocol
NF Network Function
NFaaS Network Functions-as-a-Service
NFV Network Functions Virtualisation
NFVI Network Functions Virtualisation Infrastructure
NFVIaaS Network Functions Virtualisation Infrastructure-as-a-Service
NFVI-PoP NFVI Point of Presence
NFVO Network Functions Virtualisation Orchestrator
NG-OSS Next Generation Operations Support System
NIC Network Interface Card
NIP Network Infrastructure Provider
NOC Network Operators Council
NS Network Service
NSD Network Service Descriptor
NV Network Virtualisation
NVGRE Network Virtualisation using Generic Routing Encapsulation
OAN Open Access Network
OCCI Open Cloud Computing Interface
ONF Open Networking Foundation
OPEX Operational Expenditure
OPN Open Platform for NFV
OS Operating System
OSI Open Systems Interconnection
OSS Operations Support System
PF Physical Function
PNF Physical Network Function
PoC Proof of Concept
PC Personal Computer
PER Performance & Portability Best Practices
QPI QuickPath Interconnect
QoS Quality of Service


RAM Random Access Memory
RAS Reliability, Availability and Serviceability
REST API Representational State Transfer API
RFC Request for Comments
RPC Remote Procedure Call
RTP Real-time Transport Protocol
SAN Storage Area Network
SBC Session Border Controller
SDN Software-Defined Networking
SDO Standards Development Organisation
SG13 Study Group 13
SLA Service Level Agreement
SNMP Simple Network Management Protocol
SOTA State-Of-The-Art
SP Service Provider
SR-IOV Single Root I/O Virtualisation
SSD Solid State Disk
STP Spanning Tree Protocol
STT Stateless Transport Tunnelling
SW Software
SWA Software Architecture
ToR Terms of Reference
ToR Top of Rack
TMF TeleManagement Forum
WICM WAN Infrastructure Connection Manager
TR Technical Report
TS Technical Standard
TSC Technical Steering Committee
T-NOVA Network Functions as-a-Service over Virtualised Infrastructures
UC Use Case
UML Unified Modelling Language
VEB Virtual Edge Bridge
VEPA Virtual Ethernet Port Aggregator


VIM Virtualised Infrastructure Manager
VL Virtual Link
VLD Virtual Link Descriptor
VLAN Virtual Local Area Network
VM Virtual Machine
VMM Virtual Machine Manager
VMX Virtual Machine Extensions
VN Virtual Network
VNF Virtual Network Function
VNFC Virtual Network Function Component
VNFD Virtual Network Function Descriptor
VNFFG Virtual Network Function Forwarding Graph
VNFFGD Virtual Network Function Forwarding Graph Descriptor
VNFM Virtual Network Function Manager
VRF Virtual Routing and Forwarding
VPN Virtual Private Network
VSAN Virtual Storage Area Network
vNIC Virtual Network Interface Card
VTN Virtual Tenant Network
WAN Wide Area Network
WG Working Group
WP Work Package
WP Working Procedures
XML Extensible Markup Language


REFERENCES

[1] V. Travassos. (2012). Virtualization Trends Trace Their Origins Back to the Mainframe. Available: http://ibmsystemsmag.com/mainframe/administrator/Virtualization/history_virtualization/

[2] M. Chiosi et al., "An Introduction, Benefits, Enablers, Challenges & Call for Action," 2012. Available: https://portal.etsi.org/nfv/nfv_white_paper.pdf

[3] T-NOVA, "Network Functions as-a-Service over Virtualised Infrastructures," 2014. Available: http://www.t-nova.eu/results/

[4] T-NOVA, "Overall System Architecture and Interfaces," 2014. Available: http://www.t-nova.eu/results/

[5] T-NOVA, "Specification of the Network Function Framework and T-NOVA Marketplace," 2014. Available: http://www.t-nova.eu/results/

[6] T-NOVA, "System Use Cases and Requirements," 2014. Available: http://www.t-nova.eu/results/

[7] ETSI, (2013). Network Functions Virtualisation (NFV); Virtualisation Requirements. Available: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/004/01.01.01_60/gs_nfv004v010101p.pdf

[8] ITU-T, "Requirements of network virtualization for future networks," ITU-T, 2012. Available: https://www.itu.int/rec/T-REC-Y.3011-201201-I/en

[9] ETSI, (2014). Network Functions Virtualisation (NFV); Management and Orchestration. Available: http://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_nfv-man001v010101p.pdf

[10] T-NOVA, "Specification of the Network Function Framework and Marketplace," 2015. Available: http://www.t-nova.eu/results/

[11] K. Ahmad, Sourcebook of ATM and IP Internetworking: IEEE Press.

[12] ETSI, (2014). Network Functions Virtualisation (NFV); Infrastructure Overview. Available: http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_nfv-inf001v010101p.pdf

[13] ETSI, (2014). Network Functions Virtualisation (NFV); Infrastructure; Compute Domain. Available: http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/003/01.01.01_60/gs_NFV-INF003v010101p.pdf

[14] ETSI, (2015). Network Functions Virtualisation (NFV); Infrastructure; Hypervisor Domain - ETSI GS NFV-INF 004 V1.1.1. Available: http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/004/01.01.01_60/gs_nfv-inf004v010101p.pdf

[15] ETSI, (2014). Network Functions Virtualisation (NFV); Infrastructure Network Domain. Available: http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/005/01.01.01_60/gs_NFV-INF005v010101p.pdf

[16] ETSI, (2014). Network Functions Virtualisation (NFV); Infrastructure; Methodology to describe Interfaces and Abstractions. Available: http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/007/01.01.01_60/gs_NFV-INF007v010101p.pdf

[17] ETSI, (2015). Network Functions Virtualisation (NFV); Resiliency Requirements. Available: http://www.etsi.org/deliver/etsi_gs/NFV-REL/001_099/001/01.01.01_60/gs_nfv-rel001v010101p.pdf


[18] ETSI, (2014). Network Functions Virtualisation (NFV); Virtual Network Functions Architecture. Available: http://www.etsi.org/deliver/etsi_gs/NFV-SWA/001_099/001/01.01.01_60/gs_nfv-swa001v010101p.pdf

[19] ONF, "OpenFlow-enabled Transport SDN," 2014. Available:

[20] T. Kourlas, (2014). SDN for IP/Optical Transport Networks. Available: http://www.commtechshow.com/east/wp-content/uploads/ALU-SDN-for-IPOptical-Transport-Networks.pdf

[21] U. Hölzle, "OpenFlow @ Google," in Open Networking Summit, Santa Clara, 15-17 April, 2012.

[22] ETSI, (2013). Network Functions Virtualisation (NFV); Terminology for Main Concepts in NFV. Available: http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.01.01_60/gs_nfv003v010101p.pdf

[23] B. Landfeldt, P. Sookavatana, and A. Seneviratne, "The case for a hybrid passive/active network monitoring scheme in the wireless Internet," presented at the IEEE International Conference on Networks (ICON 2000), Singapore, 2000.