
Implementation of Converged Control Network on Avaya Aura® Communication Manager 6.2

White Paper

Abstract

The purpose of this white paper is to: Outline the principles that support Avaya’s decision to move away from the restrictive control network assignment rules required for Avaya Aura® Communication Manager prior to release 6.0. The previous control network requirements were developed in the infancy of Communication Manager’s move to the Linux operating system and its move into converged networking. Data networking has changed greatly since that time and Avaya is adapting Communication Manager’s control network architecture accordingly.

Provide guidance on addressing reliability and security concerns when migrating from a private control network to the Converged Control Network, that is, the control network connected to the enterprise network.

Provide a high-level overview of the process and procedures required to convert a CM control network from a private network (Control Network A (CNA) / Control Network B (CNB)) configuration to a design that employs the enterprise network for control networking.

Provide implementation and installation recommendations when installing the Converged Control Network of a CM 6.X system on an enterprise network/WAN.

Page 2: Bonding Cm6
Page 3: Bonding Cm6

Converged Control Network -3- Issue 1.0

1 INTRODUCTION
1.1 AVAYA AURA COMMUNICATION MANAGER CONTROL NETWORK
2 GLOSSARY
3 REASONS FOR MOVING TO A CONVERGED CONTROL NETWORK
3.1 BACKGROUND
3.2 EVOLUTION TO A CONVERGED NETWORK SOLUTION
4 SYSTEM AVAILABILITY AND RELIABILITY
4.1 ENTERPRISE NETWORK RELIABILITY
4.2 ENTERPRISE L3 CORE FOCUSED ARCHITECTURES
4.3 ENTERPRISE L2 CORE FOCUSED ARCHITECTURES
4.4 PROCESSOR ETHERNET
4.5 NIC BONDING
4.6 DESIGN FOR RELIABLE NETWORK AND MEETING 5-NINES AVAILABILITY
4.6.1 Examples of 5 nines enterprise network configuration
4.6.2 Example 1: availability analysis for Enterprise L2 Core Port Networks and H.248 Media Gateways
5 SECURITY CONCERNS
5.1 ENTERPRISE NETWORK IT INFRASTRUCTURE BEST PRACTICES
5.1.1 Firewalls
5.1.2 Control Network Encryption
5.1.3 Virtual Private Networks
5.2 CONSULTING AND SECURITY REVIEW
6 IMPACTS
6.1.1 WAN Characteristics and Associated Concerns
6.1.2 Impact on Avaya Aura® Communication Manager Converged Systems
6.2 CONVERGED CONTROL NETWORK AND THE IPSI SOCKET SANITY TIMER
6.3 SURVIVABLE CORE SERVER AND CONVERGED CONTROL NETWORK
7 FAILURE SCENARIOS
7.1 LAYER 2 SWITCH TO CM CALL CONTROL SERVER LOSS OF CABLE
7.1.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways
7.1.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways
7.2 LAYER 2 SWITCH FAILURE
7.2.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways
7.2.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways
7.3 LAYER 2 SWITCH INSANITY (L1 CONNECTIVITY MAINTAINED, NO L2 TRAFFIC)
7.3.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways
7.3.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways
7.4 LAYER 3 SWITCH FAILURE
7.4.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways
7.4.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways
8 MIGRATION PATHS
8.1 MULTI-CONNECT TO CONVERGED CONTROL NETWORK
8.2 IP CONNECT TO CONVERGED CONTROL NETWORK
9 NETWORK ENGINEERING GUIDELINES
9.1 CONTROL BANDWIDTH REQUIREMENTS ON THE ENTERPRISE NETWORK
9.2 NETWORK TOLERANCES
9.2.1 End-to-End Roundtrip Delay (Jitter, Loss)
9.2.2 Network Convergence Time and Routing Protocols
9.3 QOS REQUIREMENTS
9.3.1 Avaya Aura Communications Manager 802.1p/Q Tagging and DSCP
9.4 NETWORK CONFIGURATION PLANNING
9.4.1 Converged Control Network Configurations
9.4.2 DHCP vs. Static Administration of IPSI
9.4.3 Static Routes
10 INSTALLATION AND ADMINISTRATION
11 REFERENCES
12 APPENDIX A: DUPLICATED SERVER NIC BONDING
13 APPENDIX B: CONTROL NETWORK CONSOLIDATION: CM5.2.1 -> IPSI STATIC ADDRESSING
14 APPENDIX C: IP SERVER INTERFACE BOARD

List of Figures

Figure 1 CM Control Network A/B Configuration
Figure 2 CM Converged Control Network Configuration
Figure 3 Availability Estimate for Enterprise L2 Core Port Networks and H.248 Media Gateways
Figure 4 Availability Estimate for Remote Port Networks and H.248 Gateways
Figure 5 LAN / WAN Connected Port Networks and Media Gateway
Figure 6 Loss of Cable Connectivity Between Layer 2 Switch and CM
Figure 7 Layer 2 Switch Failure
Figure 8 Layer 2 Switch Insanity
Figure 9 Layer 3 Switch Failure
Figure 10 Typical Communications Manager 5.X IP-Connect Network Configuration
Figure 11 Typical Communications Manager 6.X IP-Connect Network Configuration
Figure 12 S8800 Physical Layout on Back Panel


Implementation of Communication Manager Converged Control Network on Enterprise Network

1 INTRODUCTION

The purpose of this white paper is to: Outline the principles that support Avaya’s decision to move away from the restrictive control network assignment rules required for Avaya Aura® Communication Manager prior to release 6.0. The previous control network requirements were developed in the infancy of Communication Manager’s move to the Linux operating system and its move into converged networking. Data networking has changed greatly since that time and Avaya is adapting Communication Manager’s control network architecture accordingly.

Provide guidance on addressing reliability and security concerns when migrating from a private control network to the Converged Control Network, that is, the control network connected to the enterprise network.

Provide a high-level overview of the process and procedures required to convert a CM control network from a private network (Control Network A (CNA) / Control Network B (CNB)) configuration to a design that employs the enterprise network for control networking.

Provide implementation and installation recommendations when installing the Converged Control Network of a CM 6.X system on an enterprise network/WAN. Sections dedicated to these topics include:

System Availability
Performance Impact
Network Engineering Guidelines
Security

Prior to release 6.0, CM imposed restrictions on how NICs were assigned to the LANs connected to the IPSIs in port networks. There were two control networks, CNA and CNB, and these were expected to be dedicated LANs and were required to be private in cases where CM provided DHCP service for its IPSIs. These initial requirements were established approximately fifteen years ago when CM first moved to Linux in the form of the S8700 server. Avaya felt that it needed to exercise tight control over these networks. At that time, line speeds were slower, autosensing ports were not commonplace, and hubs were still widely in use.

Over the years, Avaya has evolved CM’s control network architecture to address the needs of actual customer configurations. In CM 2.0 Avaya introduced the Control Network on Customer LAN (CNOCL) option, which allowed the use of a routed control network with publicly addressed IPSIs. There still remained separate CNA and CNB connections on the CM servers. With CM 3.0, Avaya introduced a further refinement to the control architecture, called “Control Network C” (CNC), for port networks at remote sites. CNC was a hybrid architecture with a private control network at the main site and public control networking to the IPSIs at the remote sites using the CM servers’ interface to the corporate LAN. This architecture had limitations, though; a survivable server at a remote site could not control IPSIs at the main site.

In release 6.0 and beyond, Communication Manager extends this natural evolution from dedicated to shared and from private to public and uses a single network interface for all Control Network communication links to Port Networks and H.248 Gateways.


1.1 Avaya Aura Communication Manager Control Network

The Control Network is the network that carries control messages between the CM call control servers, the IP Server Interface (IPSI) Boards (TN2312) in port networks, and H.248 Gateways. The IPSI Port Network Control can be duplicated – providing a redundant IP server interface A and an IP server interface B in active/standby mode. H.248 gateways can also support redundant Ethernet interfaces in an active/standby mode. Each CM server has a dedicated Ethernet Interface for control network traffic. This interface can be duplicated using NIC bonding for additional reliability in the case of a layer 1 (physical) switch outage or server chipset failure.

2 GLOSSARY

Term Definition

CM Communication Manager (also Avaya Aura® Communication Manager)

CNA / CNB Control Network A / Control Network B

DoS Denial of Service

DHCP Dynamic Host Configuration Protocol (DHCP) is an auto configuration protocol used on IP networks.

DSCP DiffServ uses the 6-bit Differentiated Services Code Point (DSCP) field in the header of IP packets for packet classification purposes.

H.248 Protocol used to communicate with gateway devices

HSRP / VRRP Hot Standby Router Protocol (HSRP) is a Cisco proprietary redundancy protocol for establishing a fault-tolerant default gateway, and has been described in detail in RFC 2281. The Virtual Router Redundancy Protocol (VRRP) is a standards-based alternative to HSRP defined in IETF standard RFC 3768. The two technologies are similar in concept, but not compatible.

IPSI IP Server Interface

L2 Layer 2 device: a network device that uses the MAC address to forward traffic.

L3 Layer 3 device: a network device that uses Layer 3 information, along with various cut-through techniques (inspecting the first packet at Layer 3 and subsequent packets at Layer 2), to forward traffic at very high speeds.

LLQ Low Latency Queuing (LLQ) allows the user to define a strict priority queue for voice traffic and a weighted fair queue for each of the other classes of traffic.

MAC address Media Access Control address

MII The Media Independent Interface (MII) is a standard interface used to connect a Fast Ethernet (i.e. 100 Mbit/s) MAC-block to a PHY chip

MTBF Mean Time Between Failure

MTTR Mean Time To Restore

OSPF Open Shortest Path First. This is an IP routing protocol that uses adaptive techniques to compute the shortest path tree for each route. OSPF uses link state information from available routes to construct a topology map of the network. OSPF is constantly monitoring the network to detect changes in the topology, such as link failures, and converges on a new routing structure to route IP packets using the most advantageous route.

PE Processor Ethernet

QoS Quality of Service

SLA Service Level Agreement

TTS The “IP Endpoint Time-to-Service (TTS)” feature greatly reduces the time for IP endpoints to recover after long network outages.

VLAN Virtual LAN

VPN Virtual Private Network

WAN Wide Area Network

3 REASONS FOR MOVING TO A CONVERGED CONTROL NETWORK

3.1 Background

Prior to release 6.0, a CM duplicated (duplex) server configuration assigned five Ethernet interfaces to each of the two servers.

1. Enterprise network administration and reporting access
2. Duplication link
3. Local Services access
4. Control Network A (assigned to a private LAN)
5. Control Network B (assigned to a private LAN)

The following diagram illustrates the pre-CM 6.0 configuration with Control Network A (CNA) and Control Network B (CNB).


Figure 1 CM Control Network A/B Configuration

The purpose of Control Network A and B (CNA/CNB) was to physically isolate the CM control network functions on private, segregated LANs for better performance and security. Customer devices were not permitted on these LANs and the isolation was enforced by the CM firewall rules.

Over time, Communication Manager allowed customers to connect a subset of their IPSIs to their enterprise network. To support this control network variant, CM was modified to allow the firewall scripts to open the IPSI-related ports to the Ethernet interface on the enterprise network. These configurations, as described previously, became known as “Control Network C” and “Control Network over Customer LAN”.

CM 6.0 removed support for CNA and CNB as well as the concept of a separate Control Network C. All control network communication is routed through a single interface, allowing customers the flexibility to engineer the CM control network interface to operate on the enterprise network. The reliability formerly provided by dedicating a private LAN to the CM control network is now handled by engineering the enterprise network for security and redundancy, as well as by handling error and failure detection and recovery within the enterprise network. This evolutionary migration of CM’s control network architecture is in keeping with that of other mission-critical business applications, including CRM, MS-Exchange, Oracle, Radius, etc.

3.2 Evolution to a Converged Network Solution

In release 6.0, a CM duplicated (duplex) server configuration has three Ethernet interfaces assigned to each of the two servers.

1. Administration and reporting access, PE, and Converged Control Network (with NIC bonding there are 2 physical NICs on this interface)


2. Duplication link
3. Local Services access

The following diagram illustrates the CM 6.X configuration with a Converged Control Network.

Figure 2 CM Converged Control Network Configuration

The following principles underlie the movement of Avaya CM to a Converged Control Network carried entirely on the customer enterprise network:


With a properly engineered network, system availability maintains the 99.999% availability offered by the more complex CNA/CNB/CNC architecture. A full availability analysis is in section 4.6.

Replaces the prescriptive nature of the private control network architecture with an architecture that allows customers the flexibility to design the control network to meet their specific system security and availability needs.

By employing industry network design best practices, implementing the control network on an Enterprise Network can deliver security and availability commensurate with levels provided by the private dedicated control network.

The intent of this change is to reduce the complexity of the control network configuration by eliminating the physical isolation and separate network hardware elements.

4 SYSTEM AVAILABILITY AND RELIABILITY

4.1 Enterprise Network Reliability

The migration of the Control Network from a private dedicated network to the enterprise network requires best-practices network engineering and design to provide a highly reliable link between the CM call control servers, the port networks, and the H.248 gateways. Network design and ongoing monitoring consistent with industry best practices are critical for ensuring the CM solution delivers the desired availability profile. Assurance of network integrity and quality can only be accomplished by the enterprise design.

In the Converged Control Network configuration, the control signaling traffic between the CM call control servers and the IPSI board(s) in port networks, as well as the signaling between the CM call control servers and the H.248 Gateways, shares resources with other enterprise data network processes and activities. With proper engineering and redundancy, the enterprise network has the potential to deliver a high level of availability. The enterprise network availability depends on the configuration of the network topology with redundancy provisions. Protocols should be employed that immediately identify link or device failures, monitor the health of the redundant components, and check for bandwidth availability.

4.2 Enterprise L3 Core Focused Architectures

In the L3 Core architecture for Avaya Aura® Communication Manager, a large portion of the port networks and H.248 gateways is distributed through the enterprise network and outside the L2 domain of the CM call controllers. This allows complete flexibility in engineering the enterprise network “cloud” to the level of availability required by the customer. The recovery methods and timing within the enterprise network become critical to the availability of the system. The “IPSI Socket Sanity Timeout” value setting on the CM call controllers should be closely coordinated with the recovery timing of the enterprise network.

4.3 Enterprise L2 Core Focused Architectures

The L2 Core focused architecture of the Avaya Aura® Communication Manager has a large portion of the port networks and H.248 gateways administered in the same L2 domain as the main servers. Port networks with duplicated IPSIs should have IPSI A connected to a different L2 switch than IPSI B. H.248 gateways with duplicated Ethernets should also connect the two Ethernet links to different L2 switches.


4.4 Processor Ethernet

The Processor Ethernet (PE) interface in Communication Manager Release 6.X is the same interface as the Converged Network interface. Thus, all communication to Port Networks, H.248 Gateways, SIP and H.323 trunks, and endpoints that does not go through CLAN boards is through this same interface. This document is primarily concerned with the Converged Control Network and not endpoint and trunk recovery, security, and availability.

4.5 NIC Bonding

Avaya Aura® Communication Manager 6.X servers (except the S8300D) provide the ability to administer NIC bonding between two physical Ethernet ports on the server. NIC bonding creates an active/standby pair of Ethernet interfaces on each server. These two interfaces should be connected to two different L2 switches to provide maximum availability. The server detects a loss of L1 connectivity (MII) on the active interface and switches traffic transparently within 100 milliseconds (administrable on the NIC bonding web interface; see Appendix A) to the standby interface, making that physical interface active.

The choice of which interfaces to include in the bond should take into account that CM call control server interfaces are internally paired in dual-port NIC devices (eth0/eth1, eth2/eth3, eth4/eth5), and the chosen interfaces should be from different NIC devices (e.g. eth2 and eth4 bonded). Bonding eth0 with eth1, for instance, runs the risk that a failure of this single NIC device would leave both interfaces of the bond inoperable.
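As a practical illustration of the pairing caveat above, the short Python sketch below checks whether two candidate bond members sit on the same physical NIC device by comparing their PCI bus/device addresses in the Linux sysfs tree. This is a generic, hypothetical check written for this paper, not an Avaya tool, and it assumes the typical /sys/class/net/<iface>/device layout found on Linux servers.

    import os
    import sys

    def pci_slot(iface):
        """Return the PCI bus/device portion (e.g. '0000:02:00') for an interface.
        Two ports on one dual-port NIC share the bus/device and differ only in the
        trailing PCI function number."""
        device = os.path.realpath(f"/sys/class/net/{iface}/device")
        return os.path.basename(device).rsplit(".", 1)[0]

    def check_bond_members(a, b):
        slot_a, slot_b = pci_slot(a), pci_slot(b)
        if slot_a == slot_b:
            print(f"WARNING: {a} and {b} share PCI device {slot_a}; "
                  "a single NIC failure would take down both bond members.")
        else:
            print(f"OK: {a} ({slot_a}) and {b} ({slot_b}) are on separate NIC devices.")

    if __name__ == "__main__":
        # Interface names default to the example pairing suggested above.
        ifaces = sys.argv[1:3] if len(sys.argv) >= 3 else ["eth2", "eth4"]
        check_bond_members(ifaces[0], ifaces[1])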

4.6 Design for Reliable Network and Meeting 5-Nines Availability

Availability is a measure of “uptime,” that is, the percentage of time a system is performing its useful function. Traditionally, “real-time” applications such as voice have demanded higher availability than applications based on “store and forward” technology. Converged networks can transport both real-time and data traffic effectively by classifying real-time packets and giving preference to those packets to minimize latency and jitter. Today’s VoIP solutions are distributed over devices throughout an enterprise network. High availability communications thus require that switches, firewalls, session border controllers, and/or routers have low latency for all real-time packets. Because VoIP is packet-switched (versus circuit-switched), with careful network engineering it is easier to create parallel structures that eliminate single points of failure.

Everyone has high expectations for voice service availability. A common goal is to consistently achieve 99.999%+ availability, or conversely unavailability of no more than 5.26 minutes per year. In recent years, in recognition of growing customer expectations and as the cost of providing bandwidth has dropped, vendors of data networking systems have developed ways of improving availability. Examples include improving the reliability of the data infrastructure through the use of redundant devices, redundant links, failover protocols, server “clustering”, and options of redundancy for potentially critical sub-assemblies, such as power supplies and processing elements.

Allowing Avaya customers to flexibly assign Communication Manager control network interfaces to their enterprise network, without physically isolating the interfaces from the rest of the network hardware elements, does not adversely impact achieving 5-nines or better service availability within the LAN. The CM 6.0 NIC bonding feature adds yet another level of redundancy to duplicated Communication Manager server platforms. The ability of two physical NICs to act in parallel (one active, one standby) allows the customer to provide redundancy in the network connectivity between the server pair and the enterprise network. This feature provides hardware redundancy at the connection between the active server and the L2 switches supporting the connection between the server and Port Networks or Media Gateways. NIC bonding essentially acts like port redundancy in an Ethernet switch. Two NICs can be bonded such that only one port is active. If that port fails or the link is broken, the other port becomes active and takes over all sessions. No particular configuration is needed in the upstream Ethernet switch.

4.6.1 Examples of 5 nines enterprise network configuration

The availability assessments listed in this section are based on availability prediction modeling. Simplex servers’ and media gateways’ availability values are based on box-level calculations. These calculations involve hardware reliability as well as software resiliency and maintenance capabilities. A Markov failure transition model is used in evaluating the availability of components that work in parallel. The Avaya Communication Manager software architecture is designed to detect and correct errors as they occur and to quickly isolate a fault to a replaceable sub-assembly. Availability, defined from an end user perspective, is the percentage of time the system or network is operational. Availability is defined mathematically as the following ratio:

Availability = MTBF / (MTBF+MTTR)

where MTBF = Mean Time Between Failures/Outages and MTTR = Mean Time To Restore Service. This ratio is commonly expressed as a percentage of uptime. It is clear from this equation that as the time to restore service approaches zero, the system availability approaches 100%. A system rated at availability = 99.999% is down less than 0.001% of the time, that is, no more than 5.26 minutes over the course of a year (a brief worked example follows the list below). The fundamental constraints in designing a data network that supports five-nines or better availability are reliable components, careful network configuration, and security considerations. The key features of a highly available solution are:

Reliable Components: the primary feature of a highly available system is the high reliability of its component parts. Hardware subsystems must be designed for reliability, using parts with known or reliably estimated MTBF values.

Redundancy: where the probability of failure of a component cannot be made negligible, redundant components are required, and repair or replacement of any components that do fail must not impact the existing service level.

Failure Detection/Fault isolation/service recovery: failure detection and service recovery must be accomplished without operator intervention i.e. the system must be able to programmatically adjust operation to account for a fault and take action to correct it. This requires some level of redundancy and hardware/software capable of monitoring system operation, analyzing faults, and correcting them.
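As a brief worked illustration of the availability ratio and the effect of redundancy, the sketch below computes availability from MTBF and MTTR and the corresponding downtime per year. The component figures in it are hypothetical placeholders, not Avaya-published values, and the parallel formula assumes independent failures.

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def availability(mtbf_hours, mttr_hours):
        """Availability = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def downtime_minutes_per_year(avail):
        return (1.0 - avail) * MINUTES_PER_YEAR

    def parallel_pair(avail):
        """Two redundant components: the pair is down only when both are down
        (independent failures assumed)."""
        return 1.0 - (1.0 - avail) ** 2

    if __name__ == "__main__":
        # Hypothetical component figures, not Avaya-published values.
        a = availability(mtbf_hours=10000, mttr_hours=4)
        print(f"single component: {a:.6%}, {downtime_minutes_per_year(a):.1f} min/yr")
        p = parallel_pair(a)
        print(f"redundant pair  : {p:.6%}, {downtime_minutes_per_year(p):.2f} min/yr")
        # 99.999% availability corresponds to roughly 5.26 minutes of downtime per year.
        print(f"five nines      : {downtime_minutes_per_year(0.99999):.2f} min/yr")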

Note that user errors and network support processes can account for significant downtime in any IT infrastructure. The amount of downtime due to a user error or lack of process is difficult to determine theoretically, and this downtime is not included in the availability calculations in the examples of this section. It is expected that, while designing the enterprise network, these areas and their impacts on the IP telephony environment are identified and resolved proactively, thus isolating such faults from impacting the service availability seen by the end user. In the following case studies, examples of high availability networks are presented:


Every location, whether Core or Branch, is considered for a full high-availability implementation

Duplicated hardware and/or duplicate links are used where possible

Rapid spanning tree is used with either port redundancy or link aggregation

Redundant router protocols are used (HSRP and/or VRRP)

Links to the WAN supplied by two different ISP/Telco providers

4.6.2 Example 1: availability analysis for Enterprise L2 Core Port Networks and H.248 Media Gateways

In the event of a loss of active network connectivity at the core, server-level redundancy provides the following system recovery strategies:

In the event of a hung process or a pending hardware failure in the active server, there will be a server interchange. The detection and failover processes take less than 20 seconds. Stable calls and calls in queue are reliably preserved across a server interchange; calls in a transitional state, however, may be affected by a server interchange.

In the event of a physical failure in the active connection between the active server and the L2 switch there will be a NIC interchange on the active server (assuming NIC Bonding has been implemented). With this interchange, control traffic will be routed through the new active NIC. The failure event and the re-routing of the control traffic are transparent to the end users. Calls in transition, stable calls, and calls in queue are reliably preserved across the NIC interchange.

The availability assessment for this configuration is depicted in the following diagram.

Redundancy at every level over the customer Core enterprise network provides better than 5-9s availability. (Availability: 99.9999% x 99.9999% x 99.9995% ≈ 99.9993%)
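The parenthetical estimate above is a serial product of element availabilities: the control path is up only when every element in the chain is up. The small sketch below reproduces the arithmetic; how the three figures map to specific network elements is an illustrative assumption only.

    from math import prod

    def chain_availability(element_availabilities):
        """A control path is up only when every element in the chain is up
        (independent failures assumed)."""
        return prod(element_availabilities)

    if __name__ == "__main__":
        # Figures from the estimate above; the mapping to access, core, and
        # server elements is illustrative.
        a = chain_availability([0.999999, 0.999999, 0.999995])
        print(f"end-to-end availability: {a:.4%}")   # about 99.9993%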

Figure 3 Availability Estimate for Enterprise L2 Core Port Networks and H.248 Media Gateways

Availability Analysis for WAN Remote Port Networks and H.248 Media Gateways


Core server redundancy with a Survivable Remote Processor deployed over a redundant enterprise network provides recovery strategies at both the core and the remote location.

Core server interchanges or NIC interchanges are transparent to the IPSIs, H.248 Media Gateways, and phones located at the remote location.

Failures of L2 switches or L3 routers located at the core are transparent to the IPSIs, H.248 Media Gateways, and phones located at the remote location.

In the event of WAN link outages that are shorter than the “IPSI Socket Sanity Timeout” (the default is 15 seconds), there will be no impact to end users. These outages can be the result of short, intermittent WAN failures or WAN route flaps. Such events stay transparent to transient and stable calls. When the outage duration is longer than the “IPSI Socket Sanity Timeout”, the behavior of IPSI-controlled Port Networks is as described in Section 6.2. Stable calls survive while H.248 Media Gateways search for and register to the Survivable Remote Processor.


A Remote Survivable Processor, together with redundancy at the core and distribution level over the customer’s remote enterprise network, provides 99.99% to 99.999% availability at the remote location.

Figure 4 Availability Estimate for Remote Port Networks and H.248 Gateways


5 SECURITY CONCERNS

A dedicated control network can eliminate some security concerns from that portion of the control network, at the cost of reduced flexibility and increased equipment expense. In a Converged Control Network this protection is no longer inherently provided by isolation and must be provided by appropriately provisioning the enterprise network/WAN infrastructure. Best-practice designs and implementation methods from most network equipment manufacturers and IT infrastructure teams can provision the Converged Control Network with a level of protection similar to that previously provided by simple isolation. A more complete discussion of Avaya security guidelines can be found in the Avaya Aura® Solution Design Considerations and Guidelines.

5.1 Enterprise Network IT Infrastructure Best Practices

The security of the Converged Control Network will depend on the implementation of the enterprise network/WAN Network infrastructure to meet the Best Practices of the network equipment providers and IT management. The use of firewalls, VLANS, Access Control Lists, etc. to meet industry and customer best practices will be the determining factors in building a secure and highly available Avaya Aura Communications Manager Converged Control Network.

5.1.1 Firewalls

Follow industry IT infrastructure best practices engineering guidelines to implement firewalls to protect the control network from attacks and non-control network related traffic. Implementation of firewalls or Access Control Lists (ACLs) to protect the system from attacks and superfluous traffic is highly recommended.

Firewalls should be placed between the enterprise network and the control network segments to protect the CM call control servers against network attacks.

Firewalls should be implemented to prevent unauthorized access to the CM call control servers from the enterprise network in case of a compromise of the enterprise network.

Firewalls should be implemented to prevent unauthorized access to the enterprise network from the CM call control servers.

Firewalls should enforce protection rules that prevent the propagation of ANY traffic that is not needed for VoIP communications. For a list of recommended settings in this area, consult the Avaya Aura® Solution Design Considerations and Guidelines. The current ports that are used by the Avaya Aura Communication Manager can be requested from Avaya sales or sales support. The document “Port Usage Matrix: Updated for CM 6.2” identifies the ports that must be opened for IPSI controlled port networks and H.248 gateways.

Care must be taken that the Firewalls/Routers are suitably sized so that the end-to-end delay is not severely impacted [Refer to section 9.2.1 for details on end-to-end delay].

Access Control Lists can be put in place to ensure that only appropriate traffic from the IPSIs and H.248 Gateways is routed to the control network interfaces of the CM server and vice versa.

5.1.2 Control Network Encryption

The Converged Control Network is implemented on the enterprise network, and it is strongly recommended that encryption be enabled on the signaling links between the CM servers, the IPSIs, and the H.248 gateways. Even if the CM server to IPSI/H.248 gateway links are routed over private VLANs, it is highly recommended that encryption be enabled. The primary reason to encrypt the Control Network link is to provide protection against disclosure of sensitive data transmitted over the link. Examples of sensitive data that are protected when the Control Network link is encrypted include feature access codes, endpoint registration information, and distributed media encryption keys. The procedure to enable encryption is described in [Appendix C: IP Server Interface Board] of this document.

Enabling server-IPSI socket encryption imposes a 4-14% reduction on the Busy Hour Call Capacity performance of the system. The extent of the reduction depends on the type of call mix that the system is carrying and the CPU capacity of the CM server. The call mix is dependent upon several factors: IP calls or DCP calls, General Business or Call Center type traffic.
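To make the capacity impact concrete, the snippet below applies the quoted 4-14% range to a baseline figure; the baseline itself is a hypothetical placeholder, not a published capacity value.

    def bhcc_with_encryption(baseline_bhcc, reduction):
        """Apply a fractional Busy Hour Call Capacity reduction (0.04 to 0.14)."""
        return baseline_bhcc * (1.0 - reduction)

    if __name__ == "__main__":
        baseline = 100000  # hypothetical busy-hour call completions, not a published figure
        for r in (0.04, 0.14):
            print(f"{r:.0%} reduction: {bhcc_with_encryption(baseline, r):,.0f} BHCC")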

5.1.3 Virtual Private Networks

Follow industry IT infrastructure best practices engineering guidelines to engineer bandwidth, availability, and redundancy to LAN/WAN remote port networks and gateways. Efforts should be made to isolate all of the control traffic between the CM call control servers and the remote port networks or H.248 gateways.

Redundant external VPN devices can be implemented to direct control traffic to one of the VPN devices for encryption before it is transmitted across the network and is received and decrypted by the second VPN. This approach does not protect against network flooding in the WAN, but it does ensure that the control traffic is secure and authenticated between the CM servers and a remote port network.

VPNs can also be used to provide protection against unauthorized access to administrative services on the CM server.

5.2 Consulting and Security Review

See page 111 of the Avaya Aura® Solution Deployment; Release 6.1; 03-603978; Issue 1; June 2011.

6 IMPACTS

Impacts to various elements of the system are presented in this section. Refer to Avaya Aura™ Communication Manager Software Based Platforms High Availability Solutions for details on system “availability” of Avaya Aura® Communication Manager under different configurations.

6.1.1 WAN Characteristics and Associated Concerns

The Converged Control Network also allows for WAN-remoted IPSI Port Networks and H.248 Gateways. The LAN segments of an enterprise network are typically owned and controlled by the enterprise, which means the enterprise has stricter control over these facilities. This is not always true for WAN links. Many WAN links are leased through service providers, which means the enterprise has limited control over them. The enterprise can specify contractual requirements, commonly called Service Level Agreements, regarding the reliability of the WAN links as a condition of the lease.


Figure 5 LAN / WAN Connected Port Networks and Media Gateway

Due to the high cost of WAN links, whether they are leased through a service provider or owned by the enterprise, they are also generally not as redundant as enterprise LAN segments. It is common to see a higher, typically much higher, degree of redundancy in LAN segments than in WAN links. WAN link redundancy typically employs an N+1 scheme. The enterprise acquires as many links as it needs, plus one more link for redundancy. Related to these two points, here are some things that can and often do happen on WAN links:

Short, intermittent outages: These can last a few seconds or more, but are typically not long enough to be noticed as a problem with traditional data applications, although the router will notice the outage. These can happen as infrequently as once every couple weeks, or as frequently as several times a day. It depends on the service provider, the physical facilities, and the WAN protocols used.

Route flaps: These are cases where a WAN link goes down, comes back, goes down, comes back, etc. in such a manner that the routers continually change their routing tables to compensate for complete outages and recoveries. This is a severe problem in IP networking, and the effects are both unpredictable and difficult to quantify.

Prolonged Outages: First, WAN link outages are more common and frequent than LAN link outages. Second, it is more difficult to compensate for WAN link outages because of the lower level of redundancy in WAN links. Third, WAN links are inherently slower than LAN segments and incur more latency. This is not only due to the limited bandwidth, relative to LAN segments, but also the process required to serialize packets and put them on the WAN link. Typically this is not a huge issue. The incremental increase in delay is not enough to cause problems under normal operation. But if a partial WAN outage (only some of the links go down) causes all the traffic to be carried by the remaining links, it results in heavy congestion on the remaining links, which further increases delay and packet loss. Even with QoS mechanisms in place, the added stress can create unstable conditions that may affect real-time applications, especially ones that are very sensitive to network inconsistencies.

Troubleshooting: The wide range of possible problems on WAN links is difficult to troubleshoot. Often the service provider must be involved in the troubleshooting. It can be difficult to substantiate where the problem lies and who is responsible to correct it. This can result in long outage intervals as seen by the customer when a critical enterprise service is dependent on the resolution.

6.1.2 Impact on Avaya Aura® Communication Manager Converged Systems

Due to the inherent WAN characteristics and problems described in the previous section, the following are some of the behaviors that a customer might potentially see with the Avaya Aura Communication Manager system. These can be attributed to network congestion, various network failures, slow network recovery, and convergence attributes.

More than normal IPSI Interchanges

More than normal Avaya Aura Communications Manager Interchanges

Port Network restarts – “warm” and “cold”

Port Network outages

Media gateway outages

Survivable Remote Servers (formerly known as LSPs) becoming active


A key factor to consider is Avaya Aura® Communication Manager’s timing in attempts to recover links between the CM servers, the port networks, and the H.248 gateways. Control packets are passed between the H.248 gateways, port networks, and the CM call control servers to process digits, update lamps, provide dial tone, and more. Because control packets cannot be processed when the links between the servers and the port network are unavailable, there is an aggressive recovery algorithm for IPSI control links. If the active IPSI connection is lost for longer than the “IPSI Socket Sanity Timeout” value (in seconds), the system will attempt to migrate the links to the duplicate IPSI in the case of a port network configured with duplicated IPSI control boards. An interchange between IPSIs causes a port network warm reset, which should be transparent to the user.

6.2 Converged Control Network and the IPSI Socket Sanity Timer

Network design best practices should be applied to ensure a robust enterprise network/WAN environment. This includes the proper configuration (QoS, speed, duplex, etc.) of all nodes in the Control Network, including all server and IPSI network interfaces. The improved IPSI robustness feature delivered with CM 5.2 provides the best behavior for the most common enterprise network configurations. The use of IPSI duplication as well as diverse-path network routing (OSPF or similar redundant routing) is recommended for high availability and service continuity.

The IPSI Socket Sanity Timeout has a default value of 15 seconds. Adjustments to the timer value should only be made when there is a good understanding of the source and the nature of the network outages being experienced. For network outages that exceed the IPSI Socket Sanity timer, a port network restart is performed to bring the port network back into service. The length of the network outage determines the port network restart level that is required, either Warm or Cold. The PN Cold Reset Delay Timer value specifies the maximum outage duration that can be recovered via a port network Warm restart. All network outages that exceed the PN Cold Reset Delay Timer value (default value 60 seconds) will require a port network Cold restart.

Some customers use duplicated IPSIs to provide a higher level of reliability for their port networks and have configured their enterprise network/WAN to provide independent or redundant paths to support the duplicated IPSIs. If the networks are truly independent, an enterprise network/WAN outage will only affect one of the networks, while the other network remains available to provide service. For a port network with its active IPSI on the failed enterprise network/WAN, a spontaneous IPSI interchange will quickly restore service. However, a spontaneous IPSI interchange guarantees that some data loss will occur, so it is potentially more disruptive than waiting for the current network to recover. Consequently, the interchange recovery action is delayed until the IPSI Socket Sanity timer expires. Recovery is thus a trade-off between waiting for the current network to recover and switching to the known-good network, accepting the associated data loss.

The IPSI Socket Sanity Timeout is an application-level timer that times the total outage, that is, the actual network outage plus the TCP recovery time. If the duration of the typical LAN/WAN outage is known to be greater than 12 to 13 seconds, even a 15-second timer value will not provide enough time for the network to recover before the timer expires. When the timer expires, a spontaneous IPSI interchange will occur. Since the interchange is inevitable in this case, it is better to set a shorter timer value so that the interchange occurs earlier and thus limits the data loss.
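The sketch below summarizes the recovery decision points described above as simple threshold checks. It is a simplified model written for this paper under the stated defaults, not CM code; for example, it ignores the case where both independent networks fail at once.

    IPSI_SOCKET_SANITY_TIMEOUT_S = 15   # default per this paper; administrable
    PN_COLD_RESET_DELAY_TIMER_S = 60    # default per this paper

    def recovery_action(total_outage_s, duplicated_ipsi):
        """Expected recovery action for a control-link outage of the given total
        duration (actual network outage plus TCP recovery time)."""
        if total_outage_s <= IPSI_SOCKET_SANITY_TIMEOUT_S:
            return "socket recovers; outage transparent to end users"
        if duplicated_ipsi:
            # Standby IPSI on an independent, healthy network takes over.
            return "spontaneous IPSI interchange; port network WARM restart"
        if total_outage_s <= PN_COLD_RESET_DELAY_TIMER_S:
            return "port network WARM restart"
        return "port network COLD restart"

    if __name__ == "__main__":
        for outage in (5, 20, 90):
            for dup in (True, False):
                print(f"{outage:>2} s outage, duplicated IPSI={dup}: "
                      f"{recovery_action(outage, dup)}")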

6.3 Survivable Core Server and Converged Control Network

For the purposes of CM release 6.X, CM main call control servers and CM Survivable Core Server (SCS; formerly known as ESS) call control servers will only use the Converged Control Network and need to be configured and engineered for the appropriate levels of security and availability based on their location in the enterprise network.

7 FAILURE SCENARIOS

7.1 Layer 2 Switch to CM Call Control Server Loss of Cable

This scenario covers the malfunction or removal of the Ethernet connection cable serving the active CM call controller server. In the case of a server with NIC bonding there will be a very quick switchover (approx 100ms) to make the standby bonded NIC active to the other L2 switch. Control Network sockets as well as traffic should not be affected. In a server without NIC bonding the recovery will escalate to an interchange to the standby CM call controller server resulting in potential service disruption due to the time to detect the failure, the interchange, and CM WARM restart of up to 20 to 28 seconds. This time is highly dependent on the administration of PE priorities and the IPSI Socket Sanity Timer value.

1. The cable between the L2-1 switch and Communication Manager Server A is cut or fails.
2. After the NIC bonding takeover interval (default 100 ms) has passed, the standby NIC on Communication Manager Server A becomes active.
3. Communication to port networks and H.248 Gateways continues through L2-2 with some packet loss, usually handled at the TCP retry level. Experiments under load have shown this takeover is very nearly seamless.

Figure 6 Loss of Cable Connectivity Between Layer 2 Switch and CM

7.1.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways

No effect should be seen on core Port Networks or H.248 Gateways.


7.1.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways

No effects should be seen on remote Port Networks or H.248 Gateways.

7.2 Layer 2 Switch Failure

This scenario covers the loss of a Layer 2 switch serving the active CM Call controller server due to loss of power.

1. The L2-1 switch fails.
2. In the case of a server with NIC bonding there will be a very quick switchover (approximately 100 ms) to make the standby bonded NIC active to the other L2 switch. Control Network sockets, as well as traffic, should not be affected.

In a server without NIC bonding the recovery will escalate to an interchange to the standby CM call controller server, resulting in potential service disruption due to the interchange and CM WARM restart of between 4 and 12 seconds. Also refer to section 4.6.2 for a detailed availability assessment.

Figure 7 Layer 2 Switch Failure

7.2.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways

A failure mode specific to an L2 core focused architecture occurs when an L2 switch serving an active IPSI interface fails; the recovery path for that port network is then an IPSI interchange. The failure detection interval will be a few seconds longer than the “IPSI Socket Sanity Timeout” interval and results in a WARM restart of the port network. H.248 Gateways with dual control links use the L1 link state (MII) to detect failures and automatically fail the active Ethernet interface over to the standby Ethernet interface.

7.2.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways

No effects should be seen on remote Port Networks or H.248 Gateways.


7.3 Layer 2 Switch Insanity (L1 connectivity maintained, no L2 traffic)

1. Layer 2 switch L2-1 goes insane, losing the ability to handle L2 traffic while maintaining the L1 link state to the attached devices.

2. Communication Manager Server A detects loss of handshake to the Port Networks and H.248 Gateways.

3. After the “IPSI Socket Sanity Timeout” interval expires, the State of Health of Communication Manager Server A is marked as degraded.

The arbitration control between Server A and Server B chooses Server B (talking through L2-2) as the server with the most connectivity, and control interchanges from Server A to Server B.

Figure 8 Layer 2 Switch Insanity

7.3.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways

Since NIC bonding depends on the L1 link state to determine the active physical NIC to use, the insanity of the local L2 switch will not cause a switchover to the standby physical NIC. Thus, the only recovery method is for the Communication Manager application to detect loss of connectivity at the application handshake layer and call for a server interchange to the standby. In addition, since H.248 Gateway link duplication depends on the L1 link state, any H.248 gateways connected to the insane L2 switch will lose connectivity until the switch is powered down or restored to service.

7.3.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways

No effect should be seen on remote Port Networks or H.248 Gateways.


7.4 Layer 3 Switch Failure

1. Layer 3 router L3-1 fails.
2. Layer 3 router L3-2 takes over using HSRP/VRRP.

7.4.1 Effects on Enterprise L2 Core Port Networks and H.248 Gateways

No effect should be seen on core Port Networks or H.248 Gateways.

7.4.2 Effects on Enterprise L3 Core/WAN Port Networks and H.248 Gateways

If the HSRP/VRRP failure recovery interval is longer than the “IPSI Socket Sanity Timeout” interval, then the failure of the L3 route will result in a WARM restart of the port network.

Figure 9 Layer 3 Switch Failure

8 MIGRATION PATHS

8.1 Multi-Connect to Converged Control Network

Convert all EI/ATM connected Port Networks to IPSI controlled Port Networks.

Refer to [Appendix E] of Upgrading to Avaya Aura® Communication Manager for detailed instructions on upgrading fiber controlled port networks to IP Connect.

8.2 IP Connect to Converged Control Network

Customers should consider designing new CM 5.2.1 installations using the single-interface converged network design. High-level summary:


Create static IP addressing schema for all IPSIs to fit enterprise network

Connect the L2 switches to an appropriately configured enterprise network interface (L3 routers).

Convert each IPSI from DHCP to static addressing, connected through Control Network C, one side (A side, B side) at a time.

Verify all IPSI connectivity through Control Network C.

Remove the interface cables for CNA and CNB interfaces.

Proceed with CM 5.2.1 to CM 6.X call control server migration. Full description in [Appendix B: Control Network Consolidation: CM5.2.1 -> IPSI static addressing].

9 NETWORK ENGINEERING GUIDELINES

9.1 Control Bandwidth Requirements on the Enterprise Network

The bandwidth required will depend on the number of port networks (PNs) and H.248 gateways, the number of endpoints (both trunk and station), and the number and complexity of the calls being controlled. Delay and congestion must be kept to a minimum. Signaling delays can noticeably affect the user interface and, in worst-case scenarios, can cause disruptions in service. The enterprise network will need to be engineered to support the basic connectivity and background activity on the port networks and H.248 gateways as well as the randomly arriving call traffic and periodic maintenance.

There will be cases where the control path traverses several segments, each with different configurations and sometimes carrying several signaling channels. In these cases, each segment will have to be engineered for its specific needs. Detailed values and engineering guidelines can be found in Signaling Bandwidths: Estimating IP Bandwidths for ECLIPS Signaling Connections. While that document currently addresses IP-connect systems, the values and principles also apply to Converged Control Network systems.
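The general shape of such an estimate is background signaling per element plus call-driven signaling, as in the sketch below. Every per-element rate in it is a hypothetical placeholder; real values should be taken from the signaling-bandwidth document cited above.

    def control_bandwidth_kbps(port_networks, h248_gateways, calls_per_hour,
                               pn_baseline_kbps, gw_baseline_kbps, per_call_kbits):
        """Steady-state (background) signaling plus call-driven signaling, in kbit/s."""
        background = port_networks * pn_baseline_kbps + h248_gateways * gw_baseline_kbps
        call_driven = calls_per_hour * per_call_kbits / 3600.0  # averaged over the hour
        return background + call_driven

    if __name__ == "__main__":
        # Every rate below is a placeholder chosen only to show the calculation.
        kbps = control_bandwidth_kbps(port_networks=4, h248_gateways=10,
                                      calls_per_hour=20000,
                                      pn_baseline_kbps=8.0, gw_baseline_kbps=2.0,
                                      per_call_kbits=10.0)
        print(f"estimated control bandwidth: {kbps:.1f} kbit/s")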

9.2 Network Tolerances

9.2.1 End-to-End Roundtrip Delay (Jitter, Loss)

As all the connections of concern are TCP, loss and jitter will translate into delay at the application layer. The following requirement includes jitter and loss effects, i.e., delay at the TCP level, for IP Server Interface (IPSI) and H.248 Gateway connections:

Required: Round-trip delay <= 300 ms

Above 100 ms, users may notice sluggishness in the user interface. Above 300 ms, or with significant packet loss, there will be excessive retransmission at the application layer that may bring the link down. Delays, even momentary delays, exceeding 300 ms may cause interchanges of control networks or servers, and may generate alarms.
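A quick way to sanity-check a path against these thresholds is to time a TCP handshake toward the far end, as in the sketch below. The target address and port are placeholders (not a specific IPSI), and handshake time is only a rough proxy for control-link round-trip delay.

    import socket
    import time

    def tcp_handshake_rtt_ms(host, port, timeout=2.0):
        """Time a TCP connect as a rough proxy for round-trip delay."""
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.monotonic() - start) * 1000.0

    def classify(rtt_ms):
        if rtt_ms <= 100:
            return "OK"
        if rtt_ms <= 300:
            return "users may notice sluggishness"
        return "exceeds the 300 ms requirement; interchanges and alarms possible"

    if __name__ == "__main__":
        host, port = "192.0.2.10", 5010   # placeholder address and port, not a real IPSI
        rtt = tcp_handshake_rtt_ms(host, port)
        print(f"{host}:{port} RTT {rtt:.1f} ms: {classify(rtt)}")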

9.2.2 Network Convergence Time and Routing Protocols

The enterprise network should be engineered with routing protocols that are robust and converge quickly when there are changes in the network configuration. OSPF is a high-function, non-proprietary Interior Gateway Protocol. Convergence times of less than 10 seconds can usually be achieved. This will minimize outages and may be fast enough to cause minimal disruption, depending on the setting of the “IPSI Socket Sanity Timeout” interval. Breaking the network into smaller logical areas will improve the performance of OSPF.


9.3 QoS Requirements

QoS requirements exist to protect the signaling traffic from network events. The local connections between the servers and the port networks and H.248 gateways can be contained in dedicated switches, just as they are typically installed in a pre-6.0 installation. Layer 2 connections to the Converged Control Network may be isolated on a separate VLAN. This VLAN should have priority over non-voice traffic on the same switch.

When tying the local LAN/VLAN into the enterprise network, strong access lists or firewalls should be used to prevent Denial of Service (DoS) attacks and broadcast storms from entering the local control networks in the event of a virus on the corporate network. The current ports used by Avaya Aura Communication Manager can be requested from Avaya sales or sales support. The document “Port Usage Matrix: Updated for CM 6.2” identifies the ports that must be opened for IPSI-controlled port networks and H.248 gateways. Access Control Lists can be implemented to prevent unwanted traffic from entering the control subnets, but care must be taken to ensure that unwanted traffic does not overload the router providing access to the enterprise network, which could disrupt the routed links to remote port networks. Dedicating routers and WAN links for signaling can avoid this weakness at an increased cost in network resources.

Signaling traffic should be carried through the network using low-latency queuing mechanisms. Typically, you would tag the signaling with a DSCP of 34 and implement the appropriate queuing on the intervening routers. The enterprise network equipment supplier should be able to provide specific guidelines on setting up an LLQ or other suitable QoS design. Call Admission Control (CAC) can be used to control the bandwidth of bearer traffic across routed links. If you cannot assure adequate bandwidth in your LLQ for both signaling and bearer traffic, consider putting only your Converged Control Network traffic in the priority queue and taking a possible hit on the bearer traffic to assure robustness of the port network and H.248 gateway control. This must be engineered on a case-by-case basis.
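
One way to confirm that control traffic actually arrives with the intended DSCP 34 (AF41) marking is to capture on the control interface and match on the DSCP bits of the IP header (DSCP occupies the upper six bits of the TOS byte, so DSCP 34 appears as 0x88). A minimal sketch; the interface name is a placeholder, and the IPSI signaling ports are deliberately not hard-coded because the authoritative list is in the Port Usage Matrix document:

    #!/bin/bash
    # Show packets marked with DSCP 34 (AF41) on the control interface.
    IFACE=${1:-eth0}          # placeholder control interface name
    tcpdump -n -v -i "$IFACE" 'ip and (ip[1] & 0xfc) == 0x88'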

9.3.1 Avaya Aura Communications Manager 802.1p/Q Tagging and DSCP

On an Avaya Aura Communication Manager Converged IP-Connect system, the port network and H.248 gateway control traffic will traverse the enterprise IP network. Support for 802.1p/Q priority tagging will not be provided in the Avaya Aura Communication Manager call control servers. If QoS is desired and properly configured on this network, it will be necessary to have the L2 switch interface to the server tag the port network control traffic. With only a single interface available, all traffic on this interface will be tagged at the L2 switch interface. DSCP marking of the control network packets, as specified on the “change ipserver-interface X” form, will continue to be supported.

9.4 Network Configuration Planning

Figure 10 depicts a typical Avaya Aura Communication Manager IP-Connect network configuration for a large enterprise where the Control Network is part of the enterprise network, yet isolated with its own subnet and VLANs. Note that the Control and Voice networks have their own VLANs.


Figure 10 Typical Communications Manager 5.X IP-Connect Network Configuration

9.4.1 Converged Control Network Configurations

For Avaya Aura Communication Manager 6.0 and later, all control network traffic for IPSI port networks and H.248 gateways will route through a single active network interface. Even with NIC bonding administered, one interface will be active and the other interface will be in standby (inactive) mode.
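
For readers unfamiliar with Linux NIC bonding, the active/standby behavior described above corresponds to the bonding driver's active-backup mode. The sketch below shows what such a bond looks like at the operating-system level in generic RHEL-style network scripts; it is illustrative only, since on Communication Manager 6.X the bond is administered through the System Platform web console (see Appendix A), and the interface names and address are placeholders.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative only)
    DEVICE=bond0
    IPADDR=192.0.2.20          # placeholder control address
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for the
    # second slave, e.g. ifcfg-eth2; placeholder device names).  Only one
    # slave carries traffic at a time; the other remains in standby.
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none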


Figure 11 Typical Communications Manager 6.X IP-Connect Network Configuration

9.4.2 DHCP vs. Static Administration of IPSI

For Avaya Aura Communication Manager 6.X all IPSIs will be assigned static IP addresses. CM’s DHCP capability for this purpose is no longer supported.

9.4.3 Static Routes


For Avaya Aura Communication Manager 6.X, there should be no need to administer or keep any static routes. All control network traffic is routed through the single active interface.
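
A quick check from the server command line that no stale static routes remain after a conversion is sketched below; anything it prints that is neither the default route nor a directly connected subnet is a candidate leftover route to review (a minimal sketch, not an exhaustive audit):

    #!/bin/bash
    # List routes other than the default route and kernel-installed
    # connected subnets; review anything printed as a possible leftover
    # static route from the old CNA/CNB design.
    ip route show | grep -v '^default' | grep -v 'proto kernel'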

10 INSTALLATION AND ADMINISTRATION

Specific details of the steps to migrate from a CNA/CNB control network to a Converged Control Network are in [Appendix B: Control Network Consolidation: CM5.2.1 -> IPSI static addressing].

11 REFERENCES

1. Administration for Network Connectivity
2. Avaya Aura® Solution Design Considerations and Guidelines
3. Avaya Aura™ Communication Manager Security Design
4. Avaya Aura™ Communication Manager Software Based Platforms High Availability Solutions
5. Avaya IP Voice Quality Network Requirements
6. Signaling Bandwidths: Estimating IP Bandwidths for ECLIPS Signaling Connections

Figure 12 S8800 Physical Layout on Back Panel


12 APPENDIX A: DUPLICATED SERVER NIC BONDING

All CM server interfaces should be set for auto-negotiation.

Using the Avaya Aura System Platform web console, navigate to the following menu:

Server Management -> Network Configuration

– Click on “Add Bond” to bring up the bonding options.
– Add a name, usually “bond#”, where # is 0, 1, 2, …
– Choose a Slave Ethernet from the list of free Ethernet devices.
– Clicking on “More” under the “Advanced” column allows specific options to be administered.
– Clicking “Save” will result in a server reboot and loss of service. It is suggested that this be done only on the standby server to prevent service outages.
– Click “Save” at the bottom of the screen to commit the changes (the resulting bond state can be verified as sketched below).
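
Once the server returns to service, the bond state and the currently active slave can be confirmed from the server command line; a minimal sketch, assuming the bond was named bond0:

    #!/bin/bash
    # The Linux bonding driver exposes per-bond status under /proc.
    BOND=${1:-bond0}
    grep -E 'Bonding Mode|MII Status|Currently Active Slave|Slave Interface' \
        /proc/net/bonding/"$BOND"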


13 APPENDIX B: CONTROL NETWORK CONSOLIDATION: CM5.2.1 -> IPSI STATIC ADDRESSING

This procedure is executed on a pre-CM6 (CM 5.2.1) system.

Planning:

– Collect the location of all IPSI boards in CN-A and/or CN-B.
– On a worksheet, assign new public network IP addresses to each CN-A and/or CN-B IPSI in the system. These addresses should be consistent with later merging into a single L2 domain.
– Note whether all port networks have duplicated IPSI boards. If an IPSI port network contains only one IPSI (not duplicated IPSIs, which may be the case for some port networks), the conversion to a Converged Control Network affects service for that port network.

Note: this activity will determine whether or not the changes will be service impacting; duplicated IPSIs imply that the changes may be executed on each board while it is in standby mode.

Prerequisites:
• IPSIs are running the latest available firmware.
• DHCP service for private control networks is disabled on the Communication Manager servers.
• A SAT session is started.

Procedure:

– The network switches supporting the IPSIs will need a port assigned to connect to the enterprise network routing system.
– Cabling for the network switch connections to the enterprise network will be needed (do not plan to use the former private network cable).
– Busy out the standby server (server -b) to prevent an inadvertent interchange. Note: if a CM restart occurs during this process, recovery may not be complete and you may temporarily lose service to multiple port networks until they are re-administered.
– Connect the network switch devices (supporting the IPSIs) to the public network. Note: do not disconnect the Ethernet switch private network from the Communication Manager servers until instructed.
– If there has never been a public network assigned to the server community, login to each of the servers as craft and, from the command line, type cnc on.
– Via the SAT, run set ipserver-interface a-all.


– Once the activity has completed, run list ipserver-interface to confirm all a-side boards are active.
– If all b-side boards are not in standby, interchange the boards individually and debug issues until the b-side boards in all port networks are in service and in standby mode.
– Via the SAT, run change ipserver-interface X for each IPSI on the B control network. Tab to the Secondary IPSI fields, enter the new addresses into the form, and then submit the changes. Continue until all b-side IPSI addresses have been changed.
– When all b-side IPSIs have been successfully changed, run the list ipserver-interface command and confirm the b-side IPSIs have come into service (the network switch is connected to the public network, so these boards should show a State of Health of 0.0.0.0 on that form, indicating all aspects of connectivity are good).


– At this point all b-side IPSIs should have static IP addresses administered that will be valid on the enterprise network. Once all b-side IPSIs have been deemed in service, run the set ipserver-interface b-all command to cause the IPSIs to interchange (all a-side IPSIs need to be in standby and in service; debug if not).
– Repeat the address changes for all a-side IPSIs, this time tabbing to the Primary IPSI fields.
– Via the list ipserver-interface command, confirm all IPSI interfaces show a State of Health of 0.0.0.0 on the form.
– Via the list sys-link command, confirm the associated EAL links are up (this data is taken from a different maintenance interface than the previous command).


– Place a few IP and TDM type calls and exercise a couple of IPSI interchanges to confirm reliability.
– Run save translation on the active CM server. It should return success; otherwise the standby server was not updated.


– Release the standby server from busy out (server -r). Make sure all IPSI boards come into service on the standby server by running the statusserver command on either the active or standby server after the refresh, until all links are up (a convenience loop is sketched below).
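
A trivial convenience loop for the step above, re-running the statusserver command every 30 seconds until the operator confirms that all links are up (interrupt with Ctrl-C when done); this is only a sketch and assumes statusserver is on the server's command path:

    #!/bin/bash
    # Repeat statusserver until all IPSI links are confirmed up.
    while true; do
        statusserver
        sleep 30
    done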


– Change the management IP addresses of the CNA/B Ethernet switches to an available address in the newly connected enterprise network. Note that on most Ethernet switches a reboot is required to make the new address active; reboot only when the respective set of IPSIs is in standby mode to prevent loss of service.
– Disconnect and remove the Ethernet cables from the server CNA/B ports and the former CNA/B switches.
– For complete integration with CM 6.X in a Converged Control Network, all of the IPSI/CM L2 switches need to be joined into a single L2 domain due to the requirements of NIC bonding.
– After completing the Converged Control Network conversion, upgrade the specific server configuration as per Upgrading to Avaya Aura® Communication Manager.


14 APPENDIX C: IP SERVER INTERFACE BOARD

The IP Server Interface (IPSI) board is installed in a G600, G650, CMC1, MCC1, or SCC1 gateway and is the gateway’s interface for communicating with the servers. Most of the programming for an IPSI board is done on the SAT ipserver-interface form, which has the commands change ipserver-interface #, display ipserver-interface #, and list ipserver-interface.

– Location is the board location, or slot number.
– Host is the board’s static IP address, or the hostname if DHCP is used.
– DHCP ID is the hostname.
– Socket Encryption is enabled by default on IP-Connect systems and disabled on Multi-Connect systems.
– When QoS is enabled, the DiffServ parameters contain the values to be applied to the call control links from the call server to this IPSI board. These values are not applied to the IPSI board itself.
– Even though their administration remains visible, the 802.1p priority tagging values will not be used in Avaya Aura Communication Manager 6.0.

This form can also be used to activate encryption on the IPSI control link.

The IPSI’s speed/duplex and L2/L3 tagging parameters are configured on the board itself rather than via SAT forms. From the IPSI board, type ipsilogin at the [IPSI]: prompt and enter the login name and password to access the [IPADMIN]: prompt. The commands to display and configure the speed and duplex are listed in Appendix E. The commands to display and configure the L2 and L3 priority values are show qos, set vlan tag, set vlan priority, and set diffserv. Please consult a network administrator before setting these values.


©2010 Avaya Inc. All Rights Reserved. Avaya, the Avaya Logo, and Avaya Aura are trademarks of Avaya Inc. All trademarks identified by ® and ™ are registered trademarks or trademarks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners. The information provided in this White Paper is subject to change without notice. The configurations, technical data, and recommendations provided are believed to be accurate and dependable, but are presented without express or implied warranty. Users are responsible for their application of any products specified in this White Paper.