White Paper


Integrate Bullion S Server with FlexPod Converged Infrastructure

Technical Considerations

August 2017


Contents

Introduction
  Purpose of this document
  Audience
  Document reference
Reference architecture overview
Connecting bullion server to FlexPod
  Bullion server PCIe blade slots and their constituents
  Network considerations for SAP HANA TDI
  Strategy for bullion server connectivity to FlexPod
  Bullion server: Network configuration best practices for SAP HANA TDI
  Additional network optimization
For more information


Introduction

Industry trends indicate a major shift in data center design and management toward converged infrastructure. Converged infrastructure can meet the agility, flexibility, and scalability needs of modern data centers and ease the deployment of applications such as SAP HANA. The SAP HANA Tailored Datacenter Integration (TDI) delivery model allows customers to use existing hardware and infrastructure components for their SAP HANA environments.

Cisco and NetApp have partnered to deliver a series of FlexPod solutions that combine servers, storage resources, and the network fabric to create an agile, efficient, and scalable platform for hosting applications. With Cisco Unified Computing System™ (Cisco UCS®) servers and NetApp storage arrays listed in SAP's Certified and Supported SAP HANA Hardware Directory, in the Certified Appliances and Certified Enterprise Storage sections respectively, FlexPod qualifies for TDI implementations of SAP HANA.

Bull SAS bullion is a powerful, reliable, and flexible high-end x86 server designed to help organizations improve performance and increase agility. The TDI solution with Bull combines bullion S servers certified in appliance mode with other SAP-certified infrastructure components that already exist in the customer's landscape.

Bullion S servers integrated with FlexPod provide a unique combination with similar benefits when you implement SAP HANA. In this scenario, NetApp storage provides the common base from which Cisco UCS and bullion servers can boot and also derive their SAP HANA persistence partitions.

Purpose of this document

This document provides configuration guidelines for connecting bullion servers to FlexPod converged infrastructure. It focuses on the networking aspects required for seamless integration.

Audience

This document is intended for architects, engineers, and administrators who are responsible for configuring and deploying SAP HANA TDI on shared and integrated infrastructure using FlexPod and bullion servers. It assumes that the reader has a basic knowledge of FlexPod, Cisco UCS, NetApp storage, bullion S servers, Linux, and SAP HANA TDI.

Document reference

Please refer to the Cisco® Validated Design document FlexPod Datacenter for SAP Solution, published in July 2017, for a deeper understanding of FlexPod converged infrastructure design principles and configuration. This document builds on that design, incorporating bullion servers for SAP HANA and addressing the networking aspects of interconnecting them in the TDI scenario.

Also refer to the bullion for SAP HANA User's Guide to familiarize yourself with bullion for SAP HANA and to gain a deeper understanding of bullion S system setup.

Reference architecture overview

The FlexPod data center solution for SAP HANA with NetApp FAS storage provides an end-to-end architecture with Cisco and NetApp technologies that support SAP HANA workloads with high availability and server redundancy features.

The architecture uses Cisco UCS B-Series Blade Servers and C-Series Rack Servers with NetApp All-Flash FAS (AFF) A300 series storage attached to Cisco Nexus 9000 Series Switches. It provides Network File System (NFS) access and uses the Internet Small Computer System Interface (iSCSI) protocol. The C-Series Rack Servers are connected directly to the Cisco UCS fabric interconnects with the single-wire management feature. This infrastructure is deployed to provide Preboot eXecution Environment (PXE) and iSCSI boot options for hosts with file-level and block-level access to shared storage.

Figure 1 shows the FlexPod data center reference architecture for SAP HANA workloads, highlighting the hardware components and the network connections for a configuration with IP-based storage.

Figure 1. FlexPod data center reference architecture for SAP HANA


Bullion servers are composed of one or more compute modules, populated with memory and I/O blades and interconnected by a connection box, as shown in Figure 2.

Figure 2. Main components of bullion S (Source: Bull)

Bullion for SAP HANA servers for the TDI use case are equipped with the same hardware as the certified SAP HANA appliance. Server configuration follows a modular approach, with 1 to 8 compute modules connected through the connecting box, paving the way from S2 to S16 server configurations as shown in Figure 3. The storage differs from that delivered with an appliance, or storage is not delivered at all because existing storage is used.

Figure 3. Modular approach to server configuration with bullion S server (Source: Bull)


The following lists the particular bullion server configurations that Cisco resells. (Source: Bull)


Connecting bullion server to FlexPod

This section discusses how to connect the bullion server to FlexPod.

Bullion server PCIe blade slots and their constituents

The bullion S servers support up to 7 PCI Express (PCIe) blades. PCIe blades support storage and Ethernet cards. The PCIe blade slots are numbered from left to right at the rear side of the server (Figure 4).

Figure 4. Rear view of a sample S2 node: PCIe slots numbered

The bullion server supports PCIe blades as follows:

● Only host bus adapter (HBA) storage cards are supported.

● Only one MegaRAID card is allowed per server. Slot 0 is reserved for this card.

● A minimum of two storage cards are required and must be added to the master module. Storage cards are added beginning with the lowest module number (master), ending with the highest module number (slave farthest from the master), and following a specific slot order: 4, 1, 5, 2.

● A minimum of three network cards are required: two Ethernet 10-Gbps cards and one Ethernet 1-Gbps card. Network cards are added beginning with the highest module number (slave farthest from the master), ending with the lowest module number (master), and following a specific slot order: 3, 6, 2, 5.

Network considerations for SAP HANA TDI

SAP categorizes the SAP HANA system networks into three logical communication zones (Figure 5):

● Client zone: For external communication networks (for example, for application server and client network connectivity)

● Internal zone: For internal communication (for example, internode communication in a scale-out system and replication traffic in a distributed configuration)

● Storage zone: For all storage-related networks (for example, NFS-based /hana/shared, backup network, and NFS-based data and log file system access when NFS is used)

For the bullion server, the recommended approach is to boot locally from the MegaRAID-managed internal SAS disks configured in RAID 1. The bullion server BIOS doesn't support iSCSI booting; however, PXE booting is supported.


Figure 5. SAP HANA TDI network requirements (Source: SAP SE)


Table 1 summarizes the various use cases that these networks serve and their relevance to a bullion server and FlexPod integration scenario.

Table 1. SAP HANA TDI network requirements mapping for bullion and FlexPod integration scenario

Client zone

● Application server network: Communication between the SAP application server and the database. Required. Minimum bandwidth: 1 or 10 Gigabit Ethernet.

● Client network: Communication between the user or client application and the database. Required. Minimum bandwidth: 1 or 10 Gigabit Ethernet.

Internal zone

● Internode network: Node-to-node communication within a scale-out configuration. Required for scale-out configurations only. Minimum bandwidth: 10 Gigabit Ethernet.

● System replication network: SAP HANA system replication. Optional. Minimum bandwidth: to be determined with the customer.

Storage zone

● Backup network: Data backup. Optional. Minimum bandwidth: 10 Gigabit Ethernet or 8-Gbps Fibre Channel.

● Storage network: Communication between nodes and storage. Optional for scale-up systems; required for NFS-based /hana/shared file system access in a scale-out scenario. Minimum bandwidth: 10 Gigabit Ethernet.

Infrastructure-related networks

● Administration and out-of-band management network: Bullion server management. Required. Minimum bandwidth: 1 Gigabit Ethernet.

● Boot network: System boot through PXE. Required only when you plan to use the IP-based PXE boot mechanism defined in FlexPod. Minimum bandwidth: 10 Gigabit Ethernet.


Strategy for bullion server connectivity to FlexPod

You should use one of the 1-Gbps network interface card (NIC) ports for the administration and out-of-band (OOB) management network through a connection to a 1 Gigabit Ethernet port on the Cisco Nexus 2348TQ Fabric Extender. The Cisco Nexus 93180LC-EX Switch ports connect directly to the host and are configured as a port channel, which becomes part of a virtual port channel (vPC). At the host OS level, a Link Aggregation Control Protocol (LACP) bond interface is configured, enslaving all the 10-Gbps interfaces of the host. Various VLAN interfaces are created on top of the LACP bond interface to handle the required networks' traffic in a secure and efficient way. Figure 6 summarizes the connectivity topology.

Figure 6. Simplified topology of bullion connectivity to Cisco Nexus 9000 Series Switches


Figure 7 shows the landscape with connectivity details. The steps to achieve this connectivity are as follows:

1. Configure out-of-band management. Connect a 1-Gbps NIC port to a port on the Cisco Nexus 2000 Series Fabric Extender for administration and management access. Configure this fabric extender port as an access port to allow the out-of-band administration and management VLAN.

2. Connect one 10-Gbps NIC port to N9K-A and the other to N9K-B using QSFP-4X10G-AOC5M cables (Figure 8) or any other compatible transceiver modules (refer to the compatibility matrix for more options). Cables such as QSFP-4X10G-AOC5M are required because the Cisco Nexus 93180LC-EX has only 40- and 100-Gbps ports, whereas only 10-Gbps NICs are supported in the bullion server for SAP HANA configuration.

Figure 7. Connectivity diagram: Bullion server to FlexPod


Figure 8. Cable QSFP-4X10G-AOC5M

The configuration principles on the Cisco Nexus 9000 Series Switch side follow the standard vPC configuration guidelines used in FlexPod; a minimal illustrative sketch appears below. The following section then discusses the bullion server host-side network configuration that complements it.
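As a minimal sketch of what this switch-side configuration can look like on the Cisco Nexus 93180LC-EX, the following example breaks out one 40-Gbps port into 4 x 10 Gbps for the AOC cable and bundles a bullion-facing interface into a vPC port channel. The breakout port, port-channel number, vPC number, and VLAN range (500 to 502, matching the host example later in this document) are illustrative assumptions, not prescribed values.

! Illustrative sketch for N9K-A; N9K-B mirrors this configuration
! Break out one 40-Gbps port into 4 x 10 Gbps for the QSFP-4X10G-AOC5M cable
interface breakout module 1 port 31 map 10g-4x
!
interface port-channel41
  description Bullion server bond0
  switchport mode trunk
  switchport trunk allowed vlan 500-502
  mtu 9216
  spanning-tree port type edge trunk
  vpc 41
!
interface Ethernet1/31/1
  description Bullion 10-Gbps NIC port
  switchport mode trunk
  switchport trunk allowed vlan 500-502
  mtu 9216
  ! mode active enables LACP, matching the host's 802.3ad bond
  channel-group 41 mode active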

Bullion server: Network configuration best practices for SAP HANA TDI

On the server side, the recommended approach is to configure LACP bonding, enslaving all the available 10-Gbps interfaces, and then to configure VLANs over that LACP bond interface. The example here uses a bullion S8 server equipped with four 10-Gbps NICs and running the SUSE Linux Enterprise Server (SLES) 12 for SAP Applications SP1 operating system. The configuration steps for this example are presented here.

1. Check the available interfaces on the server.

cishanabs:~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6e brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6f brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:e3:52:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.50/24 brd 172.23.0.255 scope global eth2
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:e3:52:d5 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 00:10:9b:0c:f0:fa brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:c5:af:04 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:c5:af:0c brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:f7:47:68 brd ff:ff:ff:ff:ff:ff
11: eth9: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
12: eth10: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:c5:a7:36 brd ff:ff:ff:ff:ff:ff
13: eth11: <BROADCAST,MULTICAST> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:90:fa:c5:a7:3e brd ff:ff:ff:ff:ff:ff
cishanabs:~ #

2. Identify the 10 Gigabit Ethernet interfaces using the ethtool command. In the sample bullion S server configuration, eth4 through eth11 are 10 Gigabit Ethernet interfaces.

cishanabs:~ # for i in `seq -w 0 9`; do ethtool eth$i > /tmp/ethinfo; head -n 5 /tmp/ethinfo; done
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
Settings for eth2:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
Settings for eth3:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
Settings for eth4:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth5:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth6:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth7:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth8:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth9:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
cishanabs:~ # for i in `seq -w 10 11`; do ethtool eth$i > /tmp/ethinfo; head -n 5 /tmp/ethinfo; done
Settings for eth10:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
Settings for eth11:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
cishanabs:~ #
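As an alternative to scanning the ethtool output by hand, a short loop such as the one below (a sketch; it simply greps each interface's supported link modes) flags the 10-Gbps-capable interfaces directly:

cishanabs:~ # for n in /sys/class/net/eth*; do
>   n=${n##*/}   # strip the /sys/class/net/ prefix to get the interface name
>   ethtool "$n" | grep -q '10000baseT/Full' && echo "$n is 10 Gigabit Ethernet capable"
> done
eth4 is 10 Gigabit Ethernet capable
...
eth11 is 10 Gigabit Ethernet capable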

3. Configure slave interfaces using the available 10 Gigabit Ethernet interfaces. In the sample configuration, the eth4 through eth11 configuration files are updated as shown here.

cishanabs:~ # cat /etc/sysconfig/network/ifcfg-eth4
BOOTPROTO='none'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU='9000'
NAME='OneConnect NIC (Skyhawk)'
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='hotplug'
cishanabs:~ #
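Because every slave uses the same minimal settings, you can replicate the eth4 file rather than editing eight files by hand. The loop below is a convenience sketch that assumes eth4 through eth11 are the intended slaves:

cishanabs:~ # for n in $(seq 5 11); do
>   # NAME is descriptive only; adjust it per adapter model if desired
>   cp /etc/sysconfig/network/ifcfg-eth4 /etc/sysconfig/network/ifcfg-eth$n
> done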

4. Configure LACP bonding interface ifcfg-bond0.

cishanabs:~ # cat /etc/sysconfig/network/ifcfg-bond0
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer3+4'
BONDING_SLAVE0='eth4'
BONDING_SLAVE1='eth5'
BONDING_SLAVE2='eth6'
BONDING_SLAVE3='eth7'
BONDING_SLAVE4='eth8'
BONDING_SLAVE5='eth9'
BONDING_SLAVE6='eth10'
BONDING_SLAVE7='eth11'
BOOTPROTO='none'
ETHTOOL_OPTIONS=''
MTU='9000'
STARTMODE='auto'
BONDING_MASTER_UP_ENSLAVE='yes'
cishanabs:~ #

5. Restart the network to make the changes take effect.

cishanabs:~ # service network restart
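On SLES 12 the network service is managed by wicked, so you can alternatively reapply only the changed interface configurations:

cishanabs:~ # wicked ifreload all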

6. Check the status of the interfaces on the server. Observe that the slave Ethernet interfaces eth4 through eth11 share the same MAC address as the master bond interface.

cishanabs:~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6e brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6f brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:e3:52:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.50/24 brd 172.23.0.255 scope global eth2
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:e3:52:d5 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
11: eth9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
12: eth10: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
13: eth11: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
27: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 00:90:fa:f7:47:70 brd ff:ff:ff:ff:ff:ff
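To confirm that LACP actually negotiated with the vPC peer switches, you can also inspect the bonding driver state (output abbreviated; the exact lines vary with the kernel version):

cishanabs:~ # grep -A1 'Bonding Mode' /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)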

7. Create VLAN interface files according to the number of VLANs defined in the FlexPod configuration. The sample configuration shows the VLAN interface configuration files for VLANs 500, 501, and 502.

cishanabs:~ # vi /etc/sysconfig/network/ifcfg-vlan500
STARTMODE='auto'
ETHERDEVICE='bond0'
VLAN_ID='500'
IPADDR='192.168.100.50/24'

cishanabs:~ # vi /etc/sysconfig/network/ifcfg-vlan501
STARTMODE='auto'
ETHERDEVICE='bond0'
VLAN_ID='501'
IPADDR='192.168.101.50/24'

cishanabs:~ # vi /etc/sysconfig/network/ifcfg-vlan502
STARTMODE='auto'
ETHERDEVICE='bond0'
VLAN_ID='502'
IPADDR='192.168.102.50/24'

Note: Be sure that the VLANs configured here match those defined and used in the landscape.

8. Restart the network to make the changes take effect.

cishanabs:~ # service network restart

9. Check the status of interfaces on the server. Observe that VLAN interfaces are now available.

cishanabs:~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6e brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:38:b0:1a:6f brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a0:36:9f:e3:52:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.50/24 brd 172.23.0.255 scope global eth2
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:e3:52:d5 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
11: eth9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
12: eth10: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
13: eth11: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
15: vlan502@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.50/24 brd 192.168.102.255 scope global vlan502
       valid_lft forever preferred_lft forever
16: vlan501@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.50/24 brd 192.168.101.255 scope global vlan501
       valid_lft forever preferred_lft forever
17: vlan500@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 00:10:9b:0c:f1:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.50/24 brd 192.168.100.255 scope global vlan500
       valid_lft forever preferred_lft forever
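In addition to ip addr, you can verify the 802.1Q tag assigned to each VLAN interface (output abbreviated):

cishanabs:~ # ip -d link show vlan500 | grep vlan
    vlan protocol 802.1Q id 500 <REORDER_HDR>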

Additional network optimization

It is important to optimize the network configuration further at the OS level to meet the SAP HANA TDI network key performance indicator (KPI) requirements, especially the internode bandwidth requirement if the bullion server is part of a scale-out system configuration. In addition, correct Ethernet interface settings benefit overall network performance. The main network configuration settings for the bullion server to complement the FlexPod setup are presented here.


1. Tune the network adapter receive (RX) and transmit (TX) buffer settings.

Adapter buffer defaults are commonly set to a smaller size than the maximum. Be sure that the current hardware settings are at or near the preset maximum values.

cishanabs:~ # ethtool -g eth4
Ring parameters for eth4:
Pre-set maximums:
RX:        1024
RX Mini:   0
RX Jumbo:  0
TX:        2048
Current hardware settings:
RX:        1024
RX Mini:   0
RX Jumbo:  0
TX:        2048
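On this adapter the current settings already match the preset maximums. If they did not, you could raise them across all bonding slaves with ethtool -G; the loop below is a sketch assuming eth4 through eth11 are the slaves:

cishanabs:~ # for n in $(seq 4 11); do ethtool -G eth$n rx 1024 tx 2048; done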


2. Tune interrupt coalescence.

Turn off adaptive interrupt coalescence for the bonding slaves. This setting effectively tells the adapter to interrupt the kernel immediately upon reception of any traffic. Correspondingly, with microseconds (usecs) set to 0, the NIC does not wait before interrupting the kernel.

cishanabs:~ # ethtool -C eth4 adaptive-rx off rx-usecs 0 rx-frames 0
rx-usecs unmodified, ignoring
rx-frames unmodified, ignoring
cishanabs:~ # ethtool -c eth4
Coalesce parameters for eth4:
Adaptive RX: off  TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
rx-usecs: 0
rx-frames: 0
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 0
tx-frames: 0
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 128
rx-frame-high: 0
tx-usecs-high: 128
tx-frame-high: 0
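The same coalescence settings should be applied to every bonding slave, and because ethtool settings do not survive a reboot, they need to be reapplied at boot time (for example, from a boot-time script). The loop below is a sketch assuming eth4 through eth11 are the slaves:

cishanabs:~ # for n in $(seq 4 11); do ethtool -C eth$n adaptive-rx off rx-usecs 0 rx-frames 0; done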

3. Tune the adapter transmit queue length.

The transmit queue length value determines the number of packets that can be queued before transmission. The default value is 1000, but for the bond and VLAN interfaces this value is reset to 0. It is a good practice to verify this value and to set it to 1000 explicitly, if needed.

cishanabs:~ # ip link set dev bond0 txqueuelen 1000
cishanabs:~ # ip link set dev vlan500 txqueuelen 1000
cishanabs:~ # ip link set dev vlan501 txqueuelen 1000
cishanabs:~ # ip link set dev vlan502 txqueuelen 1000
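A loop keeps this compact, and because ip link settings also do not persist across reboots, the same commands should be reapplied at boot time:

cishanabs:~ # for dev in bond0 vlan500 vlan501 vlan502; do ip link set dev $dev txqueuelen 1000; done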

4. Tune the /etc/sysctl.conf parameters.

The default values for the read and write memory buffers are usually small. You should use higher values, as suggested here, to satisfy the SAP HANA TDI network KPI requirements.

#disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
#
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
fs.inotify.max_user_watches = 65536
kernel.shmmax = 9223372036854775807
kernel.sem = 1250 256000 100 8192
kernel.shmall = 1152921504606846720
kernel.shmmni = 524288
# SAP HANA Database
# Next line modified for SAP HANA Database on 2016.01.04_06.52.38
vm.max_map_count=588100000
fs.file-max = 20000000
fs.aio-max-nr = 196608
vm.memory_failure_early_kill = 1
#
net.core.rmem_max = 25165824
net.core.wmem_max = 25165824
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
##
net.core.optmem_max = 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_rmem = 524288 16777216 25165824
net.ipv4.tcp_wmem = 524288 16777216 25165824
##
net.core.somaxconn=1024
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_syncookies = 1
##
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_retries2 = 6
net.ipv4.tcp_keepalive_time = 1000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mtu_probing=1
# Linux SAP swappiness recommendation
vm.swappiness=10
# Next line added for SAP HANA Database on 2015.09.16_02.09.34
net.ipv4.ip_local_port_range=40000 65300
#For background information, see SAP Note 2205917 and 1557506
vm.pagecache_limit_mb = 0
vm.pagecache_limit_ignore_dirty = 1
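After editing /etc/sysctl.conf, load the new values and spot-check one of them:

cishanabs:~ # sysctl -p /etc/sysctl.conf
cishanabs:~ # sysctl net.core.rmem_max
net.core.rmem_max = 25165824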


For more information

● Cisco Nexus 9000 Series NX-OS Interfaces Configuration Guide

● vPC configuration with Cisco Nexus 9000 Series Switches

● Configuring a VLAN device over a bonded interface with Red Hat Enterprise Linux (RHEL)

● Red Hat Enterprise Linux Network Performance Tuning Guide

● SUSE OS Tuning and Optimization Guide

Printed in USA C11-739548-00 08/17