



ADAPTER CARDS

ConnectX®-2 VPI with CORE-Direct™ Technology
Single/Dual-Port Adapters with Virtual Protocol Interconnect

BENEFITS
– One adapter for InfiniBand, 10 Gigabit Ethernet or Data Center Bridging fabrics
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth and low-latency services
– Reliable transport
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes

KEY FEATURES
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Selectable 10, 20, or 40Gb/s InfiniBand or 10 Gigabit Ethernet per port
– Single- and Dual-Port options available
– PCI Express 2.0 (up to 5GT/s)
– CPU offload of transport operations
– CORE-Direct application offload
– GPU communication acceleration
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Fibre Channel encapsulation (FCoIB or FCoE)
– RoHS-R6

ConnectX-2 adapter cards with Virtual Protocol Interconnect (VPI) supporting InfiniBand and Ethernet connectivity provide the highest performing and most flexible interconnect solution for Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallel processing, transactional services and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation. ConnectX-2 with VPI also simplifies network deployment by consolidating cables and enhancing performance in virtualized server environments.

Virtual Protocol Interconnect
VPI-enabled adapters make it possible for any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX-2 port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. FlexBoot™ provides additional flexibility by enabling servers to boot from remote InfiniBand or LAN storage targets. ConnectX-2 with VPI and FlexBoot simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
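The port personality that auto-sense settles on is visible to software through the standard verbs interface. The short C sketch below is illustrative rather than part of this datasheet: it lists local RDMA devices with libibverbs and prints the link layer each one reports; querying only port 1 is an assumption made for brevity.

/* Minimal sketch (not from the datasheet): list local RDMA devices with
 * libibverbs and report whether each port came up as InfiniBand or Ethernet,
 * which is the state VPI auto-sense settles on.  Port 1 only, for brevity. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);

    if (!devs)
        return 1;

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_port_attr port;

        if (!ctx)
            continue;
        if (ibv_query_port(ctx, 1, &port) == 0)   /* query port 1 only */
            printf("%s: link layer %s\n",
                   ibv_get_device_name(devs[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ?
                   "Ethernet" : "InfiniBand");
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}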

World-Class Performance
InfiniBand – ConnectX-2 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading routine activities from the CPU, which leaves more processor power for the application. Network protocol processing and data movement overhead such as InfiniBand RDMA and Send/Receive semantics are completed in the adapter without CPU intervention. CORE-Direct™ brings the next level of performance improvement by offloading application overhead such as MPI collective operations, including data broadcasting and gathering as well as global synchronization communication routines. GPU communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies to significantly reduce application run time. ConnectX-2 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
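CORE-Direct targets exactly the kind of collective exchange sketched below. This is a generic MPI example, not code from this datasheet; on a CORE-Direct-capable stack, the broadcast/gather and synchronization traffic behind a call such as MPI_Allreduce can be progressed by the adapter instead of the host CPU.

/* Generic MPI collective (illustrative only): an all-reduce of one double
 * per rank.  On a collective-offload-capable fabric the data movement and
 * synchronization behind MPI_Allreduce can be handled by the HCA. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank = 0;
    double local, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = (double)rank;                      /* each rank contributes its id */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across ranks = %g\n", global);

    MPI_Finalize();
    return 0;
}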

RDMA over Converged Ethernet – ConnectX-2 utilizing IBTA RoCE technology delivers similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth- and latency-sensitive applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
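Because RoCE is exposed through the same RDMA programming interfaces as InfiniBand, application code is unchanged between the two fabrics. The sketch below is illustrative rather than datasheet material: it uses librdmacm to resolve an RDMA address, and the peer IP 192.168.1.10 and port 7471 are placeholder assumptions.

/* Illustrative librdmacm address resolution; the same code path is used
 * whether the resolved device is an InfiniBand port or a RoCE Ethernet port.
 * Peer address and port below are placeholders. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id = NULL;
    struct rdma_cm_event *ev = NULL;
    struct sockaddr_in dst;

    if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(7471);                     /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);

    /* Kick off asynchronous address resolution, then wait for the event. */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000) == 0 &&
        rdma_get_cm_event(ch, &ev) == 0) {
        if (ev->event == RDMA_CM_EVENT_ADDR_RESOLVED && id->verbs)
            printf("resolved via device %s\n",
                   ibv_get_device_name(id->verbs->device));
        rdma_ack_cm_event(ev);
    }

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}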

TCP/UDP/IP Acceleration – Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10GigE. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, freeing more processor cycles to work on the application.
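On Linux, whether these stateless offloads are currently enabled on an interface can be checked through the standard ethtool ioctl interface. The sketch below is a generic Linux example, not Mellanox-specific code, and the interface name eth0 is an assumption.

/* Generic Linux example: query TCP segmentation and checksum offload state
 * for one interface via the legacy ethtool ioctls.  "eth0" is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_offload(int fd, const char *ifname, __u32 cmd)
{
    struct ethtool_value ev = { .cmd = cmd };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ev;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        return -1;                               /* query failed */
    return (int)ev.data;                         /* 1 = enabled, 0 = disabled */
}

int main(void)
{
    const char *ifname = "eth0";                 /* placeholder interface */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    printf("TSO: %d  TX csum: %d  RX csum: %d\n",
           get_offload(fd, ifname, ETHTOOL_GTSO),
           get_offload(fd, ifname, ETHTOOL_GTXCSUM),
           get_offload(fd, ifname, ETHTOOL_GRXCSUM));

    close(fd);
    return 0;
}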





I/O Virtualization – ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
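On Linux, SR-IOV virtual functions are typically provisioned through the PCI device's sysfs attributes. The sketch below is a generic example under that assumption, not datasheet content; the interface name eth2 and the requested VF count are placeholders, and writing sriov_numvfs requires root privileges and SR-IOV-enabled firmware.

/* Generic Linux SR-IOV sketch: read how many virtual functions the device
 * behind a network interface supports, then request a number of VFs.
 * Interface name and VF count are placeholders. */
#include <stdio.h>

int main(void)
{
    const char *ifname = "eth2";                 /* placeholder interface */
    char path[256];
    int total = 0, want = 4;                     /* placeholder VF count */
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_totalvfs", ifname);
    f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) {
        fprintf(stderr, "SR-IOV not exposed for %s\n", ifname);
        return 1;
    }
    fclose(f);
    printf("%s supports up to %d virtual functions\n", ifname, total);

    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", ifname);
    f = fopen(path, "w");
    if (f) {
        fprintf(f, "%d\n", want < total ? want : total);
        fclose(f);
    }
    return 0;
}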

Storage Accelerated – A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access. T11 compliant encapsulation (FCoIB or FCoE) with full hardware offloads simplifies the storage network while keeping existing Fibre Channel targets.

Software Support
All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XENServer. ConnectX-2 VPI adapters support OpenFabrics-based RDMA protocols and software. Stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. ConnectX-2 VPI adapters are compatible with configuration and management tools from OEMs and operating system vendors.

FEATURE SUMMARY

INFINIBAND
– IBTA Specification 1.2.1 compliant
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– Atomic operations
– 16 million I/O channels
– 256 to 4Kbyte MTU, 1Gbyte messages
– 9 virtual lanes: 8 data + 1 management

ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations

ETHERNET
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3ad Link Aggregation and Failover
– IEEE 802.1Q, .1p VLAN tags and priority
– IEEE P802.1au D2.0 Congestion Notification
– IEEE P802.1az D0.2 ETS
– IEEE P802.1bb D1.0 Priority-based Flow Control
– Jumbo frame support (10KB)
– 128 MAC/VLAN addresses per port

HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Dedicated adapter resources
– Multiple queues per virtual machine
– Enhanced QoS for vNICs
– VMware NetQueue support

ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence

STORAGE SUPPORT
– T11.3 FC-BB-5 FCoE

FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI

COMPATIBILITY

PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 2.5GT/s or 5.0GT/s link rate x8 (20+20Gb/s or 40+40Gb/s bidirectional bandwidth)
– Auto-negotiates to x8, x4, x2, or x1
– Support for MSI/MSI-X mechanisms

CONNECTIVITY
– Interoperable with InfiniBand or 10GigE switches
– microGiGaCN or QSFP connectors
– 20m+ (10Gb/s), 10m+ (20Gb/s) or 7m+ (40Gb/s InfiniBand or 10GigE) of passive copper cable
– External optical media adapter and active cable support
– QSFP to SFP+ connectivity through QSA module

MANAGEMENT AND TOOLS
InfiniBand
– OpenSM
– Interoperable with third-party subnet managers
– Firmware and debug tools (MFT, IBDIAG)
Ethernet
– MIB, MIB-II, MIB-II Extensions, RMON, RMON2
– Configuration and diagnostic tools

OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL), Fedora, and other Linux distributions
– Microsoft Windows Server 2003/2008/CCS 2003
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5/vSphere 4.0

PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA, FCoIB, FCoE
– uDAPL

Adapter Cards

Ordering Part Number    Ports                                         Power (Typ)
MHGH19B-XTR             Single 4X 20Gb/s InfiniBand or 10GigE         6.7W
MHRH19B-XTR             Single 4X QSFP 20Gb/s InfiniBand              6.7W
MHQH19B-XTR             Single 4X QSFP 40Gb/s InfiniBand              7.0W

Ordering Part Number    Ports                                         Power (Typ)
MHGH29B-XTR             Dual 4X 20Gb/s InfiniBand or 10GigE           8.1W (both ports)
MHRH29B-XTR             Dual 4X QSFP 20Gb/s InfiniBand                8.1W (both ports)
MHQH29B-XTR             Dual 4X QSFP 40Gb/s InfiniBand                8.8W (both ports)
MHZH29-XTR              4X QSFP 40Gb/s InfiniBand, SFP+ 10GigE        8.0W (both ports)

2770PB Rev 3.5


© Copyright 2010. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. CORE-Direct, FabricIT, and PhyX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com