Mellanox ConnectX-3 Pro InfiniBand Adapters
ConnectX-3 Pro network adapters add built-in hardware offloads (offloading work from the host CPU) for overlay virtual networks (VXLAN, NVGRE) used in cloud (IaaS) environments. These virtualization offloads let cloud service providers scale their data centers efficiently and broaden the range of services they can offer.
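Before any of these offloads can be used, the adapter has to be visible to the host's RDMA stack. A minimal libibverbs sketch (assuming a Linux host with OFED or rdma-core installed; file and device names here are illustrative) enumerates the installed HCAs:

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    /* Enumerate all RDMA-capable devices known to the verbs stack;
     * a ConnectX-3 family card is driven by mlx4 and typically
     * shows up as "mlx4_0". */
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++)
        printf("device %d: %s\n", i, ibv_get_device_name(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```

Build with `gcc probe.c -o probe -libverbs`; an empty list usually means the driver or OFED stack is not loaded.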
FEATURES SUMMARY

INFINIBAND
– IBTA Specification 1.2.1 compliant
– Hardware-based congestion control
– 16 million I/O channels
– 256 to 4Kbyte MTU, 1Gbyte messages

ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations

ETHERNET
– IEEE Std 802.3ae 10 Gigabit Ethernet
– IEEE Std 802.3ba 40 Gigabit Ethernet
– IEEE Std 802.3ad Link Aggregation
– IEEE Std 802.3az Energy Efficient Ethernet
– IEEE Std 802.1Q, .1p VLAN tags and priority
– IEEE Std 802.1Qau Congestion Notification
– IEEE Std 802.1Qbg
– IEEE P802.1Qaz D0.2 ETS
– IEEE P802.1Qbb D1.0 Priority-based Flow Control
– IEEE 1588v2
– Jumbo frame support (9.6KB)

HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Dedicated adapter resources
– Multiple queues per virtual machine
– Enhanced QoS for vNICs
– VMware NetQueue support

ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence

FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI

PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA
– uDAPL

OVERLAY NETWORKS
– VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks
– NVGRE: Network Virtualization using Generic Routing Encapsulation
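The per-adapter limits behind entries such as "16 million I/O channels" are reported by the firmware and can be read back at run time with ibv_query_device(). A minimal sketch, assuming the first enumerated device is the ConnectX-3 Pro:

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_device_attr attr;
    if (ctx && ibv_query_device(ctx, &attr) == 0) {
        /* Limits reported by the HCA firmware/driver */
        printf("fw_ver: %s\n", attr.fw_ver);
        printf("max_qp: %d\n", attr.max_qp);  /* queue pairs (I/O channels) */
        printf("max_cq: %d\n", attr.max_cq);  /* completion queues */
        printf("max_mr: %d\n", attr.max_mr);  /* memory regions */
        printf("ports:  %d\n", attr.phys_port_cnt);
    }
    if (ctx)
        ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```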
COMPATIBILITY

PCI EXPRESS INTERFACE
– PCIe Base 3.0 compliant, 1.1 and 2.0 compatible
– 2.5, 5.0, or 8.0GT/s link rate x8
– Auto-negotiates to x8, x4, x2, or x1
– Support for MSI/MSI-X mechanisms

CONNECTIVITY
– Interoperable with InfiniBand or 10/40Gb Ethernet switches
– Passive copper cable with ESD protection
– Powered connectors for optical and active cable support
– QSFP to SFP+ connectivity through QSA module

OPERATING SYSTEMS/DISTRIBUTIONS
– RHEL/CentOS 5.x and 6.x, Novell SLES 10 SP4, SLES 11 SP1/SP2, OEL, Fedora 14/15/17, Ubuntu 12.04
– Windows: Windows Server 2008, Windows Server 2008 R2, Windows 7, Windows Server 2012
– FreeBSD
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESXi 4.x and 5.x
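Because each VPI port can come up as either InfiniBand or Ethernet, software can check the active personality per port through ibv_query_port(). A sketch under the same assumption as above (first enumerated device):

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0)
        return 1;
    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_device_attr dev_attr;
    if (!ctx || ibv_query_device(ctx, &dev_attr))
        return 1;
    for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, p, &port))
            continue;
        /* link_layer distinguishes the VPI personality of the port */
        printf("port %d: %s, state %s\n", p,
               port.link_layer == IBV_LINK_LAYER_ETHERNET
                   ? "Ethernet" : "InfiniBand",
               ibv_port_state_str(port.state));
    }
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```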
Ordering Part Number | Ports | Dimensions w/o Brackets
MCX311A-XCCT | Single 10GbE | 14.2cm x 6.9cm
MCX312B-XCCT | Dual 10GbE | 14.2cm x 6.9cm
MCX313A-BCCT | Single 40GbE | 14.2cm x 6.9cm
MCX314A-BCCT | Dual 40GbE | 14.2cm x 6.9cm
MCX353A-FCCT | Single VPI FDR/40GbE | 14.2cm x 6.9cm
MCX354A-FCCT | Dual VPI FDR/40GbE | 14.2cm x 6.9cm