NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX653106A-HDAT |
| Document: | connectx-6-infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer box |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supply by project/batch |
Detail Information
| Products Status: | Stock | Application: | Server |
|---|---|---|---|
| Condition: | New and Original | Type: | Wired |
| Max Speed: | Up to 200 Gb/s | Connector: | QSFP56 |
| Model: | MCX653106A-HDAT | | |
Product Description
MCX653106A-HDAT – Dual-Port 200Gb/s InfiniBand/Ethernet Smart Adapter
Engineered for demanding HPC, AI, and hyperscale cloud infrastructures, the NVIDIA® ConnectX®-6 MCX653106A-HDAT smart adapter card delivers up to 200Gb/s bandwidth per port with In-Network Computing acceleration. Offloading computation from the CPU, it dramatically improves efficiency, scalability, and security — from deep neural network training to real-time data analytics.
As a core component of the NVIDIA Quantum InfiniBand platform, ConnectX-6 enables end-to-end RDMA, hardware-based reliable transport, and advanced congestion control. The MCX653106A-HDAT model features dual-port QSFP56, supporting both InfiniBand and Ethernet (up to 200Gb/s). It integrates block-level XTS-AES encryption, NVMe over Fabrics (NVMe-oF) offloads, and GPUDirect RDMA acceleration — making it the ideal choice for GPU-accelerated clusters, software-defined storage, and virtualized networks.
NVIDIA ConnectX-6 extends Remote Direct Memory Access (RDMA) beyond conventional limits. By implementing hardware offloads for MPI tag matching, out-of-order RDMA supporting Adaptive Routing, and Dynamically Connected Transport (DCT), it ensures efficient scaling across thousands of nodes. The adapter’s In-Network Memory capability enables registration-free RDMA memory access, reducing software overhead. Combined with PCIe Gen 4.0, data moves directly between memory and network, freeing CPU cycles for application logic.
With support for RoCE (RDMA over Converged Ethernet) and overlay network tunneling offloads, ConnectX-6 provides a unified smart fabric for both InfiniBand and Ethernet environments.
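As a concrete starting point for RDMA validation, the sketch below uses the vendor-neutral libibverbs API (rdma-core) to enumerate RDMA devices and query per-port link attributes. It is a minimal illustration rather than NVIDIA tooling; the file name and build line (`gcc port_check.c -o port_check -libverbs`) are assumptions.

```c
/*
 * Minimal sketch: enumerate RDMA devices with libibverbs (rdma-core)
 * and print per-port state and link encodings.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        /* Port numbers are 1-based; this dual-port card exposes 1 and 2. */
        for (int port = 1; port <= 2; port++) {
            struct ibv_port_attr attr;
            if (ibv_query_port(ctx, port, &attr))
                break; /* single-port variants stop after port 1 */
            /* state/speed/width are verbs enum encodings; see ibv_query_port(3) */
            printf("%s port %d: state=%d speed=%u width=%u\n",
                   ibv_get_device_name(devs[i]), port,
                   (int)attr.state,
                   (unsigned)attr.active_speed,
                   (unsigned)attr.active_width);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

Typical deployment scenarios: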
- High Performance Computing (HPC): Large-scale simulations, weather modeling, and research clusters requiring low latency and high message rate.
- AI & Machine Learning: Accelerate distributed training of deep neural networks with GPUDirect RDMA and high-throughput 200Gb/s links.
- NVMe-oF Storage Arrays: Build high-performance NVMe/TCP or NVMe/RDMA storage targets with hardware offloads, reducing CPU load.
- Hyperscale Cloud & NFV: Efficient service chaining, OVS offload (ASAP²), and SR-IOV for up to 1K virtual functions per adapter (see the SR-IOV sketch after this list).
- Big Data Analytics: In-network computing acceleration for streaming engines and distributed databases.
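For the SR-IOV use case above, virtual functions are instantiated through the kernel's standard `sriov_numvfs` sysfs attribute. Below is a minimal sketch, assuming a hypothetical interface name `ens1f0` and that SR-IOV has already been enabled in the adapter firmware (e.g. SRIOV_EN and NUM_OF_VFS via NVIDIA's mlxconfig tool):

```c
/*
 * Minimal sketch: instantiate SR-IOV virtual functions through the
 * kernel's standard sriov_numvfs attribute. "ens1f0" is a placeholder
 * interface name. Run as root.
 */
#include <stdio.h>

static int write_numvfs(const char *path, int n)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%d\n", n);
    return fclose(f); /* an error here usually means the request was rejected */
}

int main(void)
{
    const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs";

    /* The kernel requires writing 0 before changing a nonzero VF count. */
    write_numvfs(path, 0);
    if (write_numvfs(path, 8) == 0) /* the adapter supports up to 1K VFs */
        printf("8 VFs requested via %s\n", path);
    return 0;
}
```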
ConnectX-6 MCX653106A-HDAT is compatible with a wide range of servers, switches, and OS environments. It supports InfiniBand switches up to 200Gb/s (HDR) and Ethernet switches up to 200Gb/s with auto-negotiation. The adapter works across x86, Power, Arm, GPU, and FPGA-based platforms.
| Category | Supported Options / Standards |
|---|---|
| Operating Systems | RHEL, SLES, Ubuntu, other major Linux distributions, Windows Server, FreeBSD, VMware vSphere |
| InfiniBand Spec | IBTA 1.3 compliant, 200/100/50/25/10Gb/s, 8 virtual lanes + VL15 |
| Ethernet Standards | 200/100/50/40/25/10/1GbE, IEEE 802.3bj, 802.3by, PFC, ETS, DCB, 1588v2 |
| CPU offloads & virtualization | SR-IOV (1K VFs), NPAR, DPDK, ASAP² OVS offload, Tunneling (VXLAN, NVGRE, Geneve) |
| Management & Boot | NC-SI, MCTP over SMBus/PCIe, PLDM (DSP0248/DSP0267), UEFI, PXE, iSCSI remote boot |
| Parameter | Detail |
|---|---|
| Product Model | MCX653106A-HDAT |
| Form Factor | PCIe stand-up card; tall bracket mounted, short (low-profile) bracket included as accessory |
| Network Ports | 2x QSFP56 (dual-port) |
| Supported Speeds | InfiniBand: 200/100/50/25/10 Gb/s; Ethernet: 200/100/50/40/25/10/1 Gb/s |
| Host Interface | PCIe Gen 3.0/4.0 x16 (also supports x8, x4, x2, x1) |
| Maximum Bandwidth | 200Gb/s per port |
| Message Rate | Up to 215 million messages per second |
| Latency | Sub-microsecond (RDMA) |
| Hardware Encryption | XTS-AES 256/512-bit block-level encryption, FIPS capable |
| Storage Offloads | NVMe-oF target/initiator, T10-DIF, SRP, iSER, NFS RDMA, SMB Direct |
| Virtualization | SR-IOV (up to 1K VFs), VMware NetQueue, QoS per VM |
| Remote Boot | InfiniBand, Ethernet, iSCSI, UEFI, PXE |
| Dimensions (without bracket) | 167.65mm x 68.90mm |
| RoHS & Compliance | RoHS compliant, ODCC compatible |
| Ordering Part Number (OPN) | Ports / Speed | Host Interface | Key Features |
|---|---|---|---|
| MCX653106A-HDAT | 2x QSFP56, up to 200Gb/s | PCIe 3.0/4.0 x16 | Dual-port, Crypto, standard bracket |
| MCX653105A-HDAT | 1x QSFP56, 200Gb/s | PCIe 3.0/4.0 x16 | Single-port, crypto support |
| MCX653106A-ECAT | 2x QSFP56, 100Gb/s | PCIe 3.0/4.0 x16 | 100Gb/s variant, no crypto |
| MCX653436A-HDAT (OCP 3.0) | 2x QSFP56, 200Gb/s | PCIe x16 | OCP 3.0 small form factor |
| MCX654106A-HCAT | 2x QSFP56, Socket Direct | 2x PCIe 3.0 x16 | Dedicated per-CPU access, Socket Direct |
Note: For variants with cold plate for liquid-cooled Intel Server System D50TNP, please contact Starsurge for customized ordering.
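A quick back-of-envelope calculation shows why the Socket Direct variant exists: a single PCIe Gen 4.0 x16 link carries roughly 252 Gb/s of raw payload bandwidth (16 GT/s per lane with 128b/130b encoding), which is less than the 400 Gb/s aggregate of two 200Gb/s ports. The sketch below reproduces the arithmetic:

```c
/*
 * Back-of-envelope sketch: effective PCIe bandwidth vs. the adapter's
 * aggregate network line rate, illustrating why the Socket Direct
 * variant (2x PCIe x16) exists for sustained dual-port 200Gb/s.
 */
#include <stdio.h>

int main(void)
{
    const double gen4_gtps = 16.0;          /* GT/s per lane, PCIe Gen 4 */
    const double encoding  = 128.0 / 130.0; /* 128b/130b line coding     */
    const int    lanes     = 16;

    double lane_gbps  = gen4_gtps * encoding; /* ~15.75 Gb/s per lane */
    double slot_gbps  = lane_gbps * lanes;    /* ~252 Gb/s per slot   */
    double ports_gbps = 2 * 200.0;            /* 400 Gb/s aggregate   */

    printf("PCIe Gen4 x16 usable: %.1f Gb/s\n", slot_gbps);
    printf("Dual-port HDR demand: %.1f Gb/s\n", ports_gbps);
    printf("Single slot covers %.0f%% of aggregate line rate\n",
           100.0 * slot_gbps / ports_gbps);
    return 0;
}
```

This raw figure also ignores TLP and protocol overhead, so real attainable throughput over a single slot is lower still; Socket Direct splits the load across two x16 links, one per CPU socket.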
Why Buy from Starsurge:
- 100% authentic NVIDIA ConnectX-6 adapters, batch traceable.
- Warehouses and partner hubs serving Americas, EMEA, and APAC.
- Firmware configuration, RDMA tuning, and NVMe-oF validation services.
- Long-term relationships with NVIDIA partners, cost effective.
- Hassle-free replacement and advance cross-shipping available.
- English, Chinese, and custom integration support.
Hong Kong Starsurge Group provides end-to-end support: from compatibility checks, firmware customization, to on-site deployment guidance. We offer dedicated technical account managers for data center upgrades and proof-of-concept (PoC) testing. All adapters are shipped with anti-static packaging and optional installation kits.
✔ 24h engineering support ticketing system
✔ Advance replacement for business-critical environments
✔ Driver & software stack assistance (OFED, WinOF-2, DPDK)
Frequently Asked Questions
Q: What distinguishes the MCX653106A-HDAT within the ConnectX-6 family?
A: This model offers dual-port 200Gb/s capability, full crypto offload (XTS-AES), and support for both InfiniBand and Ethernet on the same adapter. It is optimized for high-density servers requiring maximum throughput.
Q: Does it support GPUDirect RDMA?
A: Yes, it fully supports NVIDIA GPUDirect RDMA (PeerDirect), enabling direct GPU-to-network communication that eliminates unnecessary memory copies and reduces latency for AI training.
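As an illustration of that data path, the hedged sketch below registers GPU device memory directly with the HCA via the standard verbs API. It assumes a CUDA-capable GPU, the nvidia-peermem kernel module, and an MLNX_OFED/rdma-core stack (build assumption: `nvcc gdr_reg.cu -o gdr_reg -libverbs`); it is a minimal demonstration, not a complete transfer example.

```c
/*
 * Minimal sketch: register GPU device memory with the HCA via standard
 * verbs. Requires the nvidia-peermem kernel module to be loaded.
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *gpu_buf = NULL;
    size_t len = 1 << 20; /* 1 MiB of GPU device memory */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* With nvidia-peermem loaded, ibv_reg_mr accepts the device pointer,
     * letting the NIC DMA to/from GPU memory without a host-memory bounce. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr(GPU memory)");
        return 1;
    }
    printf("registered %zu bytes of GPU memory, lkey=0x%x\n", len, mr->lkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```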
Q: Can it be used in a PCIe Gen 3.0 server?
A: Yes. It is backward compatible with PCIe Gen 3.0, Gen 2.0, and Gen 1.1, although maximum throughput may be limited compared to a Gen 4.0 host.
Q: Which cable types are supported?
A: Passive copper cables with ESD protection, active optical cables, and powered connectors. For InfiniBand, HDR-compliant breakouts are supported.
Q: Does it offload NVMe over Fabrics?
A: Yes, ConnectX-6 provides NVMe over Fabrics offloads for both target and initiator, drastically reducing CPU overhead and improving IOPS scalability.
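For reference, the Linux in-kernel NVMe target (nvmet) is usually configured with nvmetcli, but the underlying configfs layout it drives looks like the sketch below. The NQN, backing device /dev/nvme0n1, and address 192.168.1.10 are placeholders; run as root with the nvmet and nvmet-rdma modules loaded.

```c
/*
 * Minimal sketch: export a block device over NVMe/RDMA using the
 * Linux nvmet configfs interface. All names/addresses are placeholders.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void put(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fputs(val, f);
    fclose(f);
}

int main(void)
{
    const char *sub =
        "/sys/kernel/config/nvmet/subsystems/nqn.2024-01.com.example:cx6";
    const char *port = "/sys/kernel/config/nvmet/ports/1";
    char p[512];

    mkdir(sub, 0755);
    snprintf(p, sizeof p, "%s/attr_allow_any_host", sub); put(p, "1");

    snprintf(p, sizeof p, "%s/namespaces/1", sub);        mkdir(p, 0755);
    snprintf(p, sizeof p, "%s/namespaces/1/device_path", sub);
    put(p, "/dev/nvme0n1");
    snprintf(p, sizeof p, "%s/namespaces/1/enable", sub); put(p, "1");

    mkdir(port, 0755);
    snprintf(p, sizeof p, "%s/addr_trtype", port);  put(p, "rdma");
    snprintf(p, sizeof p, "%s/addr_adrfam", port);  put(p, "ipv4");
    snprintf(p, sizeof p, "%s/addr_traddr", port);  put(p, "192.168.1.10");
    snprintf(p, sizeof p, "%s/addr_trsvcid", port); put(p, "4420");

    /* Exposing the subsystem on the port is done with a symlink. */
    snprintf(p, sizeof p, "%s/subsystems/nqn.2024-01.com.example:cx6", port);
    if (symlink(sub, p))
        perror("symlink");

    puts("NVMe/RDMA target configured.");
    return 0;
}
```

After this runs, a remote initiator can discover the subsystem with the standard nvme-cli tools (nvme discover -t rdma -a 192.168.1.10 -s 4420).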
- Confirm server mechanical clearance: standard height PCIe bracket included; low-profile bracket also provided as accessory.
- For liquid-cooled platforms (Intel D50TNP), verify cold plate option availability before ordering.
- Please confirm driver compatibility with your Linux distribution version; NVIDIA OFED (MLNX_OFED) is recommended. A quick firmware check sketch follows these notes.
- Not publicly specified: Exact power consumption per port at full 200Gb/s load — refer to NVIDIA user manual or contact Starsurge for typical values (approx 15-18W total).
- FIPS certification is hardware-capable but may require specific firmware — notify sales team if FIPS compliance is mandatory.
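As referenced in the notes above, a quick way to cross-check installed firmware against the MLNX_OFED release notes is to read the adapter's sysfs attributes; mlx5_0 below is a placeholder RDMA device name.

```c
/*
 * Minimal sketch: read adapter firmware and board identifiers from
 * sysfs for a pre-deployment compatibility check.
 */
#include <stdio.h>

static void show(const char *path)
{
    char buf[128];
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return; }
    if (fgets(buf, sizeof buf, f))
        printf("%s: %s", path, buf);
    fclose(f);
}

int main(void)
{
    show("/sys/class/infiniband/mlx5_0/fw_ver");
    show("/sys/class/infiniband/mlx5_0/board_id");
    show("/sys/class/infiniband/mlx5_0/hca_type");
    return 0;
}
```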
Founded in 2008, Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. With a global customer base spanning government, healthcare, manufacturing, education, finance, and enterprise sectors, Starsurge delivers high-performance networking equipment including switches, NICs, wireless solutions, and tailored software. The company combines experienced sales and technical teams to support complex infrastructure projects, IoT deployments, and network management systems. Customer-first approach, reliable quality, and responsive global delivery make Starsurge a trusted partner for next-generation data centers.
| Fact | Value |
|---|---|
| Max Throughput | 200Gb/s per port (aggregate 400Gb/s theoretical) |
| On-chip Acceleration | Tag matching, rendezvous offload, collective offloads, burst buffer |
| Virtual Functions | Up to 1024 VFs per adapter |
| Encryption Standard | XTS-AES 256/512-bit, offloaded from CPU |
| Adaptive Routing | Out-of-order RDMA support |
| Server / Platform | CPU Architecture | Tested OS |
|---|---|---|
| Dell PowerEdge R750 | Intel Xeon Scalable (3rd Gen) | RHEL 8.6, Ubuntu 22.04 |
| HPE ProLiant DL380 Gen10 Plus | Intel Xeon | SLES 15 SP4, VMware ESXi 7.0 |
| Supermicro GPU SuperServer | AMD EPYC 7003 | Ubuntu 20.04, NVIDIA HPC SDK |
| Lenovo ThinkSystem SR650 | Intel Xeon | Windows Server 2022 |
| NVIDIA DGX / HGX base | NVIDIA Arm / x86 | Ubuntu with MLNX_OFED |
- ☑ PCIe slot type: x16 mechanical (electrical x16/x8/x4 supported)
- ☑ Required port speed: 200Gb/s or lower; cable type (QSFP56 passive/active)
- ☑ OS driver availability: Check MLNX_OFED or WinOF-2 version
- ☑ Encryption requirement: FIPS mode or standard XTS-AES
- ☑ Cooling and bracket: standard or cold plate option needed?
- ☑ Quantity and lead time: Stock confirmation with Starsurge sales
Related Products:
- QM8700 / QM9700 series switches, 40 ports HDR 200Gb/s
- Ethernet-optimized adapters for RoCE, 200Gb/s
- InfiniBand/Ethernet adapters with programmable data path
- DAC, AOC, and active fiber cables for 200G
- ▸ NVIDIA ConnectX-6 User Manual (Firmware & Configuration)
- ▸ RDMA over Converged Ethernet (RoCE) Deployment Guide
- ▸ NVMe-oF with ConnectX-6: Best Practices
- ▸ Performance Tuning for MPI and GPUDirect
- ▸ Block-level Encryption Setup for FIPS environments
* Specifications and features are based on published NVIDIA datasheet and may be updated. For exact technical details, please refer to official NVIDIA documentation or contact Starsurge pre-sales engineering.