MCX653106A-HDAT-SP Mellanox 200GbE Network Card ConnectX-6 VPI Adapter
Product Details:
| Brand Name | Mellanox |
|---|---|
| Model Number | MCX653106A-HDAT-SP |
| Document | connectx-6-infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity | 1 pcs |
|---|---|
| Price | Negotiable |
| Packaging Details | Outer box |
| Delivery Time | Based on inventory |
| Payment Terms | T/T |
| Supply Ability | Supplied by project/batch |
Detail Information
| Products Status | Stock | Application | Server |
|---|---|---|---|
| Condition | New and Original | Type | Wired |
| Max Speed | Up to 200 Gb/s | Ethernet Connector | QSFP56 |
| Model | MCX653106A-HDAT | Name | MCX653106A-HDAT-SP Mellanox Network Card ConnectX-6 VPI Adapter |
Product Description
NVIDIA ConnectX-6 InfiniBand Adapter MCX653106A-HDAT
The NVIDIA ConnectX-6 InfiniBand Smart Adapter delivers breakthrough performance for high-performance computing, AI, and cloud data centers. With up to 200 Gb/s of bandwidth per port, ultra-low latency, and advanced in-network computing offloads, this network card accelerates data-intensive workloads while reducing CPU overhead.
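As a back-of-envelope illustration of what 200 Gb/s per port means in practice, the sketch below estimates raw line-rate transfer times; it deliberately ignores protocol overhead, so real transfers will take somewhat longer.

```python
# Rough line-rate estimate: time to move a dataset over a 200 Gb/s link.
# Protocol overhead is ignored, so this is a lower bound, not a benchmark.

def transfer_seconds(data_bytes: float, link_gbps: float = 200.0) -> float:
    """Seconds to move data_bytes at the given link rate in gigabits/s."""
    bits = data_bytes * 8
    return bits / (link_gbps * 1e9)

one_tb = 1e12  # 1 TB (decimal)
print(f"1 TB over one 200 Gb/s port:   {transfer_seconds(one_tb):.0f} s")
print(f"1 TB across both ports (400G): {transfer_seconds(one_tb, 400.0):.0f} s")
```

At pure line rate, a terabyte crosses a single port in about 40 seconds; using both ports halves that.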
Key Features:
- Dual-port 200Gb/s InfiniBand/Ethernet connectivity
- Hardware offloads for RDMA, NVMe over Fabrics, and encryption
- In-Network Computing acceleration for AI and HPC workloads
- Block-level AES-XTS 256/512 encryption with FIPS compliance
- Support for PCIe Gen 4.0 and NVIDIA Socket Direct technology
Characteristics
- Up to 200 Gb/s per port bandwidth
- Up to 215 million messages per second
- Hardware-based encryption and T10-DIF offload
- Support for SR-IOV and ASAP² virtualization
- Low-profile PCIe and OCP 3.0 form factors
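The message-rate figure above can be turned into a per-message time budget with simple arithmetic; this is throughput arithmetic only, not a latency measurement.

```python
# Per-message time budget implied by a 215 Mmsg/s message rate.
# Average spacing between messages at full rate, not end-to-end latency.

def ns_per_message(msgs_per_second: float) -> float:
    """Average nanoseconds available per message at a given message rate."""
    return 1e9 / msgs_per_second

budget = ns_per_message(215e6)
print(f"~{budget:.2f} ns per message at 215 Mmsg/s")
```

At 215 million messages per second, the adapter handles a message roughly every 4.65 nanoseconds on average.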
Technology
The ConnectX-6 NIC leverages cutting-edge technologies including RDMA over Converged Ethernet (RoCE), InfiniBand Verbs, and NVIDIA GPUDirect® RDMA. It also supports NVMe over Fabrics (NVMe-oF), tunneling protocols (VXLAN, Geneve), and hardware-offloaded encryption compliant with the IEEE AES-XTS standard.
Working Principle
The ConnectX-6 adapter uses dedicated processing engines to offload network, storage, and security operations from the host CPU. Through In-Network Computing, it performs data aggregation and reduction within the switch fabric, minimizing data movement and accelerating distributed applications.
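To make the aggregation idea concrete, the toy sketch below simulates the kind of in-fabric reduction (an allreduce-style sum) that in-network computing performs as data moves through the switch tree. The tree shape and values here are invented purely for illustration, not a model of any specific fabric.

```python
# Toy model of in-fabric reduction: instead of every node shipping its full
# data to every other node, partial sums are combined level by level as data
# moves up a tree, so each upstream link carries one reduced message.

def tree_reduce(values, fanout=2):
    """Sum `values` level by level, mimicking in-switch aggregation."""
    level = list(values)
    while len(level) > 1:
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    return level[0]

grads = [1, 2, 3, 4, 5, 6, 7, 8]  # e.g. per-node partial results
print(tree_reduce(grads))  # same result as sum(grads)
```

The point of doing this inside the fabric is that each link carries a single already-reduced value instead of all the raw inputs, which is exactly the data-movement saving the paragraph above describes.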
Application & Usage
This network card is ideal for:
- High-Performance Computing (HPC) clusters
- AI and Deep Learning training infrastructures
- Hyperscale cloud and enterprise data centers
- Storage disaggregation with NVMe-oF
- Virtualized and containerized environments
Specification Table
| Attribute | Specification |
|---|---|
| Model | MCX653106A-HDAT |
| Interface | PCIe Gen 4.0 x16 |
| Data Rate | Up to 200 Gb/s per port |
| Ports | Dual QSFP56 |
| Protocols | InfiniBand, Ethernet |
| Encryption | AES-XTS 256/512 |
| Form Factor | Low-profile PCIe |
| OS Support | Linux, Windows, VMware, FreeBSD |
Advantages & Selling Points
- Superior throughput and lower latency compared to previous-generation NICs
- Hardware offloading reduces CPU load and increases system efficiency
- Enhanced security with inline encryption and FIPS compliance
- Seamless integration with NVIDIA GPUs and NVMe storage
- Robust support for virtualization and cloud-native environments
Service & Support
We offer comprehensive technical support, 24/7 customer service, and worldwide shipping. All products come with a standard warranty. Custom configuration and bulk order discounts are available.
FAQ
Q: What is the difference between ConnectX-6 and earlier versions?
A: Compared with earlier generations, the ConnectX-6 offers higher bandwidth (up to 200 Gb/s), improved offload capabilities, and enhanced encryption features.
Q: Is this network card compatible with my existing switch?
A: Yes. It is backward compatible with InfiniBand and Ethernet switches running at lower data rates.
Q: Does it support virtualization?
A: Yes, it supports SR-IOV with up to 1,000 virtual functions per port.
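On Linux, SR-IOV virtual functions for a PCI device are typically enabled through the kernel's standard sysfs interface. A minimal sketch, assuming a Linux host with the adapter's driver loaded; the PCI address `0000:04:00.0` is a placeholder for your device, and the write itself requires root:

```python
# Sketch of enabling SR-IOV VFs via the standard Linux sysfs interface.
# The PCI address used below is a placeholder; run as root with the
# adapter's driver loaded.
from pathlib import Path

def sriov_numvfs_path(pci_addr: str) -> Path:
    """Build the sysfs path that controls the VF count for a PCI device."""
    return Path("/sys/bus/pci/devices") / pci_addr / "sriov_numvfs"

def enable_vfs(pci_addr: str, count: int) -> None:
    path = sriov_numvfs_path(pci_addr)
    path.write_text("0")          # the kernel requires resetting to 0 first
    path.write_text(str(count))   # then set the desired VF count

# Example (as root, with your adapter's real PCI address):
#   enable_vfs("0000:04:00.0", 8)
```

Each VF then appears to the host as its own PCI function that can be passed through to a VM or container.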
Precautions
- Ensure proper cooling in high-density deployments
- Verify PCIe slot generation compatibility for optimal performance
- Use qualified optical or DAC cables for best results
- Update to latest firmware and drivers from NVIDIA
Company Introduction

With over a decade of industry experience, our company operates a large-scale facility supported by a strong technical team. We have built a vast customer base and offer competitive pricing on high-quality networking products. Our portfolio includes top brands such as Mellanox, Ruckus, Aruba, and Extreme. We maintain a $10 million inventory of network switches, NIC cards, wireless access points, controllers, and cables, enabling us to supply products in large quantities with reliable delivery and round-the-clock support.