NVIDIA MCX653106A-ECAT ConnectX-6 100Gb/s Dual-Port InfiniBand & Ethernet Smart Network Interface Card
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX653106A-ECAT |
| Document: | connectx-6-infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pcs |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer box |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supplied per project/batch |
Detail Information
| Products Status: | Stock | Application: | Server |
|---|---|---|---|
| Interface Type: | InfiniBand | Ports: | Dual |
| Max Speed: | 100Gb/s | Type: | Wired |
| Condition: | New and Original | Warranty Time: | 1 Year |
| Model: | MCX653106A-ECAT | Name: | MCX653106A-ECAT Mellanox 100Gb NIC ConnectX-6 VPI HDR100 EDR IB Dual Port |
| Keyword: | Mellanox Network Card | | |
| Highlight: | Mellanox ConnectX-6 network card, 100Gb/s dual-port NIC, InfiniBand Ethernet smart adapter | | |
Product Description
1. Product Overview
The NVIDIA® ConnectX®-6 MCX653106A-ECAT is a versatile dual-port smart network card designed for high-performance data center deployments. Supporting InfiniBand HDR100/EDR (up to 100Gb/s per port) and 100Gb Ethernet connectivity, this adapter delivers exceptional bandwidth, ultra-low latency, and comprehensive hardware offloads. Ideal for accelerating HPC, AI, cloud, and storage workloads, this advanced NIC balances performance with flexibility, making it a cornerstone of modern, efficient data center fabrics.
2. Key Features & Specifications
- High-Speed Connectivity: Two QSFP56 ports, each supporting 100Gb/s InfiniBand (HDR100/EDR) or 100Gb Ethernet.
- Protocol Flexibility: Supports both InfiniBand and Ethernet protocols on the same hardware.
- Advanced Offload Engine: Hardware offloads for RDMA, TCP/IP, NVMe over Fabrics (NVMe-oF), and storage protocols.
- High Efficiency: Delivers high message rates and extremely low latency, reducing application runtimes.
- Robust Virtualization: SR-IOV support for up to 1000 virtual functions, enabling secure multi-tenant environments (a minimal sketch of enabling VFs follows this list).
- Modern Host Interface: PCIe 3.0/4.0 x16 host interface for maximum bandwidth compatibility.
- Enhanced Software Ecosystem: Broad driver support for Linux, Windows Server, and VMware.
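On Linux, SR-IOV virtual functions for adapters like this one are typically enabled through the kernel's standard sysfs interface. Below is a minimal sketch, assuming a hypothetical interface name `enp1s0f0`, root privileges, and the mlx5_core driver; it illustrates the mechanism and is not vendor code.

```c
/* Minimal sketch: enable SR-IOV virtual functions via the standard Linux
 * sysfs interface. "enp1s0f0" is a placeholder interface name; substitute
 * the actual name of the ConnectX port on your system. Requires root and
 * SR-IOV enabled in BIOS/firmware. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/class/net/enp1s0f0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);   /* fails without root or without SR-IOV support */
        return 1;
    }
    fprintf(f, "8");    /* request 8 virtual functions on this port */
    fclose(f);
    return 0;
}
```

Each created VF appears as its own PCIe function that can be passed through to a virtual machine for direct hardware access.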
3. Core Technology
This network card leverages the proven ConnectX-6 architecture and key industry standards:
- Dual-Protocol ASIC: Single-chip design supporting native InfiniBand (IBTA 1.3) and high-performance Ethernet with RoCE (see the port-query sketch after this list).
- PCIe Gen4 Ready: Maximizes host-to-adapter throughput, eliminating bottlenecks for 100Gb/s flows.
- Hardware Offloads: RDMA, checksum, TSO/LSO, and NVMe-oF processing are handled on the NIC, freeing CPU resources.
- ASAP² Technology: Accelerated Switching and Packet Processing provides hardware acceleration for Open vSwitch (OVS), improving virtualized network performance.
- Quality of Service (QoS): Advanced traffic management with granular control for predictable performance.
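The dual-protocol design is visible from software: each port reports its active link layer through the libibverbs API. The following is a minimal sketch, assuming the libibverbs development headers are installed; it lists each RDMA device and reports whether its ports are currently running InfiniBand or Ethernet.

```c
/* Minimal sketch: list RDMA devices and show each port's active protocol.
 * Build with: gcc query_ports.c -o query_ports -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx) continue;
        struct ibv_device_attr dev;
        ibv_query_device(ctx, &dev);
        printf("%s: firmware %s, %d port(s)\n",
               ibv_get_device_name(list[i]), dev.fw_ver, dev.phys_port_cnt);
        for (int p = 1; p <= dev.phys_port_cnt; p++) {
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, p, &port)) continue;
            printf("  port %d: %s, state %s\n", p,
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet" : "InfiniBand",
                   ibv_port_state_str(port.state));
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```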
4. How It Works: The Smart Offload Path
The MCX653106A-ECAT transforms server networking by handling complex tasks in dedicated hardware:
- Data Direct Access: Utilizes RDMA to enable direct data placement between application memory and the network, bypassing CPU and OS overhead (illustrated in the sketch after this list).
- Protocol Processing: TCP/IP stack operations and storage protocol headers are processed on the adapter at line rate.
- Virtualization Bypass: SR-IOV allows virtual machines to directly access the network card hardware, delivering near-native performance.
- Storage Acceleration: NVMe-oF initiator and target offloads streamline access to remote NVMe storage arrays.
This intelligent offload architecture makes the NIC an active performance accelerator, not just a connectivity component.
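The first step on that path can be shown concretely. The sketch below, a minimal example assuming the libibverbs development headers are installed, registers a page-aligned application buffer with the adapter; registration pins the memory and returns the keys that local and remote peers use to address it in RDMA operations.

```c
/* Minimal sketch: register application memory for RDMA so the NIC can DMA
 * to/from it directly. Build with: gcc reg_mr.c -o reg_mr -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);      /* protection domain */

    size_t len = 4096;
    void *buf = aligned_alloc(4096, len);       /* page-aligned buffer */

    /* Pin the pages and obtain keys; a remote peer uses rkey to target
     * this memory in RDMA read/write operations without involving the
     * local CPU on the data path. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```

A complete transfer additionally needs a queue pair and an exchange of keys and addresses with the peer; the registration step above is what lets the adapter move data without copies through the kernel.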
5. Target Applications & Use Cases
- High-Performance Computing (HPC): Connects compute nodes in clusters running MPI-based scientific and engineering simulations.
- AI/ML Training Infrastructure: Provides high-bandwidth links between GPU servers, accelerating data-parallel training jobs.
- Cloud & Hyperscale Data Centers: Ideal for storage disaggregation (NVMe-oF), virtualized network functions, and data analytics platforms.
- Enterprise Storage & Databases: Enhances performance for SAN (SRP, iSER) and database clusters requiring low-latency RDMA.
- Consolidated Workload Servers: Perfect for servers running mixed workloads due to its dual-protocol support and versatile offloads.
6. Technical Specifications
| Specification | Detail |
|---|---|
| Model | MCX653106A-ECAT |
| Ports | 2x QSFP56 |
| Max Speed | 100Gb/s per port (InfiniBand HDR100/EDR or 100GbE) |
| Host Interface | PCIe 3.0/4.0 x16 |
| Virtualization | SR-IOV, up to 1000 virtual functions |
| Driver Support | Linux (NVIDIA OFED), Windows Server (WinOF-2), VMware |
| Condition | New and original; 1-year warranty |
7. Competitive Advantages
Why select the MCX653106A-ECAT for your infrastructure?
- Dual-Protocol Versatility: One network card supports both InfiniBand and Ethernet, offering deployment flexibility and investment protection.
- Superior CPU Efficiency: Comprehensive hardware offloads dramatically reduce CPU utilization for networking tasks, freeing cores for applications.
- Proven ConnectX-6 Reliability: Built on NVIDIA's industry-leading adapter technology with extensive field validation.
- Optimized for Modern Workloads: Native support for NVMe-oF and RDMA accelerates storage and HPC applications.
- Future-Ready Platform: PCIe 4.0 compatibility ensures readiness for next-generation server platforms.
8. Support & Services
We stand behind every MCX653106A-ECAT adapter with comprehensive support services:
- Full Manufacturer Warranty: Genuine NVIDIA product with standard warranty coverage.
- Guaranteed Supply: We maintain stable inventory to ensure reliable and prompt delivery.
- Expert Technical Support: Our team provides integration guidance, compatibility verification, and performance tuning assistance.
- Long-Term Partnership: We offer ongoing support for driver/firmware updates and best practice recommendations for your specific use case.
9. Frequently Asked Questions
Q: Can I use this card for both InfiniBand and Ethernet networks simultaneously?
A: Each port operates in one protocol mode at a time, as configured through the driver/firmware. A single port cannot run InfiniBand and Ethernet simultaneously, but the two ports can be configured independently, offering flexible deployment across different environments.
Q: What type of cables are needed for 100Gb/s connectivity?
A: The card uses QSFP56 cages. For 100Gb/s InfiniBand, use HDR100-rated (QSFP56) or EDR-rated (QSFP28, backward compatible) cables; for 100GbE, use standard 100GbE QSFP28 cabling. The specific cable type (passive DAC, active AOC, or optical transceivers) depends on distance and switch compatibility.
Q: Is this adapter compatible with NVIDIA GPU Direct technology?
A: Yes, it fully supports NVIDIA GPUDirect® RDMA (PeerDirect), enabling direct data transfer between GPU memory and the network, which is critical for AI and HPC workloads.
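As an illustration of that direct path, the sketch below registers a CUDA device buffer with the adapter so the NIC can DMA straight to and from GPU memory. This is a hedged sketch, not a validated implementation: it assumes the nvidia-peermem (or legacy nv_peer_mem) kernel module is loaded and links against both libibverbs and the CUDA runtime.

```c
/* Minimal sketch: register GPU memory for GPUDirect RDMA. Host C code;
 * build with: gcc gpu_mr.c -o gpu_mr -libverbs -lcudart
 * Assumes the nvidia-peermem kernel module is loaded. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *gpu_buf;
    size_t len = 1 << 20;                       /* 1 MiB on the GPU */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* With GPUDirect RDMA, the device pointer is registered directly;
     * the NIC then DMAs to/from GPU memory with no host bounce buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }
    printf("GPU buffer registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```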
Q: Does it support legacy Ethernet speeds like 10GbE or 25GbE?
A: Yes, through backward compatibility and auto-negotiation, the ports support lower Ethernet speeds including 10, 25, 40, and 50 GbE when using appropriate cables or breakouts.
10. Installation & Operational Notes
- System Compatibility: Ensure your server has a PCIe x16 slot (Gen3 or Gen4) and meets the power and cooling requirements of a high-performance NIC.
- Software Requirements: Install the latest NVIDIA OFED or WinOF-2 drivers from the official NVIDIA support portal before use.
- Thermal Considerations: Provide adequate chassis airflow across the card's heatsink for optimal performance and longevity.
- ESD Protection: Always handle the adapter using proper anti-static measures. Install with the system powered off.
- Firmware Updates: Periodically check for and apply firmware updates to ensure stability, security, and access to the latest features.
11. About Our Company
With over a decade of experience in the enterprise networking sector, we have established ourselves as a trusted global supplier. Our operations are supported by substantial manufacturing facilities and a skilled technical team, enabling us to serve a diverse and growing customer base worldwide.
As an authorized distributor for leading brands including NVIDIA Mellanox, Ruckus, Aruba, and Extreme, we provide authentic, brand-new networking hardware. Our extensive product range encompasses switches, network cards, wireless solutions, and cabling systems.