NVIDIA ConnectX-7 MCX75310AAS-NEAT Dual-Port 400Gb/s InfiniBand & Ethernet Smart Adapter – PCIe 5.0 x16, NDR
Product Details:
| Brand Name: | Mellanox |
| Model Number: | MCX75310AAS-NEAT(900-9X766-003N-SQ0) |
| Document: | Connectx-7 infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pcs |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer box |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | By project/batch |
Detail Information
| Model NO.: | MCX75310AAS-NEAT (900-9X766-003N-SQ0) | Ports: | Dual-Port |
|---|---|---|---|
| Technology: | InfiniBand | Interface Type: | OSFP |
| Specification: | 16.7 cm x 6.9 cm | Origin: | India / Israel / China |
| Transmission Rate: | 400Gb/s | Host Interface: | PCIe Gen 5.0 x16 |
| Highlight: | NVIDIA ConnectX-7 400Gb/s network adapter, dual-port InfiniBand/Ethernet adapter, PCIe 5.0 x16 smart adapter | | |
Product Description
Next-generation 400Gb/s dual-port adapter bridging InfiniBand NDR and 400GbE networks—featuring PCIe 5.0 x16, inline hardware security (IPsec/TLS/MACsec), NVIDIA In-Network Computing engines, and NVMe-oF offloads for AI, HPC, and hyperscale data centers.
- Dual QSFP112 ports supporting 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE
- PCIe Gen 5.0 x16 (up to 32 lanes supported) | Ultra-low latency and 215+ million messages per second
- Hardware offloads: NVMe-oF target/initiator, XTS-AES 256/512-bit encryption, MPI tag matching
- Inline security engines: IPsec, TLS 1.3, MACsec with AES-GCM 128/256-bit
- PCIe half-height half-length (HHHL) form factor, RoHS compliant, advanced timing (PTP/SyncE)
- 400Gb/s Throughput: Dual ports operating at up to 400Gb/s InfiniBand (NDR) or 400GbE with full bidirectional bandwidth.
- In-Network Computing: Offloads collective operations (MPI, NCCL, SHMEM) using NVIDIA SHARP technology.
- Inline Security: Hardware encryption/decryption for IPsec, TLS 1.3, and MACsec at line rate; secure boot with root-of-trust.
- NVMe-oF Offloads: Target and initiator offloads for NVMe over Fabrics (including NVMe/TCP), reducing CPU utilization.
- Precision Timing: IEEE 1588v2 PTP with 12ns accuracy, SyncE, and configurable PPS in/out.
The MCX75310AAS-NEAT integrates NVIDIA In-Network Computing engines (SHARP), RDMA (IBTA 1.5), RoCE, and NVMe-oF. It supports PCIe Gen 5.0 (x16, up to 32 lanes), PAM4 (100G) and NRZ (10G/25G) SerDes, and advanced features like Dynamically Connected Transport (DCT), On-Demand Paging (ODP), and Adaptive Routing. Overlay offloads for VXLAN, GENEVE, NVGRE are hardware-accelerated. Compliant with IEEE 802.3ck, 802.3bj, and InfiniBand Trade Association specifications.
ConnectX-7 offloads communication, storage, and security tasks from the host CPU to the adapter hardware. For MPI collectives, the adapter processes data in transit using SHARP, reducing endpoint traffic. For storage, NVMe-oF commands are processed directly on the adapter, freeing CPU cores. Inline encryption engines (IPsec/TLS/MACsec) encrypt/decrypt packets at wire speed without CPU involvement. The result is lower latency, higher message rate, and improved application scalability—critical for 400G environments.
- AI Training Clusters: GPU-to-GPU communication with GPUDirect RDMA and NCCL collectives.
- Exascale HPC: MPI-based simulations requiring ultra-low latency and high message rate.
- NVMe-oF Storage: Target/initiator offload for high-performance NVMe storage access.
- Secure Cloud Data Centers: Inline IPsec/TLS for multi-tenant security without CPU overhead.
- Financial Services: Precision PTP timing for high-frequency trading and timestamping.
| Model | Ports & Speed | Host Interface | Form Factor | Security Offloads | Protocols | OPN |
|---|---|---|---|---|---|---|
| ConnectX-7 | 2x QSFP112 (400Gb/s NDR/400GbE) | PCIe 5.0 x16 (32 lanes) | PCIe HHHL | IPsec, TLS 1.3, MACsec, AES-XTS | InfiniBand, Ethernet, NVMe-oF | MCX75310AAS-NEAT |
| ConnectX-7 | 1x QSFP112 (400Gb/s) | PCIe 5.0 x16 | PCIe FHHL | IPsec/TLS/MACsec | IB/Eth | MCX75510AAS-NEAT |
| ConnectX-7 | 2x QSFP112 (200Gb/s) | PCIe 5.0 x16 | OCP 3.0 | IPsec/TLS/MACsec | IB/Eth | MCX75310AAS-NEAT (OCP var.) |
Note: MCX75310AAS-NEAT supports 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE. Dimensions: 167.65mm x 68.90mm (HHHL). Includes tall and low-profile brackets. Power consumption < 25W typical.
- vs. ConnectX-6: Double the bandwidth (400Gb/s vs. 200Gb/s), PCIe 5.0, inline IPsec/TLS/MACsec, and advanced PTP with 12ns accuracy.
- vs. Competitor NICs: True hardware offload for NVMe-oF, MPI collectives, and full security suite—all at line rate.
- Integrated Security: Eliminates need for external encryption appliances; FIPS compliance ready.
- Multi-Host Support: NVIDIA Multi-Host technology enables connection of up to 4 hosts to a single adapter.
We offer 24/7 technical consultation, RMA services, and integration support for ConnectX-7 adapters. Each card is backed by a 1-year warranty (extendable). Our team provides driver validation for major Linux distributions (RHEL, Ubuntu), Windows Server, and VMware. Pre-sales configuration assistance for NDR InfiniBand fabric design is available. All cards ship from our 10M+ USD inventory with same-day dispatch.
Q: Is the MCX75310AAS-NEAT compatible with NVIDIA Quantum-2 switches?
A: Yes, it is fully interoperable with NVIDIA Quantum-2 QM9700/QM9790 switches using NDR mode at 400Gb/s.
Q: Can the adapter be used in both InfiniBand and Ethernet networks?
A: Yes, it supports both InfiniBand and Ethernet protocols. The firmware auto-detects the switch type and configures the appropriate mode.
Q: Does ConnectX-7 support RoCE?
A: Yes, ConnectX-7 fully supports RoCE, providing low-latency RDMA in Ethernet environments.
Q: What security features are built in?
A: Inline hardware engines for IPsec (AES-GCM 128/256), TLS 1.3, and MACsec, plus block-level XTS-AES 256/512-bit encryption. It also features secure boot with a hardware root-of-trust.
Q: Can it be installed in an older PCIe slot?
A: Yes, it is backward compatible with PCIe Gen 4.0 and Gen 3.0 slots, though bandwidth will be limited to the slot's capability.
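Beyond firmware auto-detection, the port protocol can also be set manually with NVIDIA's mlxconfig utility (part of the MFT tools). A minimal sketch follows; the device path is an assumption and varies per host (list yours with `mst status` after running `mst start`):

```shell
# Assumed device path for a ConnectX-7; confirm with `mst status`.
DEV=/dev/mst/mt4129_pciconf0

# Query the current link type of the ports.
mlxconfig -d "$DEV" query | grep LINK_TYPE

# Force port 1 to InfiniBand (1) or Ethernet (2); a host reboot or
# power cycle is required for the change to take effect.
mlxconfig -d "$DEV" set LINK_TYPE_P1=2
```

On dual-port cards the second port is configured the same way via `LINK_TYPE_P2`.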
- PCIe Slot Requirement: For full 400Gb/s performance, install in a PCIe Gen 5.0 x16 slot. A Gen 4.0 x16 slot limits aggregate PCIe throughput to roughly 250Gb/s, below dual-port 400Gb/s line rate.
- Cooling: Ensure adequate airflow in server chassis; passive cooling requires minimum 300 LFM at 400G operation.
- Cabling: Use QSFP112 passive/active copper or optical modules rated for 400Gb/s (NDR).
- Driver Support: Use latest NVIDIA MLNX_OFED for Linux or WinOF-2 for Windows.
- Operating Temperature: 0°C to 70°C; store between -40°C and 85°C.
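As a back-of-the-envelope check on the slot requirement above, usable PCIe bandwidth per direction can be estimated from lane count, transfer rate, and 128b/130b line encoding. This is a rough sketch that ignores TLP/DLLP protocol overhead, not a vendor-measured figure:

```python
# Per-lane raw rate in GT/s and line-encoding efficiency per PCIe generation.
PCIE_GEN = {
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def usable_gbps(gen: str, lanes: int = 16) -> float:
    """Approximate usable bandwidth (Gb/s, per direction) for a PCIe link."""
    rate, eff = PCIE_GEN[gen]
    return rate * lanes * eff

for gen in ("3.0", "4.0", "5.0"):
    print(f"Gen {gen} x16: ~{usable_gbps(gen):.0f} Gb/s per direction")
```

The estimate shows why a Gen 4.0 x16 slot (~252 Gb/s) cannot feed two 400Gb/s ports, while Gen 5.0 x16 (~504 Gb/s) can sustain one port at line rate with headroom for the second.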
With over a decade of experience, we operate a large-scale factory backed by a strong technical team. Our extensive customer base and domain expertise enable us to offer competitive pricing without compromising on quality. As authorized distributors for Mellanox, Ruckus, Aruba, and Extreme, we stock original network switches, network card (NIC) solutions, wireless access points, controllers, and cabling. We maintain a 10 million USD inventory to ensure rapid fulfillment across diverse product lines. Every shipment is verified for accuracy, and we provide 24/7 consultation and technical support. Our professional sales and technical teams have earned a high reputation in global markets; partner with us for reliable infrastructure solutions.