NVIDIA ConnectX-7 MCX75310AAS-NEAT 400G InfiniBand/Ethernet SmartNIC – High-Performance Network Adapter
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX75310AAS-NEAT (900-9X766-003N-SQ0) |
| Document: | Connectx-7 infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer box |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supplied by project/batch |
Detail Information:
| Model NO.: | MCX75310AAS-NEAT (900-9X766-003N-SQ0) | Ports: | Single-Port |
|---|---|---|---|
| Technology: | InfiniBand | Interface Type: | OSFP |
| Specification: | 16.7 cm x 6.9 cm | Origin: | India / Israel / China |
| Transmission Rate: | 400Gb/s (NDR / 400GbE) | Host Interface: | PCIe Gen5 x16 |
| Highlight: | NVIDIA ConnectX-7 400G network adapter, Mellanox InfiniBand SmartNIC, high-performance Ethernet network card | | |
Product Description
NVIDIA ConnectX-7 MCX75310AAS-NEAT 400G InfiniBand/Ethernet Network Interface Card
Product Alias: mq9700 | Core Search Terms: network card, NIC card, InfiniBand adapter, Ethernet SmartNIC, high-speed network adapter
Key Selling Points
- Unmatched Performance: Single-port 400Gb/s (NDR InfiniBand or 400GbE) throughput with ultra-low latency, powered by PCIe Gen5.
- Hardware Acceleration: Comprehensive offloads for networking, storage, and security (TLS/IPsec/MACsec, NVMe-oF, SHARP™).
- AI & HPC Optimized: Built-in GPUDirect® RDMA/Storage and NVIDIA SHARP™ for accelerated distributed computing and AI workloads.
- Dual-Protocol Flexibility: One network card supports both InfiniBand and Ethernet (RoCE), simplifying infrastructure.
- Enhanced Reliability: Advanced features like adaptive routing, congestion control, and SR-IOV for demanding data centers.
1. Product Overview & Specifications
The NVIDIA ConnectX-7 MCX75310AAS-NEAT is a premier single-port network interface card (NIC card) designed for next-generation data centers. This network adapter supports both InfiniBand NDR (400Gb/s) and Ethernet (up to 400GbE) protocols, delivering exceptional bandwidth and latency for AI, high-performance computing (HPC), cloud, and enterprise storage. Leveraging PCI Express 5.0 and hardware-based acceleration engines, it offloads critical tasks from the CPU, maximizing application performance and efficiency.
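To put the 400Gb/s figure in concrete terms, here is a rough, illustrative calculation (simple arithmetic, not vendor data) of how long a message spends on the wire at EDR/HDR/NDR-class line rates, ignoring protocol overhead:

```python
# Illustrative only: serialization (wire) time for one message at a given
# line rate. Protocol framing and encoding overhead are ignored.

def wire_time_us(message_bytes: int, link_gbps: float) -> float:
    """Serialization delay in microseconds at a given line rate."""
    # bits / (Gb/s * 1000) yields microseconds, since 1 Gb/s = 1e3 b/us
    return message_bytes * 8 / (link_gbps * 1e3)

for speed in (100, 200, 400):          # EDR/HDR/NDR-class rates, Gb/s
    t = wire_time_us(1 << 20, speed)   # a 1 MiB message
    print(f"{speed} Gb/s: {t:.2f} us per MiB")
```

At 400Gb/s a 1 MiB buffer serializes in roughly 21 microseconds, a quarter of the time needed at 100Gb/s, which is why end-to-end latency at this speed class is dominated by software and offload efficiency rather than the wire itself.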
2. Key Characteristics
- High-Speed Interface: One OSFP network port, configurable for InfiniBand (NDR, HDR, EDR) or Ethernet (400/200/100/50/25/10GbE).
- Hardware Offloads: RDMA (RoCE, InfiniBand), GPUDirect® RDMA/Storage, NVMe over Fabrics (NVMe-oF), TLS/IPsec/MACsec inline encryption.
- Advanced Networking: Support for NVIDIA SHARP™ in-network computing, adaptive routing, enhanced congestion control, and SR-IOV.
- Precision Timing: IEEE 1588 PTP with nanosecond accuracy and SyncE for timing-sensitive applications.
- Robust Management: Comprehensive manageability via NC-SI, PLDM (Redfish, FRU), SPDM, and secure firmware updates.
- This specific model, also known under the alias mq9700, is a top-tier solution for scalable cluster deployment.
3. Core Technologies & Standards
This NIC card is built on industry-leading technologies:
- Protocols: InfiniBand (IBTA 1.5), Ethernet (IEEE 802.3), RoCE (RDMA over Converged Ethernet).
- Compute Acceleration: GPUDirect technologies, NVIDIA SHARP™ for collective communication offload.
- Security: Hardware-accelerated TLS 1.3, IPsec, MACsec (AES-GCM 128/256-bit).
- Virtualization: Single Root I/O Virtualization (SR-IOV), VirtIO acceleration.
- Standards Compliance: PCIe Gen5, IEEE 1588, PLDM, SPDM, OCP 3.0 (for specific form factors).
4. Working Principle
The ConnectX-7 network card operates as a sophisticated network co-processor. Data flows from the host server via the PCIe Gen5 x16 interface into the adapter's onboard intelligence. Key processing steps include:
- Packet Processing & Steering: The programmable parser classifies traffic, directing it to appropriate hardware acceleration engines.
- Hardware Offload: Network (RoCE/IB transport), storage (NVMe-oF/TCP), and security (encryption/decryption) operations are executed directly on the card, bypassing the CPU.
- RDMA Transfer: For low-latency communication, RDMA enables direct memory access between servers or GPU memory (via GPUDirect RDMA).
- In-Network Computing: NVIDIA SHARP™ engines perform collective operations (like reductions) within the switch network, drastically reducing host CPU and GPU involvement.
- Quality of Service (QoS): Traffic is prioritized and scheduled according to configured policies before transmission through the high-speed SerDes interfaces (PAM4/NRZ).
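The benefit of SHARP™-style in-network computing can be sketched with a simple step-count model (an illustration under simplifying assumptions, not vendor data): a host-based ring allreduce needs a number of sequential communication steps that grows linearly with cluster size, while an in-network reduction tree needs only one trip up the tree and one trip down.

```python
# Simplified latency model (illustrative): sequential communication steps
# for a host-based ring allreduce vs. an in-network reduction tree.

def ring_allreduce_steps(n: int) -> int:
    """A classic ring allreduce takes 2*(n-1) sequential steps."""
    return 2 * (n - 1)

def tree_reduction_steps(n: int) -> int:
    """In-network reduction: one trip up a binary tree and one down,
    about 2 * ceil(log2(n)) hops of switch latency."""
    depth = (n - 1).bit_length()  # integer ceil(log2(n)) for n >= 2
    return 2 * depth

for n in (8, 64, 512):
    print(f"{n:4d} nodes: ring {ring_allreduce_steps(n):4d} steps, "
          f"tree {tree_reduction_steps(n):2d} steps")
```

At 512 nodes the ring needs over a thousand sequential steps versus 18 for the tree, which is why pushing reductions into the fabric pays off most at large scale.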
5. Application Scenarios
- AI & Machine Learning Clusters: Accelerates multi-node training with GPUDirect and SHARP™.
- High-Performance Computing (HPC): Provides ultra-low latency for scientific simulations and modeling.
- Hyperscale & Cloud Data Centers: Enables high-density, software-defined infrastructure with SR-IOV and overlay network offloads (VXLAN, GENEVE).
- High-Frequency Trading & Financial Services: Nanosecond-precision timing and deterministic latency are critical.
- Enterprise Storage & Disaggregated Infrastructure: Optimizes performance for NVMe-oF and RDMA-based storage networks.
6. Specifications & Selection Table
| Attribute | Specification for MCX75310AAS-NEAT |
|---|---|
| Product Model | MCX75310AAS-NEAT (Alias: mq9700) |
| Network Card Type | Single-Port Adapter |
| Supported Protocols | InfiniBand, Ethernet (RoCE) |
| Max Speed per Port | InfiniBand: NDR 400Gb/s, HDR 200Gb/s, EDR 100Gb/s; Ethernet: 400/200/100/50/25/10GbE |
| Host Interface | PCI Express 5.0 x16 (backward compatible) |
| Form Factor | FHHL (Full Height, Half Length) |
| Key Technologies | RDMA, GPUDirect, SHARP™, SR-IOV, Hardware Offload (Crypto, Storage) |
| Typical Power Consumption | Approx. 35W (max under load) |
| Operating Temperature | 0°C to 55°C |
| Compatible OS | Linux (RHEL, Ubuntu), Windows Server, VMware ESXi |
7. Advantages & Competitive Edge
- Performance Leadership: Compared to previous generations and competitor NIC cards, ConnectX-7 doubles the throughput to 400G and significantly reduces latency for AI workloads.
- Total Cost of Ownership (TCO): Hardware offloads free up valuable CPU cores, allowing for higher application density and reduced licensing costs.
- Future-Proof Design: Dual-protocol support on a single network adapter protects infrastructure investment against technology shifts.
- Ecosystem Integration: Deep optimization with NVIDIA GPUs, VMware, Kubernetes, and leading HPC software stacks like OpenMPI and NCCL.
- The mq9700 alias model is recognized for its reliability in large-scale deployments, a testament to its robust design.
8. Service & Support
Backed by over a decade of industry experience, we provide comprehensive support for this network interface card:
- Warranty: Standard manufacturer warranty applies. Extended warranty options are available.
- Availability & Delivery: In stock for immediate shipment. We maintain a $10M inventory to ensure supply continuity.
- Technical Support: 24/7 customer consultation and technical support from our expert team.
- Pre- & Post-Sales Service: Assistance with solution design, compatibility verification, and integration.
- Global Logistics: We ensure accurate and timely arrival of goods to customers worldwide.
9. Frequently Asked Questions (FAQ)
Q1: What is the difference between this model (MCX75310AAS-NEAT) and other ConnectX-7 variants?
A: This specific model is a single-port, FHHL PCIe card. Variants differ in port count (1/2/4), form factor (OCP 3.0, SFF), and sometimes pre-configured protocol (VPI = InfiniBand & Ethernet, or EN = Ethernet-only). The alias mq9700 often refers to this high-performance single-port configuration.
Q2: Can this network card operate in an Ethernet-only environment?
A: Yes. The ConnectX-7 is a VPI (Virtual Protocol Interconnect) NIC card. It can be configured via firmware/driver to operate in Ethernet-only mode, supporting RoCE for RDMA over standard Ethernet networks.
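On Linux with the mlx5 driver, the currently active link layer of each port is exposed in sysfs at `/sys/class/infiniband/<device>/ports/<port>/link_layer`. The sketch below (the device name `mlx5_0` is an assumption for illustration; your system may enumerate differently) separates the sysfs read from the string mapping so the mapping can be checked on its own:

```python
# Sketch: check whether a ConnectX port is running InfiniBand or Ethernet
# via the standard Linux sysfs path. The device name below is an example.
from pathlib import Path

def classify_link_layer(value: str) -> str:
    """Map the sysfs link_layer string to a human-readable mode."""
    modes = {
        "InfiniBand": "InfiniBand mode (NDR/HDR/EDR)",
        "Ethernet": "Ethernet mode (RoCE-capable)",
    }
    return modes.get(value.strip(), f"unknown link layer: {value.strip()}")

def port_mode(device: str = "mlx5_0", port: int = 1) -> str:
    """Read the active link layer for one port (Linux, mlx5 driver)."""
    path = Path(f"/sys/class/infiniband/{device}/ports/{port}/link_layer")
    return classify_link_layer(path.read_text())

# The mapping itself can be exercised without hardware:
print(classify_link_layer("Ethernet\n"))
```

Switching the configured protocol itself is done through NVIDIA's firmware tools rather than sysfs; consult the official driver documentation for the supported procedure on your firmware version.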
Q3: Does it support legacy PCIe generations?
A: Yes. The PCIe Gen5 interface is backward compatible with Gen4 and Gen3 slots, though maximum bandwidth will be limited by the host slot's capability.
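The bandwidth ceiling of older slots can be estimated from the per-lane transfer rates and the 128b/130b line encoding used since PCIe Gen3 (a back-of-the-envelope figure; real protocol overhead shaves off a few percent more):

```python
# Rough effective per-direction bandwidth of a PCIe x16 slot, counting
# only 128b/130b line-encoding overhead (Gen3/Gen4/Gen5 all use it).

def pcie_x16_gbps(gt_per_s: float, enc_num: int = 128, enc_den: int = 130) -> float:
    """Effective x16 bandwidth in Gb/s after line encoding."""
    return gt_per_s * 16 * enc_num / enc_den

for gen, rate in (("Gen3", 8), ("Gen4", 16), ("Gen5", 32)):   # GT/s per lane
    bw = pcie_x16_gbps(rate)
    verdict = "sustains" if bw >= 400 else "caps"
    print(f"{gen} x16 ≈ {bw:.0f} Gb/s -> {verdict} a 400 Gb/s port")
```

In short: only a Gen5 x16 slot (roughly 504Gb/s effective) leaves headroom for the full 400Gb/s port rate; Gen4 x16 tops out near 252Gb/s and Gen3 x16 near 126Gb/s.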
Q4: What cables are required for 400GbE or NDR InfiniBand connections?
A: For 400G speeds on this card's OSFP cage, you will need OSFP optical transceivers with fiber cables, or appropriate OSFP Direct Attach Copper (DAC) cables. The choice depends on distance and infrastructure.
Q5: Is this card suitable for GPU-based AI servers?
A: Absolutely. This is a primary use case. Features like GPUDirect RDMA and SHARP™ are specifically designed to eliminate bottlenecks in multi-GPU, multi-server AI training clusters, making this an ideal network adapter.
10. Precautions & Notes
- Compatibility: Verify the server has a PCIe Gen5 x16 slot for full 400Gb/s performance (Gen4/Gen3 slots work at reduced bandwidth). Ensure adequate chassis cooling, as this is a high-performance network card.
- Drivers & Firmware: Always use the latest NVIDIA-approved drivers and firmware from the official portal for stability and feature access.
- Electromagnetic Compatibility (EMC): Install in compliance with local EMC regulations. Use manufacturer-provided brackets and ensure proper grounding.
- Security: Utilize the hardware root-of-trust and secure boot features to prevent unauthorized firmware modification.
- Handling: Follow ESD (electrostatic discharge) precautions during installation. Avoid touching the gold-plated connectors.
- When searching for this product, using terms like "NIC card", "network adapter", or the alias "mq9700" will help locate specific technical resources and support.
11. Company Introduction
With over ten years of established presence in the network hardware industry, we have built a robust foundation comprising a large-scale operational facility and a deeply experienced technical team. Our long-term operations have allowed us to build a significant customer base and accumulate extensive domain expertise.
We are a trusted distributor for leading brands including NVIDIA (Mellanox), Ruckus, Aruba, and Extreme Networks. Our core portfolio encompasses original, brand-new networking equipment such as switches, network interface cards (NIC cards), wireless access points, controllers, and cabling solutions.







