Brand Name:
Mellanox
Model Number:
MFS1S50-H010E
Mellanox® MFS1S50-HxxxE is a QSFP56 VCSEL-based (Vertical Cavity Surface-Emitting Laser), cost-effective 200Gb/s to 2 x 100Gb/s active optical splitter cable (AOC) designed for use in 200Gb/s InfiniBand HDR (High Data Rate) systems.
The MFS1S50-HxxxE cable is compliant with SFF-8665 for the QSFP56 pluggable solution. It provides connectivity between system units with a 200Gb/s connector on one side and two separate 100Gb/s connectors on the other side, such as a switch and two servers. The cable connects data signals from each of the 4 MMF (Multi Mode Fiber) pairs on the single QSFP56 end to the dual pair of each of the QSFP56 multiport ends. Each QSFP56 end of the cable comprises an EEPROM providing product and status monitoring information, which can be read by the host system.
Rigorous production testing ensures the best out-of-the-box installation experience, performance and durability.
Mellanox's unique-quality active fiber cable solutions provide power-efficient connectivity for data center interconnects. They enable higher port bandwidth, density, and configurability at low cost, and reduce power requirements in the data center.
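The EEPROM mentioned above follows the standard QSFP56 management memory map. As a minimal illustrative sketch (not NVIDIA's tooling), the Python below decodes the module temperature from a raw EEPROM dump, assuming the SFF-8636 layout in which lower-page bytes 22-23 hold a signed 16-bit value in units of 1/256 °C; verify the offsets against the SFF-8636 revision your module implements.

```python
def decode_module_temperature(eeprom: bytes) -> float:
    """Decode the internally measured temperature from a QSFP EEPROM dump.

    Assumes the SFF-8636 layout: lower-page bytes 22-23 hold a signed
    16-bit big-endian value in units of 1/256 degC.
    """
    raw = int.from_bytes(eeprom[22:24], "big", signed=True)
    return raw / 256.0

# Example dump with 0x1A80 at bytes 22-23, i.e. 6784/256 = 26.5 degC
dump = bytearray(256)
dump[22:24] = (0x1A80).to_bytes(2, "big")
temperature = decode_module_temperature(bytes(dump))  # 26.5
```

The same dump can be obtained on a Linux host from the management interface exposed by the NIC or switch driver.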
Ordering Information
Ordering Part Number | Description
---|---
MFS1S50-H003V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 3m |
MFS1S50-H005V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 5m |
MFS1S50-H010V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 10m |
MFS1S50-H015V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 15m |
MFS1S50-H020V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 20m |
MFS1S50-H030V | NVIDIA active optical cable, 200Gb/s to 2x100Gb/s IB HDR, QSFP56 to 2xQSFP56, 30m |
Absolute Maximum Ratings
Absolute maximum ratings are those beyond which damage to the device may occur. Prolonged operation between the operational specifications and the absolute maximum ratings is not intended and may cause permanent device degradation.
Parameter | Min | Max | Units
---|---|---|---
Supply voltage | -0.3 | 3.6 | V |
Data input voltage | -0.3 | 3.465 | V |
Control input voltage | -0.3 | 4.0 | V |
Damage threshold | 3.4 | --- | dBm |
Environmental Specifications
This table shows the environmental specifications for the product.
Parameter | Min | Max | Units
---|---|---|---
Storage temperature | -40 | 85 | °C |
Recommended Operating Conditions
This section shows the range of values for normal operation. The host board power supply filtering should be designed as recommended by the SFF Committee specification.
Parameter | Min | Typ | Max | Units | Notes
---|---|---|---|---|---
Supply voltage (Vcc) | 3.135 | 3.3 | 3.465 | V | --- |
Power consumption 200Gb/s end | --- | 4.5 | 5.0 | W | --- |
Power consumption 100Gb/s end | --- | 3.0 | 3.5 | W | --- |
Supply noise tolerance (10Hz – 10MHz) | 66 | --- | --- | mVpp | --- |
Operating case temperature | 0 | --- | 70 | °C | --- |
Operating relative humidity | 5 | --- | 85 | % | --- |
Parameter (per lane) | Min | Typ | Max | Units
---|---|---|---|---
Signaling rate | -100 ppm | 53.125 | +100 ppm | GBd |
Differential data input swing at TP1a | TBD | --- | 900 | mVpp |
Differential data output swing at TP4 | --- | --- | 900 | mVpp |
Near-end ESMW | 0.265 | --- | --- | UI |
Near-end output eye height | 70 | --- | --- | mVpp |
Output transition time, 20% to 80% | 9.5 | --- | --- | ps |
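The ±100 ppm tolerance on the signaling rate above translates into an absolute frequency window. A quick illustrative calculation (the helper name is ours, not part of any spec):

```python
def signaling_rate_bounds(nominal_gbd: float, ppm: float = 100.0) -> tuple[float, float]:
    """Return the (min, max) signaling rate for a +/- ppm tolerance."""
    delta = nominal_gbd * ppm * 1e-6
    return nominal_gbd - delta, nominal_gbd + delta

# +/-100 ppm around the 53.125 GBd nominal rate is +/-0.0053125 GBd
low, high = signaling_rate_bounds(53.125)
```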
Notes:
This product is compatible with ESD levels in typical data center operating environments and certified in accordance with the standards listed in the Regulatory Compliance Section. The product is shipped with protective caps on all connectors to protect it during shipping. In normal handling and operation of high-speed cables and optical transceivers, ESD is of concern during insertion into the QSFP cage of the server/switch. Hence, standard ESD handling precautions must be observed. These include use of grounded wrist/shoe straps and ESD floor wherever a cable/transceiver is extracted/inserted. Electrostatic discharges to the exterior of the host equipment chassis after installation are subject to system level ESD requirements.
The transceiver can be damaged by exposure to current surges and overvoltage events. Take care to restrict exposure to the conditions defined in Absolute Maximum Ratings. Observe normal handling precautions for electrostatic discharge-sensitive devices. The transceiver is shipped with dust caps on both the electrical and the optical port. The cap on the optical port should always be in place when no fiber cable is connected. The optical connector has a recessed connector surface, which is exposed whenever it has no cable or cap.
Prior to insertion, clean the fiber cable connector so that it does not contaminate the optics. The dust cap ensures that the optics remain clean, and no additional cleaning should be needed. In the event of contamination, use standard cleaning tools and methods. Liquids must not be applied.
For configurations tested with the AOCs please refer to the system level product (SLP) qualification report.
The AOC supports rate select, which is controlled by writing to registers 0x57-0x58. Two bits are assigned for each receiver lane in byte 0x57 (87dec, Rxn_Rate_Select) and two bits for each transmitter lane in byte 0x58 (88dec, Txn_Rate_Select) to specify up to four bitrates, as defined in SFF-8636 Rev 2.9.2 Table 6-5 XN_RATE_SELECT ENCODINGS. All four lanes are required to have the same rate select value.
The table below specifies the operating rate for each rate select setting.
Rate Select Encodings
Rate Select Value | Operating Rate (GBd)
---|---
01 | 10.31250 NRZ |
10 | 25.78125 NRZ |
11 | 26.56250 PAM4 |
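Since all four lanes must carry the same rate select value, the four 2-bit fields in bytes 0x57 (Rx) and 0x58 (Tx) end up replicated. A minimal sketch of building those byte values (the helper name is ours; register semantics per SFF-8636 as described above):

```python
def rate_select_byte(value: int) -> int:
    """Replicate a 2-bit rate-select code across all four lanes.

    Bytes 0x57 (Rxn_Rate_Select) and 0x58 (Txn_Rate_Select) each hold
    four 2-bit fields, one per lane; this AOC requires every lane to
    carry the same code.
    """
    if not 0 <= value <= 0b11:
        raise ValueError("rate-select code must fit in 2 bits")
    byte = 0
    for lane in range(4):
        byte |= value << (2 * lane)
    return byte

# Code 0b11 (26.56250 GBd PAM4) on all four lanes -> 0xFF
tx_byte = rate_select_byte(0b11)
```

The resulting byte would then be written to registers 0x57 and 0x58 through the module's two-wire management interface.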
Mechanical Specifications
Parameter | Value | Units
---|---|---
Diameter | 3 ±0.2 | mm |
Minimum bend radius | 30 | mm |
Length tolerance (length < 5 m) | +300 / -0 | mm |
Length tolerance (5 m ≤ length < 50 m) | +500 / -0 | mm |
Length tolerance (50 m ≤ length) | +1000 / -0 | mm |
Cable color | Aqua | --- |
High-Speed Interconnects
The recent rapid growth of data center traffic and cloud computing has driven tremendous increases in machine-to-machine traffic, fueled by the rapid adoption of server virtualization, software-defined networking (SDN), and cloud computing. Add to that the massive bandwidth required to move terabytes of data for training artificial intelligence (AI) and machine learning (ML) models on GPU-based systems.
This is driving massive demands for high-bandwidth and low-latency networks. High-speed DAC and AOC cables and optical transceiver interconnects are playing a critical role in these modern data center technologies.
The NVIDIA® Mellanox® LinkX® product family of cables and transceivers provides the industry's most complete line of 10, 25, 40, 50, 100, 200, and 400Gb/s Ethernet and EDR, HDR, and NDR InfiniBand products for cloud, HPC, Web 2.0, enterprise, telco, storage, and artificial intelligence data center applications.
LinkX cables and transceivers are often used to link top-of-rack switches down to network adapters in NVIDIA GPU and CPU servers and storage, and up in switch-to-switch applications throughout the network infrastructure.
FAQ
Q1. What can you buy from us?
A: Mellanox, Aruba, Ruckus, and Extreme brand products, including switches, network cards, cables, access points, etc.
Q2. How about the delivery date?
A: It usually takes 3-5 working days. For specific models, please contact us to check stock; actual availability is subject to confirmation. We will do our best to deliver as soon as possible.
Q3. What are your warranty terms?
A: We provide a 12-month warranty.
Q4. How about the shipping method?
A: We ship by air with FedEx/DHL/UPS/TNT and other carriers; sea shipment is also available. In short, we can arrange any shipping method you prefer.
Q5. Can I get samples?
A: Yes, sample orders are available for quality checks and market testing. You only need to pay the sample and express costs.
Q6. What are your core strengths?
A: First-hand supply of original, new products at favorable prices, with excellent after-sales service.
Send your inquiry directly to us