NVIDIA MCP4Y10-N002 800Gb/s Twin-port OSFP to 2x400Gb/s OSFP Passive Direct Attach Copper Cable (DAC)
Product Data:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCP4Y10-N002 (980-9ia0i-00n002) |
| Document: | mcp4y10-nxxx-twin-port-2x40...ns.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supplied per project/batch |
Detailed Information
| Type: | Copper cable (passive DAC) | Conductor Type: | Solid |
|---|---|---|---|
| Certification: | CE, ISO, RoHS, CCC, VDE | Production Capacity: | 5000 |
| Jacket Material: | LSZH, lead-free (LF), halogen-free (HF) | Material Shape: | Round wire |
| Specification: | 2m | Origin: | India / Israel / China |
Product Description
Ultra-low latency · Near-zero power · 2-meter reach · NDR InfiniBand & Ethernet ready
The NVIDIA MCP4Y10-N002 is a 2-meter passive copper DAC with twin-port OSFP to 2x OSFP connectors, delivering 800Gb/s aggregate bandwidth (2x400Gb/s) over 8 channels of 100G-PAM4 modulation. Passive DACs are the most cost-effective and power-efficient high-speed interconnect, making this cable ideal for short-reach top-of-rack connections inside data centers. With zero active electronics, it adds virtually no latency (no retiming or equalization stages) and draws near-zero power (<0.1W per end), and it supports both InfiniBand NDR and Spectrum-4 Ethernet protocols automatically.
The MCP4Y10 series from NVIDIA delivers high-density 800Gb/s passive DAC connectivity in a twin-port OSFP form factor. The MCP4Y10-N002 (2-meter variant) uses 26AWG copper pairs to provide reliable, loss-optimized transmission for distances up to 2 meters. Designed for Quantum-2 InfiniBand and Spectrum-4 Ethernet environments, this cable requires no external power or signal conditioning, making it the simplest, lowest-latency solution for switch-to-switch and switch-to-DGX H100 connections inside the same rack. On-board EEPROM memory provides host systems with product configuration data, and the cable is fully compliant with OSFP MSA Rev. 1.12 and the SFF-8636 management interface.
- 800Gb/s aggregate bandwidth – Twin-port OSFP to 2x OSFP, supporting 2x400Gb/s breakout.
- Passive design – No active components, zero equalization latency, and negligible power draw (<0.1W per end).
- 2-meter reach (26 AWG) – Ideal for intra‑rack and adjacent switch connectivity.
- Lowest total cost of ownership – Most economical high‑speed interconnect for short distances.
- Hot‑pluggable & RoHS compliant – LSZH jacket, halogen‑free, lead‑free, and RoHS compliant.
- Protocol‑agnostic auto‑detection – Works seamlessly with InfiniBand NDR or Ethernet (800GbE) without configuration.
- I²C management interface – Based on SFF‑8636, enables real‑time cable monitoring and identification.
- Rugged and durable – Production‑tested for signal integrity, insertion loss, and crosstalk.
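The SFF-8636 management interface mentioned above exposes the cable's EEPROM as a 256-byte memory map readable over I²C. A minimal parsing sketch is shown below; the field offsets follow the SFF-8636 upper page 00h layout as commonly documented (verify against the spec before production use), and the sample dump is entirely synthetic, not real MCP4Y10-N002 contents:

```python
def parse_sff8636(eeprom: bytes) -> dict:
    """Extract a few identification fields from a flat 256-byte
    SFF-8636 EEPROM dump (upper page 00h lives at bytes 128-255)."""
    if len(eeprom) < 256:
        raise ValueError("expected a full 256-byte dump")
    return {
        "identifier": eeprom[128],                        # module type code
        "copper_length_m": eeprom[146],                   # copper length, meters
        "vendor_name": eeprom[148:164].decode("ascii").strip(),
        "vendor_pn": eeprom[168:184].decode("ascii").strip(),
    }

# Synthetic dump for illustration only -- a real cable would be read
# over the host's I²C bus instead of built in memory like this.
dump = bytearray(256)
dump[146] = 2                             # 2 m copper length
dump[148:164] = b"NVIDIA".ljust(16)       # vendor name, space-padded ASCII
dump[168:184] = b"MCP4Y10-N002".ljust(16) # vendor part number
info = parse_sff8636(bytes(dump))
```

Host platforms typically surface this same data through their transceiver diagnostics tooling, so direct EEPROM reads are rarely needed outside of bring-up and debugging.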
NVIDIA MCP4Y10 passive DACs are built around 8 high-speed copper differential pairs per port, each lane operating at 100Gb/s with PAM4 modulation. The twin-port OSFP architecture allows a single cable to carry two independent 400Gb/s data streams, effectively acting as an 800G link with 2x400G breakout. Without retimers or equalizers, the signal path remains purely passive, guaranteeing deterministic latency (no equalization or retiming delay) and virtually zero power consumption. Every cable length is factory-tuned for impedance matching (100Ω nominal) to minimize reflections and signal noise. The LSZH jacket ensures safety and low smoke emission, suitable for high-density data centers.
- Quantum‑2 InfiniBand switch to Quantum‑2 switch – Short‑range stacking or spine‑leaf interconnection within 2 meters.
- Quantum‑2 switch to DGX H100 / HGX H100 – Direct attach for GPU clusters requiring ultra‑low latency and minimal power.
- Spectrum‑4 Ethernet switch to switch or server – 800GbE top‑of‑rack cabling for AI/ML backend networks.
- High‑performance computing (HPC) and storage systems – Reliable, low‑cost links for compute nodes in the same rack.
The MCP4Y10-N002 passive DAC is fully compatible with NVIDIA Quantum‑2 InfiniBand switches, Spectrum‑4 Ethernet switches, and DGX H100 server platforms. It also operates with any standard OSFP cage that follows OSFP MSA Rev. 1.12. The cable automatically adapts to InfiniBand NDR or Ethernet based on the attached switch’s signaling, requiring no manual intervention. For liquid‑cooled systems or specific mechanical constraints, flat‑top variants (‑FLT suffix) are available for 0.5m and 1m lengths; please contact Starsurge for custom mechanical requirements.
| Platform / Switch Family | Port Type | Supported Speed | Qualified Length |
|---|---|---|---|
| NVIDIA Quantum‑2 QM9700 / QM9790 | OSFP 800G NDR | 800Gb/s (2x400G breakout) | 2m |
| NVIDIA Spectrum‑4 SN5600 / SN5700 | OSFP 800G Ethernet | 800Gb/s | 2m |
| NVIDIA DGX H100 (8‑GPU) | OSFP 400G (dual‑port) | 2x400Gb/s per cable | 2m (finned‑top or flat‑top variants) |
| Third‑party OSFP 800G switches (MSA compliant) | OSFP | 800G / 2x400G | Consult validation |
| Parameter | Specification (MCP4Y10-N002) |
|---|---|
| Product Name | NVIDIA Passive Copper Cable, InfiniBand twin port NDR, up to 800Gb/s, OSFP, 2m |
| Ordering Part Number | MCP4Y10-N002 |
| Data Rate | 800Gb/s aggregate (2x400Gb/s); 8 lanes @ 100G-PAM4 |
| Connector Type | Twin-port OSFP to twin-port OSFP (finned top standard) |
| Cable Length | 2 meters (±25 mm tolerance for length <2m) |
| Wire Gauge | 26 AWG, 2x8 pairs |
| Cable Outer Diameter | 8.9 ±0.03 mm (26AWG) |
| Minimum Bend Radius (Single) | 5x cable diameter (~44.5 mm) |
| Minimum Bend Radius (Repeated) | 10x cable diameter (~89 mm) |
| Supply Voltage | 3.135V – 3.465V (3.3V nominal) |
| Max Power per End | 0.1 W (near zero power consumption) |
| Operating Case Temperature | 0°C to +70°C |
| Storage Temperature | -40°C to +85°C |
| Operating Relative Humidity | 5% to 85% (non-condensing) |
| Characteristic Impedance | 100 Ω nominal (90-110 Ω range) |
| Propagation Delay | ≤4.5 ns/m (informative) |
| Regulatory Compliance | RoHS, REACH, CE, FCC Class A, UKCA, VCCI, RCM, TUV, CB |
| Jacket Material | LSZH (Low Smoke Zero Halogen), halogen‑free, lead‑free |
| Hot Pluggable | Yes, OSFP MSA compliant |
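The bend-radius and delay figures in the table above follow directly from the cable's outer diameter and per-meter propagation delay. A quick sanity check using the table's values (function names are illustrative, not from any NVIDIA tool):

```python
CABLE_OD_MM = 8.9       # outer diameter, from the spec table
LENGTH_M = 2.0          # MCP4Y10-N002 length
DELAY_NS_PER_M = 4.5    # informative propagation delay from the table

def min_bend_radius_mm(od_mm: float, repeated: bool = False) -> float:
    """5x cable diameter for a single bend, 10x for repeated bends."""
    return od_mm * (10 if repeated else 5)

def end_to_end_delay_ns(length_m: float) -> float:
    """Upper-bound copper propagation delay over the full cable run."""
    return length_m * DELAY_NS_PER_M

single = min_bend_radius_mm(CABLE_OD_MM)           # 44.5 mm
repeated = min_bend_radius_mm(CABLE_OD_MM, True)   # 89.0 mm
delay = end_to_end_delay_ns(LENGTH_M)              # 9.0 ns flight time
```

Note the distinction this makes visible: the cable adds no processing latency, but signals still take roughly 9 ns to traverse the full 2 m of copper.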
| Ordering PN | Length | AWG | Description |
|---|---|---|---|
| MCP4Y10-N00A | 0.5m | 30 AWG | Standard finned‑top, 800Gb/s passive DAC |
| MCP4Y10-N001 | 1.0m | 30 AWG | Standard finned‑top, 800Gb/s passive DAC |
| MCP4Y10-N01A | 1.5m | 30 AWG | Standard finned‑top, 800Gb/s passive DAC |
| MCP4Y10-N002 | 2.0m | 26 AWG | Thicker gauge for 2m reach, finned‑top connectors |
| MCP4Y10-N00A-FLT | 0.5m | 30 AWG | Flat‑top variant for liquid‑cooled switches / DGX H100 |
| MCP4Y10-N001-FLT | 1.0m | 30 AWG | Flat‑top variant for special mechanical clearance |
- Lowest cost per port – No active components, minimal BOM.
- Near‑zero power consumption – ≤0.1W per end, vs. ~1.5W for ACC and >3W for AOC.
- Deterministic sub‑nanosecond latency – No equalization or retiming delays.
- Extreme reliability – No optical interfaces to clean or fail.
- Simple cable management – Thinner 30AWG (shorter lengths) or 26AWG for 2m, flexible routing.
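The power advantage over active alternatives compounds at fabric scale. A rough estimate using the per-end figures above (the 512-link fabric size is hypothetical, chosen only for illustration):

```python
def fabric_power_w(links: int, watts_per_end: float) -> float:
    """Total interconnect power for a fabric: two cable ends per link."""
    return links * 2 * watts_per_end

LINKS = 512  # hypothetical AI-pod fabric size

dac_w = fabric_power_w(LINKS, 0.1)   # passive DAC, <=0.1W/end
acc_w = fabric_power_w(LINKS, 1.5)   # active copper (ACC), ~1.5W/end
aoc_w = fabric_power_w(LINKS, 3.0)   # active optical (AOC), >3W/end
savings_vs_aoc = aoc_w - dac_w       # watts saved by staying passive
```

At this scale the passive option draws about 100W total where an all-AOC build would exceed 3kW, before even counting the cooling load that extra heat creates.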
Hong Kong Starsurge Group Co., Limited provides full supply chain, logistics, and technical support for NVIDIA original DAC cables. Our engineers can assist with cable length selection, mechanical compatibility (flat‑top vs finned‑top), and integration with existing Quantum‑2 or Spectrum‑4 fabrics. We offer global delivery, volume pricing, and RMA services. Contact us for project‑based quotations and fast shipping from regional hubs.
- Q: Is MCP4Y10-N002 compatible with 400G QSFP ports?
  A: No, it requires OSFP cages. For QSFP112 ports, use appropriate QSFP-to-OSFP adapters or a different cable series.
- Q: What is the maximum distance for this passive DAC?
  A: 2 meters for MCP4Y10-N002 (26AWG). For 3-5m, consider active copper (ACC) or optical solutions.
- Q: Does it support Ethernet as well as InfiniBand?
  A: Yes, the electrical interface auto-negotiates the protocol (InfiniBand NDR or 800GbE) with the attached switch.
- Q: What is the difference between flat-top and finned-top connectors?
  A: Finned-top is standard for air-cooled switches; flat-top (-FLT) is designed for liquid-cooled switches and DGX H100 clearance.
- Q: Can I use this cable for switch-to-server links beyond 2m?
  A: No, for longer reaches use ACC (MCA4J80 series) or AOC to maintain signal integrity.
- Always use ESD‑grounded wrist straps when installing or removing DAC cables.
- Do not exceed minimum bend radius: single bend 5x diameter (≈44.5mm), repeated bend 10x diameter.
- Keep protective dust caps on until installation to avoid contamination.
- Avoid pulling on the cable jacket; hold the backshell connector when plugging/unplugging.
- Operate within case temperature range 0°C to 70°C for optimal performance.
- ✓ Verify switch ports are OSFP 800G (NDR or 800GbE) and support 2x400G breakout if needed.
- ✓ Confirm required length ≤2 meters for passive DAC; beyond that choose ACC or AOC.
- ✓ Check connector type: standard finned‑top for air‑cooled switches; flat‑top variant for DGX H100 / liquid cooling.
- ✓ Ensure bend radius space in rack (minimum 45mm single bend).
- ✓ Confirm operating environment temperature within 0-70°C case temperature.
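The checklist above can be folded into a small selection helper that maps required reach and cooling type onto the ordering part numbers listed earlier. A sketch (`pick_cable` is an illustrative name, not an NVIDIA tool, and the table mirrors this page's ordering list):

```python
# Ordering table from this page: (length in meters, flat-top?) -> part number.
PART_NUMBERS = {
    (0.5, False): "MCP4Y10-N00A",
    (1.0, False): "MCP4Y10-N001",
    (1.5, False): "MCP4Y10-N01A",
    (2.0, False): "MCP4Y10-N002",
    (0.5, True):  "MCP4Y10-N00A-FLT",
    (1.0, True):  "MCP4Y10-N001-FLT",
}

def pick_cable(length_m: float, flat_top: bool = False) -> str:
    """Return the shortest listed DAC that covers the required length."""
    candidates = sorted(
        (length, pn)
        for (length, ft), pn in PART_NUMBERS.items()
        if ft == flat_top and length >= length_m
    )
    if not candidates:
        raise ValueError("beyond passive DAC reach -- consider ACC or AOC")
    return candidates[0][1]

pn = pick_cable(1.8)  # a 1.8 m run needs the 2.0 m finned-top part
```

Note the flat-top variants top out at 1.0 m on this page, so a 2 m liquid-cooled run would need a custom request rather than a stock part number.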
Founded in 2008, Starsurge is a technology‑driven provider of network hardware, IT services, and system integration solutions. We serve global customers across government, healthcare, manufacturing, education, finance, and enterprise sectors. Our portfolio includes network switches, NICs, wireless access points, IoT solutions, custom software development, and high‑speed cabling (including NVIDIA InfiniBand and Ethernet products). With a customer‑first approach and multilingual support, Starsurge ensures reliable quality, responsive service, and tailored network infrastructure. Our experienced sales and technical team provides end‑to‑end support from design to global delivery.
Partner with Starsurge for genuine NVIDIA passive DACs, ACC, and switch solutions.







