Optical module considerations in data centers beyond 100G


An important metric for data center switches is front panel bandwidth: the aggregate bandwidth of all transceivers that can fit in a 19-inch wide, 1RU tall switch. The ability to cool the modules through airflow is one critical constraint, though in many cases the density of electrical connections to the transceiver becomes the limiting factor. As a consequence, a typical switch can accommodate 32 QSFP ports on the front panel. If the ports are QSFP+, the corresponding front panel bandwidth is 1.28Tbps (32 x 40G). With the upgrade to QSFP28, this bandwidth increases to 3.2Tbps (32 x 100G).

The upgrade path after QSFP28 is a subject of ongoing discussion. Next-generation switching ASICs are expected to have native port speeds of 50G and 128 ports, corresponding to a net throughput of 6.4Tbps. Following the 4x trend set by 40G and 100G, this implies the need for 200G QSFP modules ("QSFP56"); 32 QSFP56 ports on the front panel would yield a front panel bandwidth of 6.4Tbps. The difficulty with this path, however, is that a 200G Ethernet standard does not exist. Discussion of its need has recently begun, but such a standard would be completed later than 400G Ethernet, which is already in progress.

If 400G ports are assumed instead, an alternative path to 6.4Tbps front panel bandwidth is to have fewer ports and a larger optical module. A module larger than the QSFP is already anticipated for the first generation of 400G modules, since the module must accommodate either 16 x 25G or 8 x 50G electrical input lanes, exceeding the 4 lanes defined for the QSFP. Furthermore, meeting the 3.5W power limit of QSFP modules appears infeasible for some 400G implementations. The 2km duplex single-mode fiber standard 400GBASE-FR8 specifies 8 multiplexed wavelengths modulated with 50G PAM4, twice the number of optical lanes of a QSFP28-CWDM4 module, which is already close to the 3.5W limit. Proposals for larger 400G form factors can be anticipated from groups such as the CFP MSA, which has had great success at 100G with the CFP, CFP2, and CFP4. A key requirement in that case will be a size that allows at least 16 ports on the front panel (16 x 400G = 6.4Tbps), and preferably more.

If maintaining the QSFP size is important, the only suitable 400G standard currently under development is 400GBASE-DR4, which specifies 4 optical 100G PAM4 channels operating over 500m of parallel single-mode fiber (PSM4). In addition, 4-wavelength 100G PAM4 implementations over duplex single-mode fiber are expected to be defined in the future. Based on what has been demonstrated in QSFP28-CWDM4 modules, needing only 4 wavelengths increases the likelihood that the QSFP power limit can be met. However, unless a 4 x 100G electrical interface also becomes available ("CDAUI-4"), the number of electrical input lanes must still increase to at least 8. This requires a new module definition, and various solutions are currently under consideration.

Among them is a move beyond pluggable modules to a new paradigm based on on-board optics (OBO). OBO modules bring the optics closer to the ASIC, which can improve signal integrity and lower power by eliminating retimers. The Consortium for On-Board Optics (COBO) was recently formed to accelerate the development of such solutions and has the support of at least one large data center provider.
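To make the front panel arithmetic concrete, the short Python sketch below reproduces the bandwidth figures quoted above. The port counts and per-lane rates come from the scenarios discussed; the function itself is purely illustrative.

```python
# Minimal sketch of the front-panel bandwidth arithmetic discussed above.
# Port counts and per-lane rates follow the article; the function name is
# illustrative, not from any real switch SDK.

def front_panel_bandwidth_gbps(ports: int, lanes_per_port: int, gbps_per_lane: int) -> int:
    """Aggregate front-panel bandwidth = ports x lanes x per-lane rate."""
    return ports * lanes_per_port * gbps_per_lane

scenarios = {
    "QSFP+  (32 ports x 4 x 10G)":      (32, 4, 10),   # 1.28 Tbps
    "QSFP28 (32 ports x 4 x 25G)":      (32, 4, 25),   # 3.2 Tbps
    "QSFP56 (32 ports x 4 x 50G)":      (32, 4, 50),   # 6.4 Tbps, assumes a 200G Ethernet standard
    "400G module (16 ports x 8 x 50G)": (16, 8, 50),   # 6.4 Tbps with half the ports
}

for name, (ports, lanes, rate) in scenarios.items():
    total = front_panel_bandwidth_gbps(ports, lanes, rate)
    print(f"{name}: {total / 1000:.2f} Tbps")
```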
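The lane-count bookkeeping that drives the need for a new module definition can be sketched the same way. The pairings below assume the electrical and optical configurations named above (CDAUI-16, CDAUI-8, a hypothetical CDAUI-4, 400GBASE-FR8, and 400GBASE-DR4); any mismatch between host-side and line-side lane counts implies a gearbox stage inside the module.

```python
# Sketch of the lane bookkeeping for the 400G interfaces mentioned above.
# Per-lane rates are from the standards cited in the article; treating
# optical wavelengths/fibers and electrical lanes uniformly as "lanes"
# is a simplification for illustration.

from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    lanes: int
    gbps_per_lane: int

    @property
    def total_gbps(self) -> int:
        return self.lanes * self.gbps_per_lane

electrical = [
    Interface("CDAUI-16 (16 x 25G NRZ)", 16, 25),
    Interface("CDAUI-8  (8 x 50G PAM4)", 8, 50),
    Interface("CDAUI-4  (4 x 100G PAM4, not yet defined)", 4, 100),
]
optical = [
    Interface("400GBASE-FR8 (8 wavelengths x 50G PAM4)", 8, 50),
    Interface("400GBASE-DR4 (4 fibers x 100G PAM4)", 4, 100),
]

for host in electrical:
    for line in optical:
        # A module must bridge the host-side lane count to the line-side
        # count; a mismatch implies a gearbox/mux stage inside the module.
        bridge = "direct" if host.lanes == line.lanes else f"{host.lanes}:{line.lanes} gearbox"
        print(f"{host.name} [{host.total_gbps}G] -> {line.name}: {bridge}")
```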
Other solutions are also on the table, such as those proposed by the microQSFP MSA, which aims to realize the function of a QSFP in a form factor close to that of an SFP; the higher port density would raise the front panel bandwidth to 7.2Tbps.
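As a back-of-the-envelope check on that figure: at 200G per port, a 7.2Tbps front panel implies 36 ports in 1RU. The port count below is inferred from the quoted bandwidth rather than taken from the microQSFP MSA specification.

```python
# Back-of-the-envelope port-density check for the microQSFP claim above.
# The 36-port count is inferred from the quoted 7.2 Tbps, not from the
# microQSFP MSA specification itself.

FRONT_PANEL_TBPS = 7.2
GBPS_PER_PORT = 200  # QSFP56-class port running 4 x 50G lanes

ports = FRONT_PANEL_TBPS * 1000 / GBPS_PER_PORT
print(f"Implied front-panel ports: {ports:.0f}")  # -> 36
```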


Optical modules are key to building the switching fabrics of mega-scale data centers. The transition from 40G to 100G is imminent, and several possible paths exist for the next stage of evolution, which will likely be built around 200G or 400G interconnects. New optical module concepts will be necessary, and optical module vendors must work closely with networking equipment manufacturers and data center operators to develop solutions that meet future data center requirements for cost and power per gigabit.