[Industry News] - Optical Modules Evolving for Tomorrow’s Data Centers



Optical transceivers play a key role in data centers, and their importance will continue to grow as server access and switch-to-switch interconnects require increasingly higher speeds to meet the rising demands for bandwidth driven by streaming video, cloud computing and storage, and application virtualization. Today, mega-scale data centers typically have 10G access ports that interface to 40G switching fabrics, but in the near future the access ports will increase to 25G and the switching fabrics to 100G. Here, we review the challenges that data center applications place on optical modules and describe how the industry is responding to meet the demand.

Challenges in cost per Gbps for optical modules

A single mega data center that houses hundreds of thousands of servers interconnected by a highly redundant horizontal mesh requires a similarly high number of optical links. Because each link must be terminated on both ends by an optical transceiver, the number of transceivers is at least twice the number of optical links, and can be higher still if optical breakout configurations are used. Such high volumes can drive low cost points for optical transceivers, even though these modules operate at the forefront of high data rates. Pricing on the order of $10/Gbps for longer reaches, all the way down to $1/Gbps for shorter reaches, has been put forward as a challenge to suppliers. This is clearly an ambitious goal given that today's pricing is 5x to 10x higher, albeit at different data rates or in a different application space. Cost reductions of this order are difficult to achieve by making only minor refinements to proven approaches to module design and manufacturing. Relaxed specifications, such as lowering the maximum operating temperature, reducing the operating temperature range, shortening the product usage lifetime, and allowing the use of forward error correction (FEC), can help reduce module cost because they allow module vendors to adopt lower-cost designs with higher levels of optical integration, non-hermetic packaging, uncooled operation, or simplified testing.

Transition from 40G to 100G optical modules for the data center

An important factor that determines the applications of optical modules is form factor. Today's data centers have consolidated around transceivers in the SFP form factor for server access and around QSFP transceivers for switch-to-switch interconnects. Direct attach copper (DAC) cables are typically used when the distance to the access port is less than 5m, with optical modules or active optical cables (AOC) used for longer reaches. 10G access ports use SFP+ modules, but they will transition to SFP28 when the access speed increases to 25G. Server access does not require reaches beyond 100m, so these modules are typically VCSEL-based transceivers operating over multimode fiber (MMF). However, it is also expected that the ecosystem around 25G lanes will be leveraged in applications such as next-generation enterprise networks, which will drive demand for SFP28 modules operating over single mode fiber (SMF) for reaches of 10km to 40km.
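To put the pricing target in perspective, the minimal sketch below (an illustration, not from the article) normalizes a module price to $/Gbps using the lane counts and lane rates of the form factors just discussed; the $400 module price in the example is a hypothetical placeholder, not a quoted figure.

```python
# A minimal sketch of the cost-per-Gbps arithmetic discussed above.
# Lane counts and rates follow the form factors named in the text;
# the module price used below is a hypothetical placeholder.

FORM_FACTORS = {
    # form factor: (electrical lanes, lane rate in Gbps)
    "SFP+":   (1, 10),   # 10G server access
    "SFP28":  (1, 25),   # 25G server access
    "QSFP+":  (4, 10),   # 40G switching fabric
    "QSFP28": (4, 25),   # 100G switching fabric
}

def cost_per_gbps(price_usd: float, form_factor: str) -> float:
    """Normalize a module price to $/Gbps of aggregate capacity."""
    lanes, lane_rate = FORM_FACTORS[form_factor]
    return price_usd / (lanes * lane_rate)

# Hypothetical example: a $400 QSFP28 module works out to $4/Gbps,
# still well short of the $1/Gbps target cited for short reaches.
print(f"{cost_per_gbps(400, 'QSFP28'):.2f} $/Gbps")
```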


QSFP modules accept 4 electrical input lanes and operate at 4x the data rate of the corresponding SFP module. Today, 40G QSFP+ is widely deployed in data center switching fabrics. Two somewhat competing schemes exist for the optical interface: parallel single mode fiber (PSM) and coarse wavelength division multiplexing (CWDM). PSM operates over an 8-fiber SMF ribbon cable, where each optical lane occupies its own fiber pair. PSM has the potential advantage of a lower module cost because no wavelength multiplexing is required, but cable and connector costs are significantly higher than for duplex cabling, resulting in a costlier fiber plant.
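The fiber plant trade-off can be made concrete with a toy count of the fibers each scheme pulls for a given number of links, a minimal sketch based on the 8-fiber PSM and duplex (2-fiber) CWDM cabling described above.

```python
# A toy comparison of fiber counts for the two QSFP optical interfaces:
# PSM dedicates a fiber pair to each of the 4 lanes (8 fibers per link),
# while CWDM multiplexes all 4 lanes onto one duplex pair (2 fibers).

FIBERS_PER_LINK = {"PSM": 8, "CWDM": 2}

def fibers_required(links: int, scheme: str) -> int:
    """Total fibers needed to cable the given number of links."""
    return links * FIBERS_PER_LINK[scheme]

# For the same 1,000 links, PSM pulls 4x the fiber of CWDM, which is
# why longer runs favor CWDM despite its higher module cost.
print(fibers_required(1_000, "PSM"))   # 8000
print(fibers_required(1_000, "CWDM"))  # 2000
```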


CWDM operates over duplex SMF cabling and uses wavelength division multiplexing to combine 4 lanes in one fiber. Here, the 40GBASE-LR4 Ethernet standard exists as a reference specification for the optical interface. Because the lanes travel in a single fiber strand, CWDM links are compatible with all-optical switching, which can be used for data center traffic management and reconfiguration. A challenge with CWDM modules is that their cost is typically higher than PSM due to additional components such as an optical multiplexer and demultiplexer, but significant cost reductions can be realized by reducing the transmission distance from 10km (LR4) to 2km (MR4 or LR4-Lite). This illustrates another data center trend: nearly all link lengths are less than 2km.

For this reason, the specifications for the next generation of QSFP modules that operate at 100G (QSFP28) have focused on reaches between 500m and 2km over SMF. The CWDM4 and CLR4 MSAs are based on the same wavelength grid as 40GBASE-LR4 but increase the capacity to 100G (4x25G). Alternatively, the PSM4 MSA specifies a 4x25G interface over 500m of PSM cabling. Such QSFP28 modules will be deployed in high volumes as data centers transition from 40G to 100G switching fabrics starting in 2016. In addition, QSFP28-LR4 modules will be necessary for interfacing data center switches to core routers, which require Ethernet-compliant interfaces (100GBASE-LR4). In this case, as at 40G, cost-reduced versions optimized for 2km are expected to be in high demand.
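As a reference for the shared lane plan, here is a minimal sketch of 4x25G on the nominal 20nm-spaced CWDM grid used by 40GBASE-LR4; the center wavelengths shown are the nominal grid values and are included purely for illustration, so the MSA documents remain the authoritative source.

```python
# The 4x25G lane plan shared by the CWDM4/CLR4 MSAs: four 25G lanes
# multiplexed onto one duplex SMF pair on the nominal 20nm-spaced
# CWDM grid (same grid as 40GBASE-LR4).

CWDM4_CENTER_WAVELENGTHS_NM = [1271, 1291, 1311, 1331]  # nominal centers
LANE_RATE_GBPS = 25

aggregate_gbps = LANE_RATE_GBPS * len(CWDM4_CENTER_WAVELENGTHS_NM)
assert aggregate_gbps == 100  # 4 lanes x 25G = 100G aggregate

for wavelength in CWDM4_CENTER_WAVELENGTHS_NM:
    print(f"{LANE_RATE_GBPS}G lane at {wavelength} nm")
```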