From Chip to Cloud: Optical Interconnects in Engineered Systems

2017 
The high-performance server compute landscape is changing. The traditional model of building general-purpose enterprise compute boxes that end users configure with storage and networking to assemble their desired compute environments has evolved toward purpose-built systems optimized for specific applications. Tight integration of hardware and software components, together with high-density midboard optical modules and an optical backplane, allows for unprecedented levels of switching and compute efficiency and has fueled the penetration of optical interconnects deep “inside the box,” particularly for switch scale-up. We briefly review earlier 40 G/port switching systems based on active optical cables and present our newest system: an all-optically-interconnected 100 G/port, 8.2 Tb/s InfiniBand packet switch ASIC with 41 ports running 100 Gb/s per port, interconnected by 12-channel midboard optical transceivers with 25 Gb/s per channel per direction of optical I/O. Using a blind-mate optical backplane, these components enable systems with up to 50 Tb/s of bandwidth in a 2U standard rack-mount configuration with industry-leading density, efficiency, and latency. For even tighter co-integration of optical interconnects with switch and processor ASICs, we discuss photonic multichip module and interposer packaging technologies that will further improve system energy efficiency and overcome impending system I/O bottlenecks.
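As a rough consistency check on the quoted figures (a sketch only, assuming the 8.2 Tb/s aggregate counts both transmit and receive directions, which the abstract does not state explicitly):

\[
41\ \text{ports} \times 100\ \mathrm{Gb/s} = 4.1\ \mathrm{Tb/s}\ \text{per direction}, \qquad
2 \times 4.1\ \mathrm{Tb/s} = 8.2\ \mathrm{Tb/s},
\]
\[
12\ \text{channels} \times 25\ \mathrm{Gb/s} = 300\ \mathrm{Gb/s}\ \text{per midboard transceiver per direction}.
\]

If the 2U system aggregates multiple such switch ASICs, the quoted 50 Tb/s corresponds to roughly six of them (50 / 8.2 ≈ 6), though the exact configuration is not given in the abstract.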