At OFC 2025, Lightmatter introduced two groundbreaking products, Passage™ M1000 and Passage™ L200/L200X, that redefine the future of AI infrastructure by delivering record-breaking bandwidth and eliminating interconnect bottlenecks. These next-generation 3D photonic superchips and co-packaged optics solutions mark a significant leap in scalable, high-performance AI compute and networking architectures. Designed to meet the needs of ultra-large models and hyperscale clusters, Lightmatter's solutions deliver more than 200 Tbps of bandwidth within a single package, setting new standards for integration, power efficiency, and scale.
The Passage M1000 is the industry’s first 3D photonic superchip with 114 Tbps total optical bandwidth and support for the world’s largest die complexes across a 4,000+ mm² multi-reticle interposer. The device features a reconfigurable waveguide mesh, 256 fiber I/Os, and 1.5 kW power delivery—far exceeding traditional Co-Packaged Optics (CPO) solutions. Leveraging GF’s Fotonix™ platform and manufacturing partnerships with Amkor, the M1000 enables pervasive I/O across the die surface, removing shoreline constraints and accelerating time to deployment for next-gen XPUs and AI switches.
Complementing the M1000, the Passage L200 and L200X are the world’s first 3D CPO engines, offering 32 Tbps and 64 Tbps of bidirectional bandwidth, respectively. Integrated using UCIe die-to-die interfaces, these chips support 320 programmable SerDes, advanced WDM optics (up to 1.6 Tbps per fiber), and seamless chiplet-based integration. Co-developed with Alphawave Semi, the L200 series offers a scalable roadmap to solve the growing imbalance between compute performance and interconnect throughput—achieving up to 8x faster AI model training.
Lightmatter’s innovations were developed in collaboration with leading foundry and assembly partners, including GlobalFoundries, ASE, and Amkor, and feature the company’s Guide™ light engine to deliver the total optical power and integration needed to meet AI infrastructure demands. Both M1000 and L200 products will be showcased at booth #5145 during OFC 2025 in San Francisco.
• Passage™ M1000:
• First 3D Photonic Superchip with 114 Tbps total optical bandwidth
• 8-tile interposer with 1024 SerDes and reconfigurable waveguide mesh
• 256 fibers @ 448 Gbps each, for roughly 114 Tbps aggregate (a quick check follows this list); 1.5 kW power delivery
• Built on GF Fotonix™ platform, with support from Amkor
• Reference platform spans 4,000+ mm², largest die complex in AI packaging
• Available Summer 2025
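The headline bandwidth figure follows directly from the fiber count and per-fiber rate listed above. Below is a minimal arithmetic sketch, assuming the 114 Tbps total is simply fibers multiplied by per-fiber rate (our reading of the published figures, not an official derivation from Lightmatter):

```python
# Back-of-the-envelope check of the M1000 bandwidth figure quoted above.
# Assumption: the 114 Tbps headline is fibers x per-fiber rate, with no
# additional overhead or encoding factors considered.

fibers = 256             # fiber I/Os on the package (from the list above)
gbps_per_fiber = 448     # Gbps per fiber (from the list above)

total_tbps = fibers * gbps_per_fiber / 1000
print(f"{total_tbps:.1f} Tbps")  # ~114.7 Tbps, matching the 114 Tbps headline
```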
• Passage™ L200 / L200X (3D Co-Packaged Optics):
• L200: 32 Tbps (56 Gbps NRZ)
• L200X: 64 Tbps (106/112 Gbps PAM4)
• 320 multi-rate/multi-protocol SerDes
• 16-wavelength WDM per fiber, up to 1.6 Tbps/fiber (see the sketch after this list)
• UCIe die-to-die interface; chiplet-ready
• Developed with Alphawave Semi
• Targeted availability: 2026
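The WDM and per-fiber numbers above compose into the package-level totals. The sketch below back-calculates the implied per-wavelength rate and fiber counts from the quoted figures; these derived values are our own arithmetic, not published Lightmatter specifications:

```python
# Rough composition of the L200/L200X figures quoted above. The per-wavelength
# rate and implied fiber counts are back-calculated from the published totals,
# not official Lightmatter specifications.

wavelengths_per_fiber = 16   # 16-wavelength WDM, as listed above
tbps_per_fiber = 1.6         # up to 1.6 Tbps per fiber, as listed above

gbps_per_wavelength = tbps_per_fiber * 1000 / wavelengths_per_fiber
print(f"{gbps_per_wavelength:.0f} Gbps per wavelength")  # 100 Gbps (implied)

for name, total_tbps in [("L200", 32), ("L200X", 64)]:
    fibers_at_full_rate = total_tbps / tbps_per_fiber
    print(f"{name}: ~{fibers_at_full_rate:.0f} fibers at the full WDM rate")
```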
• Shared Innovations:
• World’s first edgeless I/O architecture
• Guide™ light engine integrated for high-power, dense laser delivery
• Enables up to 8x faster AI model training via ultra-high I/O throughput
• Built for high-volume manufacturing with ASE, Amkor, GF
“Passage M1000 is a breakthrough achievement in photonics and semiconductor packaging for AI infrastructure,” said Nick Harris, founder and CEO of Lightmatter. “We are delivering a cutting-edge photonics roadmap years ahead of industry projections.”
“The M1000 photonic interposer architecture, built on our GF Fotonix platform, sets the pace for photonics performance and will transform advanced AI chip design,” said Dr. Thomas Caulfield, President and CEO of GlobalFoundries.
Lightmatter, headquartered in Mountain View, California, was founded in 2017 by Nicholas Harris, Darius Bunandar, and Thomas Graham. The company has raised a total of $850 million in funding, including a $400 million Series D round in October 2024, which valued the company at $4.4 billion. Significant milestones for Lightmatter include the development of Envise, an application-specific integrated circuit (ASIC) that uses optical computing for large language model inference, and Passage, an optical interconnect platform capable of connecting up to 48 chips with support for up to 768 terabits per second between chips.