Innovium and Partners Demonstrate Range of Production High Performance Networking Solutions at Supercomputing 2019

Industry’s highest performance 100 – 400G TERALYNX-based switches with lowest latency and highest port-density being showcased at multiple booths

SAN JOSE, Calif., November 14th, 2019 – Innovium, Inc., a leading provider of high performance, innovative switching silicon solutions, announced that Innovium TERALYNX-based high-performance switches will be showcased by multiple industry system partners at Supercomputing 2019 (SC19). These switches deliver the industry’s highest performance, lowest latency, and highest port radix with flexibility for 10G – 400G connectivity and breakthrough telemetry using Innovium’s programmable TERALYNX switch silicon. The switches will be showcased at Cisco and GIGABYTE booths at SC19 to be held November 17th–22nd, 2019 at the Colorado Convention Center in Denver.

Innovium’s 12.8Tbps TERALYNX low-latency production switch silicon, with unmatched telemetry, is being used by market-leading OEMs and Cloud providers for Hyperscale and Enterprise deployments. A broad range of switches is available with support for OEM, Hyperscale and open-source network operating systems including SONiC/SAI. Further, Innovium has collaborated with key ecosystem partners to perform comprehensive interoperability testing of TERALYNX with an expansive set of NRZ and PAM-4 based DAC/ACC cables and optics modules to enable rapid time to deployment for customers.

“High performance computing customers running applications such as genomics, seismic imaging, financial modeling, data analytics, and AI/ML require networking solutions with the highest performance, highest port densities and lowest latencies,” said Rajiv Khemani, Co-founder and CEO of Innovium. “We are excited to provide the best application performance to these HPC customers with TERALYNX-based switches from multiple system partners that deliver these critical requirements.”

Cisco (booth #801) is showcasing two Innovium-based switches, the Nexus 3432D-S and Nexus 3408-S. The Nexus 3432D-S is a 1RU, QSFP-DD switch that supports up to 32 ports of 400G, with each port able to operate at 25/40/50/100/400G speeds. The Nexus 3408-S is a 4RU, 8-slot chassis with the flexibility to use either 100G or 400G Line-Card Expansion Modules (LEMs), offering up to 128 ports of 100G or 32 ports of 400G in a pay-as-you-grow fashion. These switches have the industry’s highest port radix in a compact and highly energy-efficient chassis, and they are being adopted across a range of customer deployments.

“As a leader in high-performance computing hardware, GIGABYTE has released one of the most comprehensive lineups of PCIe Gen 4.0 capable server platforms onto the market. At SC19, we are excited to collaborate with Innovium, a leader in high-performance networking, to showcase joint solutions including servers with PCIe Gen 4.0 based 200GbE RoCE NICs and Innovium’s TERALYNX-based low-latency multi-terabit RoCE-capable switches, so that customers can achieve even greater productivity gains in their HPC workloads,” said Etay Lee, General Manager of the Networking and Communication Business Unit, GIGABYTE. GIGABYTE (booth #671) is showcasing a 1RU, QSFP-28/56 switch that supports up to 32 ports at 100/200G speeds.

To meet with Innovium at Supercomputing 2019, please contact [email protected].

About Innovium

Innovium is a leading provider of high performance, innovative switching silicon solutions for cloud, enterprise and edge data centers. The Innovium TERALYNX family delivers software-compatible products ranging from 1Tbps to 12.8Tbps with unmatched telemetry, low latency, programmability, and a highly scalable architecture. Innovium’s products have been selected and validated by market-leading switch OEMs, ODMs and cloud providers. The company is headquartered in Silicon Valley, California and is backed by leading venture capital firms including Greylock Partners, Walden Riverwood, Capricorn Investment Group, Qualcomm Ventures, S-Cubed Capital and Redline Capital. For more information, please visit

Amit Sanyal
[email protected]