Modern cloud and hybrid applications require unprecedented network scalability and solid performance across a wide range of scenarios. To achieve this, application and infrastructure architectures are evolving rapidly. New application architectures, with highly distributed compute workloads and increasing use of AI, need robust, low-latency, high-performance networks. Innovium’s 2–12.8Tbps TERALYNX-based switches deliver the lowest latencies and highest application performance for data center networks, in a proven, production-qualified product family. Let us take a deeper look at how.
New applications are being developed using a microservices architecture, with each distributed component running in a container or virtual machine. These microservices can be long-lived, or spawned on demand for just the duration of a request when serverless or Lambda functions are invoked. An outcome of this compute model is that east-west traffic has been growing exponentially faster than north-south traffic. A good example can be seen when a Facebook user logs into their home page. To display the contents of that unique, custom page, the application initiates tens or hundreds of east-west IOs to different microservices within the Facebook application tier: separate IOs for user credentials/authentication, fetching pictures, aggregating comments and ‘likes’, inserting advertisements, serving video, pushing changes to many replication servers, and so on. Lower network latency for this ever-increasing east-west traffic translates directly into a noticeably better user experience.
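The fan-out pattern described above can be sketched as follows. This is a minimal, hypothetical simulation: the service names, latencies, and `call_service` helper are our illustrations, not any real Facebook API; each simulated call stands in for one east-west IO.

```python
# A minimal sketch of east-west fan-out on page load (service names and
# latencies below are hypothetical; real services would be network calls).
from concurrent.futures import ThreadPoolExecutor
import time

def call_service(name, latency_s):
    """Stand-in for one east-west IO to a backend microservice."""
    time.sleep(latency_s)
    return f"{name}: ok"

# One page view fans out to many backend microservices in parallel.
backends = {
    "auth": 0.010, "photos": 0.020, "comments": 0.015,
    "ads": 0.012, "video": 0.025, "replication": 0.018,
}

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(backends)) as pool:
    results = list(pool.map(lambda kv: call_service(*kv), backends.items()))
elapsed = time.monotonic() - start

# Page latency tracks the slowest east-west IO, so shaving network
# latency off every call directly improves the user-visible response time.
print(f"{len(results)} east-west IOs completed in ~{elapsed * 1000:.0f} ms")
```

Because the calls run in parallel, total page latency is governed by the slowest IO; any per-hop network latency added to every call pushes that tail out.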
A large-scale data center today has tens of thousands of servers connected across an Ethernet network. To connect these servers, the network has multiple tiers, typically at minimum ToR, leaf, and spine switches. Further, data center operators want to maximize utilization of their compute resources, so any application should be able to run wherever compute is available. In the distributed compute model used today, servers often have east-west IOs crossing multiple tiers of the network, so latencies quickly add up as data traverses the multi-tier network and queuing delays accumulate. Technologies like RoCE (RDMA over Converged Ethernet) and distributed compute models built on serverless/Lambda functions further demand the lowest latencies and highest performance from the network to deliver the best application performance.
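A back-of-the-envelope sketch of how per-switch latency accumulates across tiers (the 400 ns and 800 ns figures and the zero-queuing assumption are illustrative inputs, not measured numbers):

```python
# Illustrative arithmetic only: per-switch latency accumulates at every
# tier a packet crosses, and queuing delay adds on top under load.

def path_latency_ns(hops, switch_latency_ns, queuing_delay_ns=0.0):
    """Total one-way network latency for a path crossing `hops` switches."""
    return hops * (switch_latency_ns + queuing_delay_ns)

# East-west traffic between servers under different ToRs typically crosses
# ToR -> leaf -> spine -> leaf -> ToR, i.e. 5 switch hops.
hops = 5
fast = path_latency_ns(hops, switch_latency_ns=400)  # 400 ns per switch
slow = path_latency_ns(hops, switch_latency_ns=800)  # 800 ns per switch

print(f"5-hop path at 400 ns/switch: {fast:.0f} ns")
print(f"5-hop path at 800 ns/switch: {slow:.0f} ns")
```

Even with zero queuing, halving per-switch latency halves the whole multi-hop path, which is why switch latency matters most for east-west traffic.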
Storage innovations such as flash arrays and NVMe-oF provide customers with higher IOPS and lower latencies. On the compute front, technologies like DPDK (Data Plane Development Kit) and FD.io VPP (Vector Packet Processing) offer lower latencies and a higher-performance data path on servers. To keep up, Ethernet networks must deliver the best performance and lowest latencies for server-to-server and server-to-storage IO. Innovium’s TERALYNX does that extremely well, and in a robust manner.
Machine learning and artificial intelligence deployments are growing very rapidly. Infrastructure for ML and AI consists of a cluster of GPU/TPU/specialized AI nodes connected by an Ethernet fabric. As training models grow and data sets become larger, the cluster, and the Ethernet fabric with it, grows as well. These clusters demand the highest performance and lowest latencies from the Ethernet fabric to run faster and complete training jobs quickly. With the lowest latency in TERALYNX, we reduce training run times and hence speed up AI/ML workloads significantly.
Innovium TERALYNX switches offer the highest port density and the lowest latency. For instance, a 32-port QSFP-DD switch powered by TERALYNX delivers the industry’s highest performance of 12.8Tbps in a compact 1RU form factor, with a latency of approximately 400 ns. Its latencies are about half those of alternatives, or lower. Further, latency on TERALYNX is not affected by the features or programmability in use on the switch, as it is on other switches. This switch can also support up to 128 ports of 100GbE, replacing twelve 32-port 100G switches and thereby cutting the worst-case path from three switch hops to a single hop. Customers can therefore deploy TERALYNX-based switches in their data centers to get the most scalable, lowest-latency network and hence the best application performance.
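The consolidation arithmetic behind the twelve-switch comparison can be sketched as follows. The half-down/half-up leaf port split is our assumption of how twelve 32-port switches would be arranged in a two-tier leaf/spine fabric; only the totals appear in the text above.

```python
# Illustrative arithmetic for building 128 x 100GbE server ports from
# 32-port 100G switches (the leaf/spine split is an assumed design).

SERVER_PORTS_NEEDED = 128     # 128 x 100GbE server-facing ports
PORTS_PER_SMALL_SWITCH = 32   # 32-port 100G building block

# Two-tier leaf/spine: each leaf splits its ports half down (servers)
# and half up (spines) for non-blocking bandwidth.
down_per_leaf = PORTS_PER_SMALL_SWITCH // 2          # 16 server ports/leaf
leaves = SERVER_PORTS_NEEDED // down_per_leaf        # 8 leaves
uplinks_total = leaves * down_per_leaf               # 128 uplinks
spines = uplinks_total // PORTS_PER_SMALL_SWITCH     # 4 spines

total_switches = leaves + spines
print(f"Small-switch fabric: {total_switches} switches, "
      f"worst-case path leaf -> spine -> leaf = 3 switch hops")
print("Single 128-port TERALYNX switch: 1 switch, 1 hop")
```

Collapsing the fabric to one switch removes both the extra devices and the two extra switch traversals on the worst-case path.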
We are excited to deliver up to 12.8Tbps TERALYNX data-center switch silicon with very low latency, unmatched telemetry, the best feature set, and leading scalability. Please contact us at [email protected] for further information.