Cisco has unwrapped a high-end, 51.2 Tbps router and chip that it says will go a long way toward supporting distributed AI workloads, both today and in the future.
Aimed at hyperscalers and large data center operators, the Cisco 8223 routing system is based on a new iteration of the company’s Silicon One portfolio: the P200 programmable, deep-buffer chip. The system supports Octal Small Form-Factor Pluggable (OSFP) and Quad Small Form-Factor Pluggable Double Density (QSFP-DD) optical form factors, which help the box connect geographically dispersed AI clusters.
“Power constraints and resiliency requirements are causing hyperscalers, neoclouds, and enterprises to embrace distributed AI clusters that span campus and metro regions, all of which need secure, high-performing, high-capacity, and energy-efficient connectivity,” wrote Rakesh Chopra, Cisco Fellow and senior vice president for Silicon One, in a blog post about the new system. “The Cisco 8223 is optimized for large-scale disaggregated fabrics within and across data centers, enabling customers to scale AI infrastructure with unmatched efficiency and control.”
[Image: Cisco Silicon One P200 chip. Source: Cisco]
Cisco Silicon One processors are purpose-built to support high network bandwidth and performance and can be customized for routing or switching from a single chipset, eliminating the need for different silicon architectures for each network function. Core to the Silicon One system is its support for enhanced Ethernet features, such as improved flow control and congestion awareness and avoidance.
A single P200-based system handles the traffic that previously required six 25.6 Tbps fixed systems or a four-slot modular system, Chopra said. In addition, the 8223 in its 3RU, 51.2 Tbps configuration consumes about 65% less power than prior generations, he stated.
The 8223 features 64 ports of 800G coherent optics support and is capable of processing over 20 billion packets per second, according to Chopra. It features advanced buffering at its core to handle the massive traffic surges generated by AI training workloads. The P200 enables the router to support a full 512-port radix, and it can scale to 13 petabits using a two-layer topology, or up to a massive 3 exabits using a three-layer topology, Chopra added.
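Chopra's scale figures line up with standard folded-Clos arithmetic. The back-of-the-envelope sketch below is our own illustration, not Cisco's math: it assumes 100G per port (51.2 Tbps spread across a 512-port radix) and applies the usual leaf-spine and fat-tree capacity formulas to reproduce the roughly 13-petabit and 3-exabit numbers.

```python
PORT_GBPS = 100   # assumed per-port speed: 51.2 Tbps / 512 ports
RADIX = 512       # ports per P200-based system, per Cisco

# Two-layer leaf-spine (folded Clos): each leaf splits its ports
# half toward endpoints, half toward spines, so endpoint ports
# total (RADIX / 2) leaves' worth across RADIX-wide fabric.
two_layer_endpoints = (RADIX // 2) * RADIX           # 131,072 ports
two_layer_tbps = two_layer_endpoints * PORT_GBPS / 1_000

# Three-layer fat tree: a radix-k fat tree supports k^3 / 4
# endpoint ports at full bisection bandwidth.
three_layer_endpoints = RADIX**3 // 4                # 33,554,432 ports
three_layer_ebps = three_layer_endpoints * PORT_GBPS / 1e9

print(f"two-layer:   {two_layer_tbps:,.0f} Tbps (~13 Pbps)")
print(f"three-layer: {three_layer_ebps:.2f} Ebps (~3 Ebps)")
```

Under those assumptions the two-layer design works out to about 13.1 Pbps and the three-layer design to about 3.36 Ebps, consistent with the figures Chopra cited.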
Some say deep buffers shouldn’t be used to handle this type of traffic; the contention is that these buffers fill and drain, creating jitter in the workloads, and that slows things down, Chopra told Network World. “But the real source of that challenge is not the buffers. It’s a poor congestion management scheme and poor load balancing with AI workloads, which are completely deterministic and predictable. You can actually proactively figure out how to place flows across the network and avoid the congestion,” he said.
The 8223’s deep-buffer design provides ample memory to temporarily store packets during congestion or traffic bursts, an essential feature for AI networks where inter-GPU communication can create unpredictable, high-volume data flows, according to Gurudatt Shenoy, vice president of Cisco Provider Connectivity. “Combined with its high-radix architecture, the 8223 allows more devices to connect directly, reducing latency, saving rack space, and further lowering power consumption. The result is a flatter, more efficient network topology supporting high-bandwidth, low-latency communication that is critical for AI workloads,” Shenoy wrote in a blog post.
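To illustrate why buffer depth matters for bursty inter-GPU traffic, the toy queue model below (our own sketch, not Cisco's implementation; all numbers are arbitrary) subjects a fixed-drain FIFO port to a traffic burst. A shallow buffer tail-drops most of the burst, while a deep buffer absorbs it and drains it over time.

```python
def simulate_drops(arrivals, drain_rate, capacity):
    """Toy FIFO port queue: per-tick arrivals, a fixed drain rate,
    and tail-drop once the buffer fills. Returns packets dropped."""
    queued, dropped = 0, 0
    for pkts in arrivals:
        queued += pkts
        if queued > capacity:                 # buffer overflow: tail-drop excess
            dropped += queued - capacity
            queued = capacity
        queued = max(0, queued - drain_rate)  # link drains at a fixed rate
    return dropped

burst = [100] + [0] * 9   # a 100-packet burst, then a quiet period
shallow_drops = simulate_drops(burst, drain_rate=10, capacity=20)
deep_drops = simulate_drops(burst, drain_rate=10, capacity=200)
print(shallow_drops, deep_drops)  # the deep buffer drops nothing
```

The model also hints at the trade-off Chopra addresses above: the deep buffer avoids drops at the cost of queuing delay while it drains, which is why he pairs deep buffering with congestion management and flow placement rather than relying on buffering alone.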
NOS options
Notably, the first operating system the 8223 supports is the Linux Foundation’s Software for Open Networking in the Cloud (SONiC).