
Oracle’s big bet for AI: Zettascale10

Oracle Cloud Infrastructure (OCI) is not just going all-in on AI, but on AI at incredible scale.

This week, the company announced what it calls the largest AI supercomputer in the cloud, OCI Zettascale10. The multi-gigawatt architecture links hundreds of thousands of Nvidia GPUs to deliver what OCI calls “unprecedented” performance.

The supercomputer will serve as the backbone for the ambitious, yet somewhat embattled, $500 billion Stargate project.

“The platform offers benefits such as accelerated performance, enterprise scalability, and operational efficiency attuned towards the needs of industry-specific AI applications,” Yaz Palanichamy, senior advisory analyst at Info-Tech Research Group, told Network World.

How Oracle’s new supercomputer works

Oracle’s new supercomputer stitches together hundreds of thousands of Nvidia GPUs across multiple data centers, essentially forming multi-gigawatt clusters. This allows the architecture to deliver up to 10X more peak performance: an “unprecedented” 16 zettaFLOPS, the company claims.

To put that in perspective, a system running at one zettaFLOPS (a 1 followed by 21 zeroes) performs one sextillion floating-point operations per second, enough for the intensely complex computations behind the most advanced AI and machine learning (ML) systems. That compares with machines working at gigaFLOPS (1 followed by 9 zeroes) or exaFLOPS (1 followed by 18 zeroes) speeds.
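These magnitudes can be sanity-checked with quick back-of-envelope arithmetic. The sketch below uses only figures quoted in this article (16 zettaFLOPS peak, an initial target of up to 800,000 GPUs); the implied per-GPU number is an illustrative estimate, not a published spec.

```python
# Orders of magnitude for FLOPS scales mentioned in the article.
GIGA = 10**9     # gigaFLOPS: 1e9 floating-point ops per second
EXA = 10**18     # exaFLOPS:  1e18 ops per second
ZETTA = 10**21   # zettaFLOPS: 1e21 ops per second ("one sextillion")

peak_flops = 16 * ZETTA   # Oracle's claimed peak for OCI Zettascale10
gpus = 800_000            # initial deployment target cited by Oracle

# Implied low-precision peak per GPU (illustrative only).
per_gpu = peak_flops / gpus
print(f"{per_gpu / 10**15:.0f} petaFLOPS per GPU")  # → 20 petaFLOPS
```

That works out to roughly 20 petaFLOPS per GPU, a figure plausible only for low-precision AI arithmetic, which is presumably the basis of the headline claim.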

“OCI Zettascale10 was designed with the goal of integrating large-scale generative AI use cases, including training and running large language models,” said Info-Tech’s Palanichamy.

Oracle also introduced new capabilities in Oracle Acceleron, its OCI networking stack, that it said helps customers run workloads more quickly and cost-effectively. They include dedicated network fabrics, converged NICs, and host-level zero-trust packet routing that Oracle says can double network and storage throughput while cutting latency and cost.

Oracle’s zettascale supercomputer is built on the Acceleron RoCE (RDMA over Converged Ethernet) architecture and Nvidia AI infrastructure. This allows it to deliver what Oracle calls “breakthrough” scale, “extremely low” GPU-to-GPU latency, and improved price/performance, cluster use, and overall reliability.

The new architecture has a “wide, shallow, resilient” fabric, according to Oracle, and takes advantage of switching capabilities built into modern GPU network interface cards (NICs). This means it can connect to multiple switches at the same time, but each switch stays on its own isolated network plane.

Customers can thus deploy larger clusters, faster, while running into fewer stalls and checkpoint restarts, because traffic can be shifted to different network planes and re-routed when the system encounters unstable or contested paths.
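The multi-plane idea described above can be sketched in a few lines: each NIC reaches several isolated network planes, and flows are steered away from planes flagged as unstable. The plane names, health map, and routing policy below are hypothetical illustrations, not Oracle’s implementation.

```python
# Illustrative sketch of multi-plane routing: traffic hashes onto
# one of several isolated network planes, skipping degraded ones.
# Plane names and the health-check policy are assumptions.

plane_healthy = {"plane-a": True, "plane-b": False, "plane-c": True}

def pick_plane(flow_id: int) -> str:
    """Hash a flow onto a healthy plane, avoiding unstable ones."""
    planes = sorted(p for p, ok in plane_healthy.items() if ok)
    if not planes:
        raise RuntimeError("no healthy network planes available")
    return planes[flow_id % len(planes)]

# With plane-b marked unstable, flows are spread over the rest:
print(pick_plane(0))  # → plane-a
print(pick_plane(1))  # → plane-c
```

Because each plane is isolated, a fault on one plane narrows the choice set rather than stalling the whole fabric, which is the property Oracle credits for fewer checkpoint restarts.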

The architecture also features power-efficient optics and is “hyper-optimized” for density, as its clusters are located in large data center campuses within a two-kilometer radius, Oracle said.

“The highly-scalable custom design maximizes fabric-wide performance at gigawatt scale while keeping most of the power focused on compute,” said Peter Hoeschele, VP for infrastructure and industrial compute at OpenAI.

OCI is now taking orders for OCI Zettascale10, which will be available in the second half of 2026. The company plans to offer multi-gigawatt deployments, initially targeting those with up to 800,000 Nvidia GPUs.

But is it really necessary?

While this seems like an astronomical amount of compute, “there are customers for it,” particularly envelope-pushing companies like OpenAI, said Alvin Nguyen, a senior analyst at Forrester.

He pointed out that most AI models have been trained on text, which at this point essentially comprises “all of human-written history.” Now, though, systems are ingesting large and compute-heavy files including images, audio, and video. “Inferencing is expected to grow even bigger than the training steps,” he said.

And, ultimately, it does take a while for new AI factories/systems like OCI Zettascale10 to be produced at volume, he noted, which could lead to potential issues. “There is a concern in terms of what it means if [enterprises] don’t have enough supply,” said Nguyen. However, “a lot of it is unpredictable.”

Info-Tech’s Palanichamy agreed that concerns around large-scale GPU procurement are ever-present, but pointed to the Oracle-AMD partnership announced this week, aimed at achieving next-generation AI scalability.

“It is a promising next step for safeguarding and balancing extreme scale in GPU demand, alongside enabling energy efficiency for large-scale AI training and inference,” he said.

Advice to enterprises who can’t afford AI factories: ‘Get creative’

Nguyen pointed out that, while OpenAI is a big Oracle partner, the bulk of the cloud giant’s customers aren’t research labs; they’re everyday enterprises that don’t necessarily need the latest and greatest.

Their more modest requirements offer those customers an opportunity to identify other ways to improve performance and speed, such as by simply updating software stacks. It’s also a good time for them to analyze their supply chain and supply chain management capabilities.

“They should be making sure they’re very aware of their supply chain, vendors, partners, making sure they can get access to as much as they can,” Nguyen advised.

Not many companies can afford their own AI mega-factories, he pointed out, but they can take advantage of mega-factories owned and run by others. Look to partners, pursue other cloud options, and get creative, he said.

There is no doubt that, as with the digital divide, there is a growing “AI divide,” said Nguyen. “Not everyone is going to be Number One, but you don’t have to be. It’s being able to execute when that opportunity arises.”

