Chip designer Arm Holdings plc has announced it is joining the Open Compute Project to help address rising energy demands from AI-oriented data centers.
Arm said it plans to support companies in developing the next phase of purpose-built silicon and packaging for converged infrastructure. Building this next phase of infrastructure, the company said, requires co-designed capabilities across compute, acceleration, memory, storage, networking and beyond.
New converged AI data centers won’t be built like those that came before, with separate CPU, GPU, networking and memory. Instead, they will feature increased density through purpose-built in-package integration of multiple chiplets using 2.5D and 3D packaging technologies, according to Arm.
Arm is addressing this by contributing the Foundation Chiplet System Architecture (FCSA) specification to the Open Compute Project. FCSA builds on Arm’s ongoing work with the Arm Chiplet System Architecture (CSA), but addresses industry demand for a framework that is vendor- and CPU-architecture-neutral. Alongside the contribution, Arm is broadening its Arm Total Design ecosystem to power the next generation of converged data centers.
For OEM partners, the benefits are power efficiency and custom processor design, said Mohamed Awad, senior vice president and general manager of infrastructure business at Arm. “For anybody building a data center, the specific challenge that they’re running into is not really about the dollars associated with building, it’s about keeping up with the [power] demand,” he said.
Keeping up with that demand comes down to performance, and more specifically, performance per watt. With power limited, OEMs have become much more involved in all aspects of system design, rather than pulling silicon, servers, or racks off the shelf.
“They’re getting much more specific about what that silicon looks like, which is a big departure from where the data center was ten or 15 years ago. The point here being is that they look to create a more optimized system design to bring the acceleration closer to the compute, and get much better performance per watt,” said Awad.
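The performance-per-watt argument can be made concrete with a back-of-the-envelope calculation. The numbers below are invented for illustration; the point is only that under a fixed facility power budget, total deliverable performance scales directly with performance per watt.

```python
# Toy illustration (all numbers are hypothetical): with a fixed power
# budget, the design with better performance per watt delivers more
# total throughput from the same data center.
def perf_per_watt(throughput: float, watts: float) -> float:
    """Throughput delivered per watt consumed."""
    return throughput / watts

budget_watts = 1_000_000  # fixed facility power budget (made up)

off_the_shelf = perf_per_watt(throughput=100.0, watts=500.0)  # 0.2/W
co_designed = perf_per_watt(throughput=150.0, watts=500.0)    # 0.3/W

# Total deliverable performance under the same power cap:
print(budget_watts * off_the_shelf)  # 200000.0
print(budget_watts * co_designed)    # 300000.0
```

A 50% improvement in performance per watt translates one-for-one into 50% more compute from the same power-limited facility, which is why OEMs are optimizing at the silicon level rather than buying off the shelf.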
The Open Compute Project is a global industry organization dedicated to designing and sharing open-source hardware configurations for data center technologies and infrastructure. It covers everything from silicon products to rack and tray design. It is hosting its 2025 OCP Global Summit this week in San Jose, Calif.
Arm also was part of the Ethernet for Scale-Up Networking (ESUN) initiative announced this week at the Summit that included AMD, Arista, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, and Nvidia. ESUN promises to advance Ethernet networking technology to handle scale-up connectivity across accelerated AI infrastructures.
Arm’s goal in joining OCP is to encourage collaboration between companies and users, sharing ideas, specifications and intellectual property. The organization is known for focusing on modular rather than monolithic designs, which is where chiplets come in.
For example, multiple different companies might build a 64-core CPU for a customer, who then chooses the I/O to pair it with, such as PCIe or NVLink. The customer then chooses its own memory subsystem, deciding between HBM, LPDDR, or DDR. It’s all mix and match, like Legos, Awad said.
“What this model allows for is the sort of selection of those components in a differentiation where it makes sense without having to redo all the other aspects of the system, which are effectively common across multiple different designs,” said Awad.
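The mix-and-match model Awad describes can be sketched as a simple composition of interchangeable parts. This is a minimal toy model, not anything from the FCSA specification; all names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chiplet:
    """One interchangeable part in a hypothetical in-package design."""
    name: str
    kind: str  # "compute", "io", or "memory"

def compose_package(compute: Chiplet, io: Chiplet, memory: Chiplet) -> list[str]:
    """Assemble one package from interchangeable chiplets,
    checking that each part plays the role it claims."""
    for part, expected in ((compute, "compute"), (io, "io"), (memory, "memory")):
        if part.kind != expected:
            raise ValueError(f"{part.name} is not a {expected} chiplet")
    return [compute.name, io.name, memory.name]

# The same 64-core compute die reused across two different designs,
# differentiated only in the I/O and memory choices:
cpu = Chiplet("64-core CPU", "compute")
design_a = compose_package(cpu, Chiplet("PCIe", "io"), Chiplet("HBM", "memory"))
design_b = compose_package(cpu, Chiplet("NVLink", "io"), Chiplet("LPDDR", "memory"))
print(design_a)  # ['64-core CPU', 'PCIe', 'HBM']
print(design_b)  # ['64-core CPU', 'NVLink', 'LPDDR']
```

The design choice the quote points at is visible here: the compute die is defined once and reused unchanged, while only the components that differentiate a design are swapped.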