AMD CDNA3 Architecture Sees the Inevitable Fusion of Compute Units and x86 CPU at Massive Scale

Compute accelerators are another big business for AMD, and the company's roadmap shows CDNA3 arriving on 5 nm in 2023.

AMD, in its 2022 Financial Analyst Day presentation, unveiled its next-generation CDNA3 compute architecture, which delivers something we've been expecting for a while: a compute accelerator that combines a large number of compute units for parallel processing with a large number of x86-64 CPU cores, based on a future "Zen" microarchitecture, on a single package. The on-package CPU cores would eliminate the need for an EPYC or Xeon processor at the system's head, so clusters of Instinct CDNA3 processors could run on their own, without a separate host CPU and its system memory.

The Instinct CDNA3 processor will use advanced packaging technology that brings various IP blocks together as chiplets, each built on the node most economical for it, without compromising its function. The package features stacked HBM, and this memory is shared not just by the compute units and x86 cores, but also forms part of large memory pools shared across packages. 4th Generation Infinity Fabric ties it all together.
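The practical payoff of a single coherent memory pool is that software no longer needs to shuttle buffers between host and device. As a rough illustration only (CDNA3's actual programming model is HIP/ROCm, and this sketch uses CUDA's managed-memory API as an analogue, not AMD's implementation):

```cuda
// Conceptual sketch: one coherent allocation visible to both CPU and GPU,
// so no explicit host<->device memcpy is needed. Requires a CUDA-capable GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data;
    // A single managed allocation, accessible from both processors --
    // the analogue of the shared HBM pool described above.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU reads/writes the same buffer
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);  // CPU reads the GPU's result directly
    cudaFree(data);
    return 0;
}
```

With discrete accelerators, the two `cudaMemcpy` calls this sketch omits are often a dominant cost; a unified architecture removes them by construction.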

AMD is claiming a 5x (400% increase) AI compute performance-per-watt uplift over CDNA2, thanks to the combination of a 5 nm process for the compute dies, advanced 3D chiplet packaging, 4th Gen Infinity Fabric, new math formats, Infinity Cache on the compute dies, and a unified memory architecture. The company is targeting a 2023 debut for CDNA3.

Source: TPU

