Non-binary DDR5 is finally coming to save your wallet

We’re all used to dealing with system memory in neat multiples of eight. As capacity goes up, it follows a predictable binary scale, doubling from 8GB to 16GB to 32GB and so on. But with the introduction of DDR5 and non-binary memory in the datacenter, all of that is changing.

Instead of jumping straight from a 32GB DIMM to a 64GB one, DDR5, for the first time, allows for half steps in memory density. You can now have DIMMs with 24GB, 48GB, 96GB, or more in capacity.

The added flexibility offered by these DIMMs could end up driving down system costs, as customers are no longer forced to buy more memory than they need just to keep their workloads happy.

What the heck is non-binary memory?

Non-binary memory isn’t actually all that special. What sets it apart from standard DDR5 comes down to the chips used to make the DIMMs.

Instead of the 16Gb — that’s gigabit — chips found on most DDR5 memory today, non-binary DIMMs use 24Gb DRAM chips. Take 20 of those chips and bake them onto a DIMM, and you’re left with 48GB of usable memory after you account for ECC and metadata storage.
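
For the curious, here’s a minimal back-of-the-envelope sketch of that arithmetic in Python. It assumes the common DDR5 RDIMM arrangement of 20 x4 chips per rank, 16 of which carry data while the remaining four handle ECC and metadata; that split is our simplification, not a Micron spec.

    # Sketch: how 24Gb chips add up to a 48GB non-binary DIMM.
    # Assumes 20 x4 chips per rank, of which 16 hold data and 4 hold
    # ECC/metadata -- a simplification for illustration, not a spec.
    CHIP_DENSITY_GBIT = 24
    chips_total = 20
    chips_data = 16

    raw_gb = chips_total * CHIP_DENSITY_GBIT / 8     # 60 GB soldered onto the DIMM
    usable_gb = chips_data * CHIP_DENSITY_GBIT / 8   # 48 GB visible to the OS
    print(f"raw: {raw_gb:.0f} GB, usable: {usable_gb:.0f} GB")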

According to Brian Drake, senior business development manager at Micron, you can usually get to around 96GB of memory on a DIMM before you’re forced to resort to advanced packaging techniques.

Using through-silicon via (TSV) or dual-die packaging, DRAM vendors can achieve much higher densities. Using Samsung’s eight-layer TSV process, for example, the chipmaker could achieve densities as high as 24GB per DRAM package for 768GB per DIMM.
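
To see where a figure like that could come from, here’s a rough sketch assuming eight 24Gb dies per TSV stack and 32 data-carrying packages on the DIMM. Those package counts are inferred from the numbers above rather than taken from Samsung’s datasheets.

    # Rough sketch of where 768GB per DIMM could come from with 8-high TSV stacks.
    # Assumes eight 24Gb dies per stacked package and 32 data-carrying packages
    # per DIMM; inferred from the figures quoted above, not a Samsung datasheet.
    die_gbit = 24
    dies_per_stack = 8
    stack_gb = die_gbit * dies_per_stack / 8   # 24 GB per stacked package
    data_packages = 32
    dimm_gb = stack_gb * data_packages         # 768 GB per DIMM
    print(f"{stack_gb:.0f} GB per package, {dimm_gb:.0f} GB per DIMM")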

So far, all of the major memory vendors, including Samsung, SK hynix, and Micron, have announced 24Gb chips for use in non-binary DIMMs.

The cost problem

Arguably the biggest selling point for non-binary memory comes down to cost and flexibility.

“For a typical datacenter, cost of memory is significant and can be even higher than cost of compute,” CCS Insights analyst Wayne Lam told The Register.

As our sister site The Next Platform reported earlier this year, memory can account for as much as 14 percent of a server’s cost. And in the cloud, some industry pundits put that number closer to 50 percent.

“Doubling of DRAM capacity — 32GB to 64GB to 128GB — now produces large steps in cost. The cost per bit is fairly constant, therefore, if you keep doubling, the cost increments become prohibitively expensive,” Lam explained. “Going from 32GB to 48GB to 64GB and 96GB offers gentler price increments.”
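
Lam’s point is easy to see with a quick sketch. The $4/GB figure below is purely illustrative; the takeaway is that with a roughly constant cost per bit, each binary doubling doubles the size of the upgrade bill, while half steps keep the increments smaller.

    # Illustration of Lam's point using a made-up, roughly constant cost per GB.
    # The $4/GB figure is purely illustrative, not a market price.
    cost_per_gb = 4
    for name, steps in (("binary", [32, 64, 128]), ("non-binary", [32, 48, 64, 96])):
        increments = [(b - a) * cost_per_gb for a, b in zip(steps, steps[1:])]
        print(f"{name} upgrade steps ($ per DIMM): {increments}")
    # binary jumps cost $128 then $256; non-binary steps stay at $64 to $128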

Take this thought experiment as an example:

Say your workload benefits from having 3GB per thread. Using a 96-core AMD Epyc 4-based system with one DIMM per channel, you’d need at least 576GB of memory. However, 32GB DIMMs would leave you 192GB short, while 64GB DIMMs would leave you with just as much in surplus. You could drop down to 10 channels and get closer to your target, but then you’re going to take a hit to memory bandwidth and pay extra for the privilege. And the problem only gets worse as you scale up.
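
Here’s that thought experiment worked through as a quick Python sketch, assuming a 96-core part with two threads per core and 12 memory channels populated with one DIMM each, which is how the 576GB target falls out.

    # The thought experiment above, worked through. Thread and channel counts
    # assume a 96-core, 192-thread Epyc with 12 memory channels and one DIMM
    # per channel -- assumptions based on the article's figures.
    threads = 96 * 2            # 96 cores, two threads per core
    target_gb = threads * 3     # 3GB per thread -> 576GB
    channels = 12               # one DIMM per channel
    for dimm_gb in (32, 48, 64):
        total = dimm_gb * channels
        print(f"{dimm_gb}GB DIMMs: {total}GB total ({total - target_gb:+d}GB vs target)")
    # 32GB DIMMs come up 192GB short, 64GB DIMMs overshoot by 192GB,
    # while 48GB DIMMs land exactly on the 576GB target.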

In a two-DIMM-per-channel configuration — something we’ll note AMD doesn’t support on Epyc 4 at launch — you could use mixed-capacity DIMMs to zero in on the right memory-to-core ratio, but as Drake points out, this isn’t a perfect solution.

“Maybe the system has to down-clock that two-DIMM-per-channel solution, so it can’t run the maximum data rate. Or maybe there’s a performance implication of having uneven ranks in each channel,” he said.

By comparison, 48GB DIMMs will almost certainly cost less, while allowing you to hit your ideal memory-to-core ratio without sacrificing bandwidth. And as we’ve mentioned in the past, memory bandwidth matters a lot, as chipmakers continue to push the core counts of their chips ever higher.

The calculus is going to look different depending on your needs, but at the end of the day, non-binary memory offers greater flexibility for balancing cost, capacity, and bandwidth.

And there aren’t really any downsides to using non-binary DIMMs, Drake said, adding that, in certain situations, they may actually perform better.

What about CXL?

Of course, non-binary memory isn’t the only way to get around the memory-to-core ratio problem.

“Technologies such as non-binary capacities are helpful, but so is the move to CXL memory — shared system memory — and on-chip high-bandwidth memory,” Lam said.

With the launch of AMD’s Epyc 4 CPUs this fall and Intel’s upcoming Sapphire Rapids processors next month, customers will soon have another option for adding memory capacity and bandwidth to their systems. Samsung and Astera Labs have both shown off memory-expansion modules, and Marvell plans to offer controllers for similar products in the future.

Still, these are less an alternative to non-binary memory and more of a complement to it. In fact, Astera Labs’ expansion modules should work just fine with 48GB, 96GB, or larger non-binary DIMMs. ®


