
Intel teams with Micron on next-gen many-core Xeon Phi with 3D DRAM

Introduces new 'fundamental building block of HPC systems' with Intel Omni Scale Fabric

Intel has released more details about its future "Knights Landing" Xeon Phi many-core processor, including a new high-speed interconnect tech called Intel Omni Scale Fabric and up to 16GB of on-package DRAM based on Micron's Gen2 Hybrid Memory Cube (HMC) tech.

Intel presentation slide – what's new in Knights Landing: Intel's Silvermont Atom processor microarchitecture makes a bid for HPC

"Intel is re-architecting the fundamental building block of HPC systems by integrating the Intel Omni Scale Fabric into Knights Landing, marking a significant inflection and milestone for the HPC industry," Intel VP and GM for workstations and high-performance computing (HPC) Charles Wuischpard said in a statement. "Knights Landing will be the first true many-core processor to address today's memory and I/O performance challenges."

The new fabric will be used not only in the Knights Landing processor – which Intel says will come equipped with "more than 60 HPC-enhanced Silvermont architecture-based cores" and which is scheduled to begin appearing in HPC systems in late 2015 – but will also be incorporated into "future 14nm Intel Xeon processors."

Intel says that the Omni Scale Fabric is based on its own "in-house innovations," as well as IP acquired from Cray and QLogic. "Additionally," Chipzilla reports, "traditional electrical transceivers in the director switches in today's fabrics will be replaced by Intel Silicon Photonics-based solutions, enabling increased port density, simplified cabling and reduced costs."

Customers currently using Intel's True Scale Fabric InfiniBand tech will be happy to know, the company says, that applications written for the current fabric will be compatible with the upcoming Omni Scale fabric, and that Intel will "offer a program" to grease the upgrade path when Omni Scale becomes available.

Reg readers who have been following the development of Intel's "Knights" series of many-core processors will remember that they morphed from the abandoned graphics-focused Larrabee project into the Knights Ferry "development platform" in 2010, and were first commercially released as the Pentium core–based Knights Corner in 2012.

The chips were originally called "Many Integrated Core" processors, known by the acronym MIC. That appellation was dropped in 2012, however, when Intel decided to rebrand them as the "Xeon Phi" – perhaps because even Intel wasn't consistent on whether to pronounce that acronym "Mick" or "Mike".

When Intel first discussed Knights Landing in November of last year, it revealed that the chip would be available both as a coprocessor/accelerator on a PCIe card – as is the current "Knights Corner" Xeon Phi – and as a socketable, bootable CPU. Monday's announcement reiterated that promise.

During that same November announcement, Intel revealed that Knights Landing would include memory alongside the many-core chip, sharing the same package. Monday's news added that said memory was developed in conjunction with Micron, and will be based on that company's Gen2 Hybrid Memory Cube tech.

"Micron and Intel have actually been working on a memory cube for quite awhile," Micron's HMC technology strategist Mike Black told The Reg. "We actually demonstrated a technology platform back at IDF 2011 where we had our first technology demonstration around HMC."

Having memory in close proximity to the CPU cores not only boosts data bandwidth – by "more than an order of magnitude," Black said – but also cuts energy consumption, because the memory partitions are accessed by means of short vertical interconnects known as through-silicon vias, aka TSVs.

Micron Hybrid Memory Cube slide: You want high DRAM throughput? Micron's 3D Hybrid Memory Cube can oblige

TSVs have proven difficult to manufacture, not least because it's no easy feat to keep them precisely and evenly columnar from bottom to top. According to Black, however, "We've been working on through-silicon vias for over 10 years now, and there's been tremendous advancements in the last three to four years." He acknowledged that there were, indeed, many "challenges" in the early days of TSV development, but said that "most companies now are showing TSVs that are pretty solid."

The 3D memory stacks, which Black dubs "high performance on-package memory," consist of a logic layer based on an IBM 32-nanometer logic process, on top of which are placed four or eight memory arrays baked in a 30nm process. Each of those memory layers provides four gigabits of DRAM, giving a stack a total capacity of 2GB or 4GB.
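
The arithmetic is easy to check: four layers at 4Gb apiece is 16Gb, or 2GB; eight layers is 32Gb, or 4GB. A back-of-the-envelope sketch – in Python, purely for illustration, using only the per-layer figure quoted above:

    # Back-of-the-envelope HMC stack capacity, using only the quoted
    # per-layer figure: 4Gb of DRAM per memory layer.
    GBITS_PER_LAYER = 4

    for layers in (4, 8):
        gbits = layers * GBITS_PER_LAYER
        gbytes = gbits // 8  # eight bits to the byte
        print(f"{layers}-layer stack: {gbits}Gb = {gbytes}GB")

    # 4-layer stack: 16Gb = 2GB
    # 8-layer stack: 32Gb = 4GB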

The eight-layer stack is the current limit for 3D memory stacking, he told us; taller stacks will require additional development work. But he noted that "our customers will take as much as we can possibly give them" in order to cram more memory into such a small footprint.

Having the memory logic in the base layer has an additional benefit. "Because we have the logic there," Black said, "we're able to provide advanced reparability, resiliency; we can adjust the memory stack itself on the fly if an event were to happen down the road where a bit were to get weak and fail."

Intel provided no pricing for Knights Landing in any of its socketable or PCIe implementations, but from what Black told us about HMC, the addition of 3D memory as in-package DRAM shouldn't unduly drive up the price of the new many-core parts when they begin to appear next year.

"The HMC, in terms of looking at the total cost of ownership," he said, "is actually at a lower cost from a memory-implementation standpoint than it would be to try to do that in existing memory platforms." By "that," Black was referring to delivering the exceptional memory bandwidth that the high performance on-package memory can deliver: up to 15 times that of DDR3 and five times that of DDR4. ®
