
DRAM, it stacks up: SK hynix rolls out 819GB/sec HBM3 tech


Korean DRAM fabber SK hynix has developed an HBM3 DRAM chip operating at 819GB/sec.

HBM3 (High Bandwidth Memory 3) is the third generation of the HBM architecture, which stacks DRAM dies one above another and connects them, through vertical current-carrying holes called Through-Silicon Vias (TSVs) and connecting micro-bumps, to a base interposer board. A processor fastened to the same interposer accesses the data in the DRAM stack faster than it could through the traditional CPU socket interface.

Seon-yong Cha, SK hynix’s senior vice president for DRAM development, said: “Since its launch of the world’s first HBM DRAM, SK hynix has succeeded in developing the industry’s first HBM3 after leading the HBM2E market. We will continue our efforts to solidify our leadership in the premium memory market.”


Schematic diagram of high bandwidth memory

The previous generations were HBM, HBM2 and HBM2E (Enhanced or Extended), with JEDEC developing standards for each. JEDEC has not yet published an HBM3 standard, which means that SK hynix might need to retrofit its design to a future and faster HBM3 standard.

HBM memory speeds. The rightmost column is a possible future HBM3 standard and the empty column is our guesstimated SK hynix HBM3 I/O speed.


The 819GB/sec speed is a 78 per cent increase on the firm’s HBM2E chip speed of 460GB/sec. SK hynix used 8 x 16Gbit layers in its 16GB HBM2E chip. The HBM3 chip comes in 24GB and 16GB capacities with the 24GB chip having a 12-layer stack.
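As a rough sanity check on those numbers, the sketch below reproduces the 78 per cent uplift, the implied per-pin signalling rate, and the 24GB capacity of the 12-layer stack. The 1024-bit per-stack interface width is our assumption, carried over from earlier HBM generations rather than taken from SK hynix's announcement.

```python
# Back-of-the-envelope check of the figures quoted above.
# The 1024-bit per-stack bus width is an assumption, not a figure from the article.

HBM2E_BW_GBPS = 460      # GB/s, SK hynix HBM2E, per the article
HBM3_BW_GBPS = 819       # GB/s, SK hynix HBM3, per the article
BUS_WIDTH_BITS = 1024    # assumed per-stack interface width

increase_pct = (HBM3_BW_GBPS / HBM2E_BW_GBPS - 1) * 100  # generation-on-generation uplift
per_pin_gbps = HBM3_BW_GBPS * 8 / BUS_WIDTH_BITS         # implied per-pin signalling rate
stack_capacity_gb = 12 * 16 / 8                          # 12 layers x 16Gbit dies, in GB

print(f"Bandwidth increase: {increase_pct:.0f}%")              # ~78%
print(f"Implied per-pin rate: {per_pin_gbps:.1f} Gbit/s")      # ~6.4 Gbit/s
print(f"12-layer stack capacity: {stack_capacity_gb:.0f} GB")  # 24 GB
```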

The company says its engineers ground the DRAM dies down to a height of approximately 30 micrometres (μm, 10⁻⁶ m), equivalent to a third of an A4 sheet’s thickness, before vertically stacking up to 12 of them using TSV technology.
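For scale, a quick check of that claim, assuming plain A4 copy paper is roughly 0.1mm thick (our figure, not SK hynix's):

```python
# Rough check of the die-thinning claim.
A4_PAPER_UM = 100    # assumed thickness of an A4 sheet, in micrometres
DIE_HEIGHT_UM = 30   # thinned DRAM die height, per the article
LAYERS = 12          # maximum dies in an HBM3 stack

print(DIE_HEIGHT_UM / A4_PAPER_UM)  # ~0.3, i.e. about a third of a sheet
print(DIE_HEIGHT_UM * LAYERS)       # 360um of die silicon in a 12-high stack, before bumps and base die
```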


Underside (Interposer side) of the SK hynix HBM3 chip

Producing an HBM3 chip is, so to speak, only half of what needs to be done: the memory has to be fixed to an interposer-processor combo, and that combo has to be built to accommodate the memory component.

Building an HBM-interposer-processor combo will generally only be done for applications that need more memory capacity and speed than that provided by industry-standard server CPUs and their socket scheme. That means supercomputers, HPC systems, GPU servers, AI systems and the like, where the expense and specialisation (restricted market) are worthwhile.

We might expect systems using SK hynix’s HBM3 to appear after mid-2022 and in 2023. ®


Source: https://go.theregister.com/feed/www.theregister.com/2021/10/20/sk_hynix_hbm3/
