SK hynix, the world’s second-largest memory chip provider, said Monday it has developed DRAM with the industry’s highest bandwidth for use in machine learning, supercomputers and artificial intelligence.
Unlike conventional DRAM, high bandwidth memory (HBM) chips are closely interconnected with processors such as graphics processing units and logic chips, allowing faster data transfers. HBM is emerging as an optimal memory solution for high-end GPUs, supercomputers, machine learning and AI platforms.
Compared with its predecessor HBM2, the new HBM2E delivers 50 percent higher bandwidth and performance.
It supports more than 460 gigabytes of data per second, based on a 3.6-gigabit-per-second transfer rate per pin across 1,024 data inputs and outputs.
The company applied “through silicon via” technology, known as TSV, to stack eight 16-gigabit chips vertically into a single, dense package with 16 gigabytes of data capacity.
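The bandwidth and capacity figures above follow directly from the per-pin speed, pin count and die stack the article describes; a minimal arithmetic sketch (the variable names are illustrative, not from SK hynix):

```python
# Back-of-the-envelope check of the HBM2E figures cited above.
pin_speed_gbps = 3.6   # transfer rate per pin, gigabits per second
io_pins = 1024         # data inputs/outputs of the HBM2E stack
dies = 8               # stacked chips
die_capacity_gbit = 16 # capacity per chip, gigabits

# Aggregate bandwidth: bits per second across all pins, then bits -> bytes.
bandwidth_gbytes = pin_speed_gbps * io_pins / 8
print(f"{bandwidth_gbytes:.1f} GB/s")  # 460.8 GB/s, the "over 460 GB" claim

# Package capacity: eight 16-gigabit dies, gigabits -> gigabytes.
capacity_gbytes = dies * die_capacity_gbit / 8
print(f"{capacity_gbytes:.0f} GB")  # 16 GB, as stated
```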
SK hynix introduced the world’s first HBM in 2013. It plans to start mass production of the latest HBM2E in 2020.
“When the market opens up, the company will be able to strengthen its leadership in the new HBM market,” said Chun Jun-hyun, head of HBM business strategy.
By Song Su-hyun (firstname.lastname@example.org)