SK hynix starts SOCAMM2 mass production for Nvidia

Story by ANI | Posted by Ashhar Alam | Date 20-04-2026

Seoul (South Korea)

SK hynix announced on Monday that it has begun mass production of SOCAMM2, a next-generation memory module designed to boost performance and power efficiency in artificial intelligence servers.

This new Small Outline Compression Attached Memory Module 2 is optimized for Nvidia's upcoming Vera Rubin platform, according to a report by The Korea Herald, signaling a deeper level of technical collaboration between the two companies.

The SOCAMM2 module integrates 192 gigabytes of memory using sixth-generation 10-nanometer LPDDR5X DRAM. While traditional server modules typically rely on standard DDR5, this specific design vertically stacks LPDDR chips to improve energy efficiency while maintaining the high performance required for modern AI workloads.

"We expect SOCAMM2 to fundamentally address memory bottlenecks in training and inference for large language models with hundreds of billions of parameters, significantly accelerating overall system performance," the report quoted SK hynix as saying.

Data provided by the company indicated that SOCAMM2 delivers more than twice the bandwidth and over 75 per cent greater power efficiency when compared with conventional DDR5 RDIMM modules. This makes the hardware suitable for high-performance AI tasks. Data transfer speeds reached 9.6 gigabits per second, an increase from the 8.5 Gbps recorded in the previous SOCAMM1 generation. The module also features a higher number of input and output pins to enhance total data throughput.
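The quoted figures can be sanity-checked with a quick back-of-envelope calculation: the per-pin speed increase from 8.5 Gbps to 9.6 Gbps alone accounts for only a modest gain, which is why the larger pin count matters for total throughput. A minimal sketch of that arithmetic (the figures come from the article; the scaling assumption, that module bandwidth is proportional to per-pin speed times I/O pin count, is a simplification):

```python
# Back-of-envelope check of the figures quoted in the article.
# Simplifying assumption: total module bandwidth scales roughly as
# (per-pin transfer rate) x (number of I/O pins).

socamm1_gbps = 8.5   # per-pin transfer rate, previous generation (from article)
socamm2_gbps = 9.6   # per-pin transfer rate, SOCAMM2 (from article)

per_pin_gain = socamm2_gbps / socamm1_gbps  # speed gain from faster pins alone

print(f"Per-pin speed gain: {per_pin_gain:.2f}x")
# A ~1.13x per-pin gain cannot explain the claimed >2x bandwidth over DDR5
# RDIMM by itself; the wider I/O (more pins) supplies the rest.
```

This is why the article notes the higher pin count separately: faster signaling and a wider interface multiply together to reach the headline bandwidth figure.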

These technical improvements are expected to reduce the total cost of ownership for hyperscale data center operators. In these environments, investment decisions depend on rack-level performance, power consumption, and cooling requirements rather than just the cost of individual components.

The report noted that while SOCAMM does not reach the ultra-high bandwidth levels of High Bandwidth Memory, its architecture allows for a simpler manufacturing process and higher yields, which provides a cost advantage on a per-capacity basis.

"In this hierarchy, SOCAMM serves as an intermediate layer, handling frequently accessed 'hot' data and buffering workloads between HBM and system memory to reduce bottlenecks," the report quoted an industry official.
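The tiering idea the official describes, routing data by how frequently it is accessed, can be illustrated with a toy placement function. This is purely an illustrative sketch, not SK hynix or Nvidia software; the thresholds and tier names are hypothetical:

```python
# Illustrative sketch of frequency-based data placement across a three-tier
# memory hierarchy, as described in the article. Thresholds are hypothetical.

def place_tier(accesses_per_sec: float) -> str:
    """Pick a memory tier for a buffer based on how 'hot' it is."""
    if accesses_per_sec > 1_000_000:
        return "HBM"                  # ultra-high bandwidth, smallest capacity
    if accesses_per_sec > 10_000:
        return "SOCAMM"               # intermediate tier for hot data
    return "DDR5 system memory"       # bulk capacity for colder data

print(place_tier(5_000_000))  # very hot data lands in HBM
print(place_tier(50_000))     # warm data lands in SOCAMM
print(place_tier(100))        # cold data stays in system memory
```

In a real deployment the placement decision would be made by the system software or memory controller, not application code, but the principle of buffering hot data in an intermediate tier is the same.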


The modular form factor represents a shift from conventional LPDDR memory, which is usually soldered directly onto boards and cannot be replaced. This new design allows for more flexibility in system maintenance and design. SK hynix worked with Nvidia to tailor the module for the Vera Rubin platform, which is scheduled for launch in the second half of the year. The company also expects to supply its next-generation HBM4 memory for the same platform.

"The 192GB SOCAMM2 sets a new benchmark for AI memory performance. We will strengthen our position as a trusted AI memory solutions provider through close collaboration with global AI customers," the report quoted Kim Ju-seon, President and Head of AI Infra at SK hynix, as saying.