Samsung Low-power, High-speed AI chips in 2024

In 2024, Samsung will launch low-latency wide (LLW) DRAMs, designed to improve the power efficiency of artificial intelligence applications by about 70% compared with regular DRAMs.

LLW DRAMs will become its flagship next-generation chips and be embedded in AI devices such as extended-reality headsets. Samsung aims to beef up AI chip foundry sales to about 50% of its total foundry sales within five years.

Samsung’s LLW DRAM is a low-power memory featuring wide I/O and low latency, and it boasts a bandwidth of 128 GB/s, presumably per module.

The new AI chips will enhance data processing speed and capacity by increasing the number of input/output (I/O) terminals in the memory interface compared with existing DRAMs.
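As a rough illustration of why more I/O terminals mean more bandwidth: peak bandwidth is simply bus width times transfer rate. The interface width and data rate below are assumed figures for the sketch, not Samsung’s published LLW specifications.

```python
def peak_bandwidth_gbps(io_lines: int, transfer_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s.

    io_lines: number of data I/O terminals, i.e. bus width in bits (assumed figure)
    transfer_rate_gtps: giga-transfers per second per line (assumed figure)
    """
    # (bus width in bytes) x (transfers per second) = bytes per second
    return (io_lines / 8) * transfer_rate_gtps

# A hypothetical 512-bit wide interface at 2 GT/s reaches the quoted 128 GB/s,
# where a conventional 64-bit interface at the same rate delivers only 16 GB/s.
print(peak_bandwidth_gbps(512, 2.0))  # 128.0
print(peak_bandwidth_gbps(64, 2.0))   # 16.0
```

The point of the sketch: widening the bus multiplies bandwidth without raising the per-line clock, which is also why wide I/O tends to be more power-efficient than chasing higher signaling rates.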

3D PACKAGING

In 2024, Samsung will unveil an advanced three-dimensional (3D) chip packaging technology, including the most advanced 3.5D packaging.

3 NM PROCESS

Samsung will improve 3-nanometer chip processing technology, currently the industry’s smallest and most advanced process node, to be suitable for AI applications.

In 2023, Samsung began mass production of 3 nm chips for fabless clients, a global first, ahead of Taiwan’s TSMC.

In 2023, Samsung raised the production yields of its first-generation 3 nm process technology to what it describes as a perfect level.

Samsung plans to improve memory performance by 2.2 times every two years.
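Taken at face value, a 2.2x gain every two years compounds quickly. A small sketch (the ten-year horizon is an arbitrary assumption for illustration):

```python
def performance_multiplier(years: float, gain: float = 2.2, period: float = 2.0) -> float:
    """Cumulative performance multiplier after `years`, assuming `gain`x
    improvement every `period` years, compounded."""
    return gain ** (years / period)

# After one full period the multiplier is the stated 2.2x;
# after a decade (five periods) it is 2.2**5, roughly 51.5x.
print(round(performance_multiplier(2), 2))   # 2.2
print(round(performance_multiplier(10), 1))  # 51.5
```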

GDP STRATEGY

Samsung has adopted a new strategy combining GAA, DRAM, and packaging, abbreviated GDP.

GAA, short for gate-all-around, reduces the leakage current of processors with a circuit width of 3 nm or below. It is Samsung’s key architecture for developing next-generation DRAMs and packaging technology.

TESLA FSD and a New AI Chip

Samsung is working with Tesla to develop the EV maker’s next-generation Full Self-Driving (FSD) chips for Level-5 autonomous driving vehicles.

Samsung will develop a 4 nm AI accelerator, a high-performance processor used to handle AI workloads.

1 thought on “Samsung Low-power, High-speed AI chips in 2024”

  1. Are the power savings coming from the die shrink or the architecture? I’m not quite clear on this. I do know that if you take an IR camera to an NVIDIA 3080 or flagship 4080, the hotspot is the memory.
