Classification and barriers of in-memory computing
In the von Neumann architecture, computation (the CPU) and storage (memory) are separated, and moving data between the two incurs high latency and high energy consumption.
With the rise of memory-intensive and compute-intensive applications such as AI in recent years, this latency and energy cost has become an urgent problem to solve.
Strictly speaking, in-memory computing can be divided into two categories:
Processing using memory (PUM): relies mainly on circuit-level innovation, e.g., giving the memory array itself the ability to compute; the computation accuracy of this approach is currently limited.
Processing near memory (PNM): additional compute units are integrated close to the memory, e.g., the logic layer of 3D-stacked memory or logic in the memory controller.
In 2022, Samsung published in Nature the industry's first in-memory computing chip based on MRAM (Magnetic Random Access Memory). Using MRAM for in-memory computing is a significant step forward.
However, many factors still hinder PIM (processing in memory), for example:
Memory interleaving. Modern memory is interleaved across channels; with PIM (e.g., 3D-stacked memory), how should data be distributed so that each PIM unit operates on data local to it?
Programmability: how to program conveniently, and how to decide which operations run on the PIM units and which on the CPU (see the sketch after this list).
Cache coherence: data updated by PIM units may be stale in CPU caches, and vice versa.
The complexity of the manufacturing process.
…
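To make the programmability and data-distribution barriers more concrete, here is a minimal sketch of how a runtime might dispatch an operation to the CPU or to a PIM unit. It is not a real PIM API: the channel count, interleave granularity, arithmetic-intensity threshold, and the PimReduceSum stub are all assumptions for illustration; the stub simply runs on the CPU so the example stays runnable.

```cpp
// Sketch of a PIM-vs-CPU dispatch heuristic. Assumptions (not from the article):
// 4 channels, 256-byte interleave granularity, and "memory-bound" means fewer
// than 1 FLOP per byte moved. A PIM offload is only attempted when the operand
// is small enough to sit in a single channel, illustrating why data distribution
// under channel interleaving matters for PIM.
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

constexpr int       kNumChannels        = 4;    // assumed channel count
constexpr uint64_t  kInterleaveGranule  = 256;  // assumed interleave granularity (bytes)
constexpr double    kIntensityThreshold = 1.0;  // FLOPs/byte below this => memory-bound

// Channel an address maps to under simple channel interleaving.
int ChannelOf(uint64_t addr) {
  return static_cast<int>((addr / kInterleaveGranule) % kNumChannels);
}

// True if the whole buffer falls inside one interleave granule, so one PIM unit
// could process it without cross-channel traffic.
bool FitsInOneChannel(const void* p, size_t bytes) {
  uint64_t start = reinterpret_cast<uintptr_t>(p);
  return bytes <= kInterleaveGranule &&
         ChannelOf(start) == ChannelOf(start + bytes - 1);
}

// Hypothetical PIM path: stands in for an offloaded reduction; here it just
// computes on the CPU so the sketch remains runnable without PIM hardware.
double PimReduceSum(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0);
}

// Ordinary CPU path.
double CpuReduceSum(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0);
}

// Dispatch: memory-bound operations whose data is channel-local are PIM
// candidates; everything else stays on the CPU.
double ReduceSum(const std::vector<double>& v, double flops_per_byte) {
  const size_t bytes = v.size() * sizeof(double);
  if (flops_per_byte < kIntensityThreshold && FitsInOneChannel(v.data(), bytes)) {
    std::puts("offloading to PIM (memory-bound, channel-local)");
    return PimReduceSum(v);
  }
  std::puts("running on CPU");
  return CpuReduceSum(v);
}

int main() {
  std::vector<double> v(16, 1.0);                 // 128 bytes of data
  std::printf("sum = %f\n", ReduceSum(v, 0.125)); // ~0.125 FLOPs/byte: memory-bound
}
```

Even in this toy form, the heuristic shows why the barriers interact: the offload decision depends both on the operation's character (memory-bound or compute-bound) and on how the data happens to be spread across channels by the interleaving scheme.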
It is believed that these problems will be overcome in the near future.