Principles and advantages of Computing in Memory
The concept of Computing in Memory (CIM) dates back to the 1990s. In conventional architectures, data must be fetched from memory that sits outside the processing unit; this data movement often takes hundreds of times longer than the computation itself and accounts for roughly 60%-90% of the total energy consumed, leaving overall energy efficiency very low. This "storage wall" has become a major obstacle to data-intensive computing applications.
Computing in Memory can be understood as embedding compute capability directly inside the memory: two- and three-dimensional matrix multiply/accumulate operations are carried out by a new computing architecture, rather than by further optimizing traditional logic units or process nodes. This essentially eliminates the latency and power consumed by unnecessary data movement, can improve AI computing efficiency by a factor of hundreds, reduces cost, and breaks through the storage wall.
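To make the idea concrete, the sketch below models an idealized resistive-crossbar style in-memory matrix-vector multiply: weights stay resident in the array as cell conductances, inputs are applied as word-line voltages, and each bit line naturally sums its cell currents. This is a minimal illustration under idealized assumptions (array size, value ranges, perfectly linear cells are all assumed for illustration), not the implementation of any specific CIM chip.

```python
import numpy as np

# Idealized crossbar model: weights are stored as cell conductances G[i, j].
# Applying the input vector as word-line voltages v[i] makes each bit line j
# carry the current sum_i v[i] * G[i, j] (Kirchhoff's current law), i.e. one
# multiply-accumulate per cell, without ever reading the weights out of the array.

def crossbar_matvec(conductance: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Return the bit-line currents of an ideal crossbar: I_j = sum_i V_i * G_ij."""
    return voltages @ conductance

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(64, 16))   # 64x16 array of stored weights (conductances)
v = rng.uniform(0.0, 0.5, size=64)         # input activations encoded as voltages

i_out = crossbar_matvec(G, v)              # 16 column currents = 16 dot products in one step
print(i_out.shape)                          # (16,)
```

In this model every column of the array computes a full dot product in parallel, which is why a single memory array behaves like a large number of equivalent compute cores.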
By comparison, CPUs typically offer on the order of 10-100 computing cores and GPUs tens of thousands, while Computing in Memory can provide the equivalent of millions of computing cores.
Beyond AI computing, Computing in Memory technology can also be applied to in-sensor computing chips and brain-inspired (neuromorphic) chips, and it is positioned to become a mainstream chip architecture for big-data computing.
The core advantages of Computing in Memory technology include:
· Eliminating unnecessary data movement, cutting energy consumption to roughly 1/10~1/100 of conventional architectures (see the rough estimate after this list)
· Letting storage cells participate directly in logic and arithmetic operations, raising compute throughput (equivalent to adding computing cores on a large scale within the same chip area)
· Saving much of the chip area otherwise occupied by D flip-flops (registers)
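The 1/10~1/100 figure can be sanity-checked with a back-of-the-envelope calculation. The per-operation energies below are commonly cited ballpark numbers for off-chip DRAM access versus on-chip arithmetic; they are illustrative assumptions, not measurements of any particular CIM product.

```python
# Rough estimate of the energy saved by keeping weights inside the memory array.
# All energy values (picojoules per 32-bit operation) are assumed ballpark figures
# for illustration only.

DRAM_ACCESS_PJ = 640.0   # fetch one 32-bit word from off-chip DRAM
SRAM_ACCESS_PJ = 5.0     # fetch one 32-bit word from a small on-chip buffer
MAC_PJ = 4.0             # one 32-bit multiply-accumulate in digital logic

# Conventional flow: fetch weight and activation from DRAM, then compute.
conventional = 2 * DRAM_ACCESS_PJ + MAC_PJ
# In-memory flow: the weight never leaves the array; only the activation moves on-chip.
in_memory = SRAM_ACCESS_PJ + MAC_PJ

print(f"energy per MAC: {conventional:.0f} pJ vs {in_memory:.0f} pJ")
print(f"ratio ≈ {conventional / in_memory:.0f}x")   # on the order of 100x
```

Even with generous assumptions for the on-chip path, the dominant cost in the conventional flow is the off-chip access, which is exactly the term Computing in Memory removes.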
https://en.witmem.com/news/industry_news1/witmem_computing_in_memory1.html