Storage Computing Hardware
With the growth of major application fields of electronic information, such as cloud storage, the Internet of Things, consumer electronics, aerospace, earth resource information, scientific computing, medical imaging and life sciences, and military equipment, today's society has entered an era of big data and information explosion. Demand for ultra-high-performance computing that combines high speed, high bandwidth, large capacity, high density, low power consumption, and low cost is growing explosively.

A traditional computer adopts the von Neumann architecture, in which computing and storage are separated and handled by a central processing unit (CPU) and a memory, respectively. With the rapid development of microelectronics technology, the speed and capacity of both CPUs and memory have improved rapidly. However, because the speed of the bus that transfers data and instructions has improved only modestly, frequent data transfers between the CPU and memory have become the bottleneck of information processing, commonly known as the "storage wall" (memory wall).
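To make the bottleneck concrete, the sketch below is an illustrative C program, not part of the original article; the array size and timing method are arbitrary assumptions. It computes a dot product whose arithmetic is trivial: with only one multiply-add per pair of operands loaded, the run time is set by how fast data can cross the CPU-memory bus rather than by the CPU's compute rate.

```c
/* Illustrative sketch of a memory-bound kernel (assumed sizes, not from
 * the article): performance is limited by data movement over the bus,
 * not by arithmetic -- the essence of the "storage wall". */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)  /* 16M doubles per array, far larger than on-chip caches */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    for (size_t i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    clock_t t0 = clock();
    double sum = 0.0;
    /* One multiply-add per pair of elements loaded: every operand must
     * cross the CPU-memory bus, so bandwidth, not compute, sets the speed. */
    for (size_t i = 0; i < N; i++)
        sum += a[i] * b[i];
    clock_t t1 = clock();

    printf("dot product = %.1f, time = %.3f s\n",
           sum, (double)(t1 - t0) / CLOCKS_PER_SEC);
    free(a);
    free(b);
    return 0;
}
```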
As soon as the "storage wall" appeared, computer researchers began looking for ways to solve or mitigate it. The approach still used across the industry today is the memory hierarchy. Its core idea is to bridge the speed mismatch between the processor and the dynamic memory by inserting a series of cache memories between the two. Although the memory hierarchy reduces average memory access latency to some extent, it does not fundamentally eliminate the "storage wall" problem.
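The effect of the cache hierarchy can be seen with a simple access-pattern experiment. The C sketch below is a rough illustration under assumed typical cache sizes: the same matrix is summed twice, once with sequential accesses that the caches can service well, and once with strided accesses that largely defeat them. The timings differ sharply, yet neither run removes the underlying trips to DRAM.

```c
/* Illustrative cache-hierarchy experiment (matrix size is an assumption
 * chosen to exceed typical on-chip caches). */
#include <stdio.h>
#include <time.h>

#define DIM 2048  /* 2048 x 2048 doubles = 32 MB */

int main(void) {
    static double m[DIM][DIM];  /* zero-initialized, larger than the caches */
    double sum = 0.0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < DIM; i++)        /* sequential walk: cache-friendly */
        for (int j = 0; j < DIM; j++)
            sum += m[i][j];
    t1 = clock();
    printf("sequential walk: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < DIM; j++)        /* strided walk: cache-hostile */
        for (int i = 0; i < DIM; i++)
            sum += m[i][j];
    t1 = clock();
    printf("strided walk:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    printf("checksum: %.1f\n", sum);  /* keep the loops from being optimized away */
    return 0;
}
```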
At present, many scholars and institutions have begun to study computing-in-memory. The core idea is to integrate computing (processing) and storage functions on the same chip so that calculations are performed inside the memory itself, eliminating the need to move data back and forth between the processor and the memory.
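One common realization of this idea, analog in-memory matrix-vector multiplication on a resistive crossbar, can be sketched in software. The C model below is an assumed illustration, not any particular vendor's design: stored weights are treated as cell conductances that never leave the array, inputs are applied as row voltages, and each column "current" is a complete multiply-accumulate result produced where the data resides.

```c
/* Illustrative software model of in-memory matrix-vector multiplication
 * on a crossbar (dimensions and values are assumptions for illustration). */
#include <stdio.h>

#define ROWS 4   /* word lines (inputs)  */
#define COLS 3   /* bit lines  (outputs) */

/* "Programmed" cell conductances; in hardware these stay in the array
 * and are never read back to a CPU. */
static const double g[ROWS][COLS] = {
    {0.1, 0.2, 0.3},
    {0.4, 0.5, 0.6},
    {0.7, 0.8, 0.9},
    {1.0, 1.1, 1.2},
};

/* Each column output is the sum over rows of voltage * conductance:
 * in the analog array, Kirchhoff's current law performs the accumulation. */
static void crossbar_mvm(const double v[ROWS], double i_out[COLS]) {
    for (int c = 0; c < COLS; c++) {
        i_out[c] = 0.0;
        for (int r = 0; r < ROWS; r++)
            i_out[c] += v[r] * g[r][c];
    }
}

int main(void) {
    const double v[ROWS] = {1.0, 0.5, -1.0, 2.0};
    double i_out[COLS];
    crossbar_mvm(v, i_out);
    for (int c = 0; c < COLS; c++)
        printf("column %d output: %.2f\n", c, i_out[c]);
    return 0;
}
```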
Source: https://en.witmem.com/news/news/storage_computing_hardware.html