Computer Fundamentals Tutorial
Computer - Memory
What is computer memory?
A physical device that stores data or information temporarily or permanently is called memory. It is where the computer keeps the data it works with. In general, a computer has primary and secondary memory. Secondary (auxiliary) memory stores data and programs for the long term, or for as long as the user wants to keep them, while primary memory holds the instructions and data of programs as they execute; hence, any program or file currently running on a computer resides in primary memory.
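As a quick illustration of the difference, the following C sketch keeps a run counter in a variable (primary memory, lost when the program exits) and also writes it to a file (secondary storage, so it survives until the next run). The file name counter.txt is an arbitrary choice for this example.

```c
#include <stdio.h>

/* A value held only in a variable lives in primary memory (RAM) and is
 * gone when the program exits; writing it to a file puts it in secondary
 * storage, so it survives across runs. */
int main(void) {
    int runs = 0;                     /* lives in RAM only */

    /* Read the previous count back from disk, if the file exists. */
    FILE *f = fopen("counter.txt", "r");
    if (f) {
        fscanf(f, "%d", &runs);
        fclose(f);
    }

    runs++;
    printf("This program has run %d time(s).\n", runs);

    /* Persist the new count to secondary storage for the next run. */
    f = fopen("counter.txt", "w");
    if (f) {
        fprintf(f, "%d\n", runs);
        fclose(f);
    }
    return 0;
}
```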
Memory Classification
Computer memory comes in various types that serve different purposes −
-
Primary Memory (RAM - Random Access Memory) − RAM is volatile memory: it loses its contents when the machine is turned off. It stores the data that is actively being used. When a system boots, the operating system and the applications needed to run programs and files are loaded into RAM. RAM speeds up processing by giving the CPU fast access to data and instructions.
-
Secondary Memory (Storage) − Secondary memory is also known as permanent or non-volatile memory. It retains data when the machine shuts down, and files, programs, and the operating system are stored there permanently. HDDs, SSDs, USB flash drives, and optical discs are non-volatile storage devices.
-
Cache Memory − Cache memory is smaller and faster than RAM and is placed closer to the CPU. It holds frequently used data and instructions so that processing goes faster. Different cache levels, such as L1, L2, and L3, offer different speeds and capacities.
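As a rough illustration of why cache memory matters, the sketch below (plain C, assuming a POSIX system for clock_gettime) sums the same large matrix twice: row by row, which walks through memory sequentially and reuses cached blocks, and column by column, which jumps around and keeps missing the cache. The matrix size and the exact slowdown depend on the machine; compile with, for example, gcc -O1.

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define N 4096   /* 4096 x 4096 ints ~= 64 MB, larger than typical L3 caches */

static int matrix[N][N];

/* Return elapsed wall-clock time in seconds. */
static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    long long sum = 0;
    double t0, t1;

    /* Fill the matrix so the reads cannot be optimized away. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            matrix[i][j] = i + j;

    /* Row-major traversal: consecutive addresses, good cache locality. */
    t0 = seconds();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += matrix[i][j];
    t1 = seconds();
    printf("row-major:    %.3f s (sum=%lld)\n", t1 - t0, sum);

    /* Column-major traversal: strided accesses, poor cache locality. */
    sum = 0;
    t0 = seconds();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += matrix[i][j];
    t1 = seconds();
    printf("column-major: %.3f s (sum=%lld)\n", t1 - t0, sum);

    return 0;
}
```

On most machines the column-by-column pass is noticeably slower even though it reads exactly the same data, purely because of how the cache is used.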
The Levels of Cache Memory: L1, L2, and L3
CPU cache memory is divided into three "levels": L1, L2, and L3. The levels form a hierarchy by speed and, correspondingly, by cache size.
L1 Cache
Level 1 cache is a computer's fastest memory. The data the CPU accesses most frequently resides in the L1 cache, and its size is determined by the CPU. Some high-end consumer CPUs, such as the Intel i9-9980XE, have a 1MB L1 cache, but they are expensive and rare. Server chipsets such as Intel's Xeon line have 1-2MB of L1 cache. There is no "standard" amount, so before buying, check the CPU specifications to find the L1 cache size.
Source: [1]
The L1 cache normally has two sections: the instruction cache, which holds the instructions the CPU will execute, and the data cache, which holds the data those operations use.
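On Linux, one way to see these cache sizes on your own machine is to read the files the kernel publishes under /sys/devices/system/cpu/cpu0/cache/. The sketch below assumes a Linux system with sysfs mounted; on other operating systems, consult the CPU's specification sheet instead.

```c
#include <stdio.h>
#include <string.h>

/* Read one line from a sysfs file into out, stripping the newline. */
static int read_line(const char *path, char *out, int len) {
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;
    if (fgets(out, len, f) == NULL)
        out[0] = '\0';
    fclose(f);
    out[strcspn(out, "\n")] = '\0';
    return 1;
}

/* List the level, type, and size of each cache reported for CPU 0
 * under /sys/devices/system/cpu/cpu0/cache/ (Linux only). */
int main(void) {
    for (int idx = 0; ; idx++) {
        char path[128], level[8] = "?", type[32] = "?", size[32] = "?";

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
        if (!read_line(path, level, sizeof level))
            break;                          /* no more cache indexes */

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/type", idx);
        read_line(path, type, sizeof type); /* Data, Instruction, or Unified */

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/size", idx);
        read_line(path, size, sizeof size); /* e.g. "32K" or "16384K" */

        printf("L%-2s %-12s %s\n", level, type, size);
    }
    return 0;
}
```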
L2 Cache
Level 2 cache is larger but slower than L1. Modern L2 caches are measured in megabytes rather than kilobytes; AMD's top-rated Ryzen 5 5600X, for example, has 384KB of L1 cache, 3MB of L2 cache, and 32MB of L3 cache. The L2 cache size depends on the CPU but usually falls between 256KB and 32MB. A 256KB L2 cache is now considered small, and some of the most powerful current CPUs have more than 8MB of L2 cache. In terms of speed, the L2 cache is slower than the L1 cache but still much faster than system RAM: roughly speaking, L2 cache is about 25 times faster than RAM, while L1 cache is about 100 times faster.
L3 Cache
Level 3 cache is the next step down. The L3 memory cache was originally located on the motherboard, back when most CPUs were single-core. The L3 cache on top-end consumer CPUs can reach 32MB, AMD's groundbreaking Ryzen 7 5800X3D CPU has 96MB, and the CPUs in some servers have L3 caches of up to 128MB.
The largest and slowest cache level is L3. Modern CPUs have an on-chip L3 cache. Whereas the L1 and L2 caches serve each core individually, the L3 cache acts more like a shared memory pool for the whole chip. The following images illustrate the cache levels of a 2012 Intel Core i5-3570K CPU and a 2020 AMD Ryzen 5800X CPU; the cache figures appear in the bottom-right corner of the second image.
Source: [1]
Note how both CPUs have a split L1 cache and progressively larger L2 and L3 caches, and that the L3 cache on the AMD Ryzen 5800X is more than five times larger than on the Intel i5-3570K.
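These size and speed differences can be observed from software. The sketch below (C, assuming a POSIX system for clock_gettime) chases pointers at random through working sets of increasing size; the average time per access typically jumps each time the working set outgrows another cache level on the way from L1 to RAM. The sizes, iteration count, and use of rand() are arbitrary illustrative choices.

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Time random pointer-chasing over working sets of increasing size. */
static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    const size_t kib_sizes[] = {16, 64, 256, 1024, 4096, 16384, 65536};
    const long steps = 20 * 1000 * 1000;     /* accesses per measurement */

    for (size_t s = 0; s < sizeof kib_sizes / sizeof kib_sizes[0]; s++) {
        size_t n = kib_sizes[s] * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));
        if (!next)
            return 1;

        /* Sattolo's algorithm: a random permutation with a single cycle,
         * so the chase visits every element and defeats the prefetcher.
         * rand() is adequate for an illustration like this. */
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        size_t pos = 0;
        double t0 = seconds();
        for (long i = 0; i < steps; i++)
            pos = next[pos];                 /* each load depends on the last */
        double t1 = seconds();

        printf("%6zu KiB: %.2f ns per access (pos=%zu)\n",
               kib_sizes[s], (t1 - t0) / steps * 1e9, pos);
        free(next);
    }
    return 0;
}
```

On a typical desktop CPU the per-access time might step from roughly a nanosecond inside L1 to tens of nanoseconds once the working set no longer fits in L3, though the exact figures vary widely between processors.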
How Cache Memory Works
-
Hierarchy − Computers normally have several layers of cache memory: the L1, L2, and L3 caches. The L1 cache is the smallest and fastest and sits closest to the CPU; the L2 and L3 caches are larger and slower.
-
Cache Organization − Cache memory is organized into fixed-size blocks (also called lines), each holding a small chunk of data copied from main memory. The CPU transfers data to and from the cache in whole blocks, not individual bytes.
-
Cache Coherency − Cache coherency ensures that cached data matches the data in main memory. In a multi-core processor, cache coherence protocols update the other cores' caches when one core writes to a shared memory location.
-
Cache Replacement Policies − A cache replacement policy decides which block to evict when the cache is full and a new block is needed. Least Recently Used (LRU), First-In First-Out (FIFO), and random replacement are common policies.
-
Cache Access − The CPU checks the cache before reading or writing data. If the data is in the cache (a cache hit), the CPU can retrieve it quickly; if it is not (a cache miss), the CPU must fetch it from main memory, which takes considerably longer.
-
Cache Hierarchy − Modern processors contain L1, L2, and L3 caches that grow in both capacity and latency the farther they sit from the CPU cores. Splitting the L1 cache into separate instruction and data caches allows both to be accessed in parallel.
-
Cache Management − Good cache management maximizes hit rates and minimizes miss penalties. Prefetching, where the processor predicts upcoming memory accesses and loads the data into the cache in advance, improves cache performance. In short, cache memory buffers frequently accessed data between the CPU and main memory to speed up processing and increase overall system performance, and modern computer systems depend on effective cache management and organization for optimal performance.
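To make the ideas of blocks, indexing, hits, misses, and eviction concrete, here is a toy direct-mapped cache simulator in C. The block size, number of lines, and address trace are invented for illustration and are far smaller than anything in a real CPU.

```c
#include <stdio.h>
#include <stdbool.h>

/* A toy direct-mapped cache: 16 lines of 64-byte blocks (1 KiB total).
 * Every address maps to exactly one line; a different block that maps
 * to the same line evicts the previous occupant. */
#define BLOCK_SIZE 64
#define NUM_LINES  16

struct line {
    bool     valid;
    unsigned tag;
};

static struct line cache[NUM_LINES];
static int hits, misses;

static void access_addr(unsigned addr) {
    unsigned block = addr / BLOCK_SIZE;   /* strip the byte offset  */
    unsigned index = block % NUM_LINES;   /* which line it maps to  */
    unsigned tag   = block / NUM_LINES;   /* identifies the block   */

    if (cache[index].valid && cache[index].tag == tag) {
        hits++;
        printf("0x%04x  hit  (line %2u)\n", addr, index);
    } else {
        misses++;                         /* fetch block, evict old */
        cache[index].valid = true;
        cache[index].tag   = tag;
        printf("0x%04x  miss (line %2u)\n", addr, index);
    }
}

int main(void) {
    /* Sequential accesses within one block hit after the first miss;
     * 0x0000 and 0x0400 map to the same line, so they evict each other. */
    unsigned trace[] = {0x0000, 0x0004, 0x0040, 0x0400, 0x0000, 0x0404};
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        access_addr(trace[i]);

    printf("hits: %d, misses: %d\n", hits, misses);
    return 0;
}
```

Running it shows two addresses that map to the same line repeatedly evicting each other, which is exactly the kind of conflict that set-associative designs and replacement policies such as LRU are meant to reduce.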
Register Memory
Register memory, also called processor registers or simply "registers," is the smallest and fastest type of computer memory and is integrated directly into the CPU. Registers are small, fast storage units inside the CPU used to hold the data currently being processed and the instructions currently being executed. Registers serve several roles and come in several types −
-
Instruction Execution − Registers hold the instruction that the CPU is currently executing, including the operation code (opcode) and its associated operands.
-
Data Storage − Registers store data being processed by the CPU, including memory addresses, intermediate values produced during arithmetic or logical operations, and other data needed by the instructions being executed.
-
Addressing − Registers hold memory addresses used to store data to, or retrieve data from, locations in RAM or other parts of the computer's memory hierarchy.
-
Program Counter (PC) − Stores the memory address of the next instruction to be fetched and executed.
-
Instruction Register (IR) − Holds the current instruction being executed by the CPU.
-
Memory Address Register (MAR) − Stores the memory address of data being read from or written to memory.
-
Memory Data Register (MDR) − Contains the actual data being read from or written to memory.
-
General-Purpose Registers (GPRs) − Used for general data storage and manipulation during program execution.
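The way the PC, IR, and general-purpose registers cooperate is easiest to see in a fetch-decode-execute loop. The following toy C simulator uses an invented three-instruction machine (LOAD, ADD, HALT) purely for illustration; it does not correspond to any real instruction set.

```c
#include <stdio.h>
#include <stdint.h>

/* A toy CPU with a program counter (PC), an instruction register (IR),
 * and one general-purpose accumulator (ACC). Each 16-bit word holds an
 * opcode in its high byte and an operand address in its low byte. */
enum { LOAD = 1, ADD = 2, HALT = 3 };

int main(void) {
    uint16_t memory[8] = {
        (LOAD << 8) | 5,     /* ACC = mem[5]       */
        (ADD  << 8) | 6,     /* ACC = ACC + mem[6] */
        (HALT << 8),         /* stop               */
        0, 0,
        40,                  /* mem[5]: data       */
        2,                   /* mem[6]: data       */
        0
    };
    unsigned pc = 0, ir = 0, acc = 0;

    for (;;) {
        ir = memory[pc];              /* fetch: IR <- memory[PC]          */
        pc++;                         /* PC now points at the next word   */
        unsigned opcode  = ir >> 8;   /* decode                           */
        unsigned operand = ir & 0xFFu;

        if (opcode == LOAD)       acc = memory[operand];    /* execute */
        else if (opcode == ADD)   acc += memory[operand];
        else if (opcode == HALT)  break;

        printf("PC=%u IR=0x%04x ACC=%u\n", pc, ir, acc);
    }
    printf("final ACC = %u\n", acc);  /* 40 + 2 = 42 */
    return 0;
}
```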
Video Random-Access Memory (VRAM)
Video Random-Access Memory (VRAM) is a type of memory designed to work with video cards and graphics processing units (GPUs). It is a dedicated area of memory where graphics data such as images, frame buffers, and other graphics-related data is stored.
VRAM is designed to handle the fast, parallel processing demands of rendering graphics and images on computer displays. It lets the GPU access large amounts of graphics data quickly, enabling it to render complex scenes, textures, and animations.
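The amount of VRAM one frame of output consumes can be estimated as width × height × bytes per pixel. The sketch below works this out for a few common resolutions, assuming 4 bytes per pixel (32-bit color) and ignoring double buffering, depth buffers, and textures, which add considerably more in practice.

```c
#include <stdio.h>

/* Estimate the VRAM needed for a single frame buffer at common
 * resolutions, assuming 4 bytes per pixel (32-bit color). Real GPUs
 * keep additional buffers, so actual usage is higher. */
int main(void) {
    struct { const char *name; long w, h; } modes[] = {
        {"1920x1080 (Full HD)", 1920, 1080},
        {"2560x1440 (QHD)",     2560, 1440},
        {"3840x2160 (4K UHD)",  3840, 2160},
    };
    const long bytes_per_pixel = 4;

    for (int i = 0; i < 3; i++) {
        long bytes = modes[i].w * modes[i].h * bytes_per_pixel;
        printf("%-22s %7.2f MB per frame\n",
               modes[i].name, bytes / 1e6);
    }
    return 0;
}
```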
Key features and common types of VRAM include −
-
High Bandwidth − VRAM typically offers high-speed data transfer rates, enabling fast access to graphical data by the GPU.
-
Parallel Access − VRAM is designed to support parallel access, allowing multiple rendering tasks to access different portions of the memory simultaneously.
-
Specialized Architecture − VRAM often has a specialized architecture optimized for graphics processing tasks, including features such as multi-port access and wide memory buses.
-
Dedicated Graphics Memory − Unlike system RAM, which is shared among various system components, VRAM is dedicated solely to graphics processing, ensuring that the GPU has sufficient memory bandwidth and capacity for rendering graphics-intensive applications.
-
GDDR (Graphics Double Data Rate) VRAM − This is the most commonly used type of VRAM and is found in most modern GPUs. GDDR5, GDDR5X, and GDDR6 are variants that improve bandwidth and power efficiency over earlier generations.
-
HBM (High Bandwidth Memory) − HBM is a more recent technology that provides even greater bandwidth at lower power consumption than conventional GDDR VRAM. It accomplishes this by stacking memory chips vertically on a silicon interposer, minimizing the distance data must travel between memory cells.
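Peak memory bandwidth, the headline figure for GDDR and HBM, can be estimated as data rate per pin × bus width ÷ 8. The configurations in the sketch below (a GDDR5 card, a GDDR6 card, and an HBM2 stack) are hypothetical examples chosen only to show the arithmetic; real figures come from the graphics card's specifications.

```c
#include <stdio.h>

/* Estimate peak memory bandwidth: data rate per pin (Gbit/s) times bus
 * width (bits), divided by 8 to get bytes per second. The figures below
 * are hypothetical examples, not specifications of any particular card. */
int main(void) {
    struct { const char *name; double gbps_per_pin; int bus_bits; } cfgs[] = {
        {"GDDR5, 256-bit bus", 8.0,   256},
        {"GDDR6, 256-bit bus", 16.0,  256},
        {"HBM2, 4096-bit bus", 2.4,  4096},
    };

    for (int i = 0; i < 3; i++) {
        double gb_per_s = cfgs[i].gbps_per_pin * cfgs[i].bus_bits / 8.0;
        printf("%-20s ~%6.0f GB/s peak\n", cfgs[i].name, gb_per_s);
    }
    return 0;
}
```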