
Memory Definition



Memory, as the term is used with regard to computers, most commonly refers to semiconductor devices whose contents can be accessed (i.e., read from and written to) at extremely high speeds but are retained only temporarily (i.e., while in use or, at most, only while the power supply remains on).

This contrasts with storage, which (1) retains programs and data regardless of whether they are currently in use, (2) retains programs and data after the power supply has been disconnected, (3) has much slower access speeds and (4) has a much larger capacity (and a much lower cost). Examples of storage devices are hard disk drives (HDDs), floppy disks, optical disks (e.g., CD-ROMs and DVDs) and magnetic tape.

The term memory as used in a computer context originally referred to the magnetic core memory devices that were used beginning in the 1950s. It was subsequently applied to the semiconductor memory devices that replaced core memories in the 1970s.

Computer memory today consists mainly of dynamic random access memory (DRAM) chips that have been built into multi-chip modules that are, in turn, plugged into slots on the motherboard (the main circuit board on personal computers and workstations). This DRAM is commonly referred to as RAM (random access memory), and it constitutes the main memory of a computer.

The random in random access memory refers to the fact that any location in such memory can be addressed directly at any time. This contrasts with sequential access media, such as magnetic tape, which must be read in sequence from the current position until the desired content is reached.
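The distinction can be sketched with a short illustrative example (Python is used here purely for illustration; the function names are invented for this sketch): reaching a given position in an in-memory list is a single indexing step, while a tape-like medium must be stepped through element by element.

```python
# Random vs. sequential access: an in-memory list can be indexed
# directly, while a tape-like medium must be stepped through in
# order until the desired position is reached.

def read_random(data, position):
    """Random access: jump straight to the requested position."""
    return data[position]

def read_sequential(data, position):
    """Sequential access: advance one element at a time, as a
    tape drive must, counting every step taken along the way."""
    steps = 0
    value = None
    for i in range(position + 1):
        value = data[i]
        steps += 1
    return value, steps

blocks = [10, 20, 30, 40, 50, 60, 70, 80]
print(read_random(blocks, 5))       # one step: prints 60
print(read_sequential(blocks, 5))   # six steps: prints (60, 6)
```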

Purpose of Memory

Memory is used to hold portions of the operating system, application programs and data that are currently being used by the CPU (central processing unit) or that are likely to be used by it. This includes the kernel, which is the core of the operating system and the first part of it to be loaded into memory during booting (i.e., the process by which a computer starts up and automatically loads the operating system into memory). The kernel remains in memory for the entire duration of a computer session.

Thus, at any point in time the contents of memory include (1) the kernel, (2) the machine code of the process currently executing in the CPU, (3) the machine code of various suspended processes along with the data that constitute their intermediate results and (4) copies of open files. Machine code consists of program instructions that have been translated by a compiler from source code into a binary (i.e., only zeros and ones) format so that the CPU can use them directly, without further translation. A process is an executing (i.e., running) instance of a program. Source code is the version of a program as it is originally written by a human using a programming language (such as C, C++, Java or Perl).
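The translation step can be glimpsed with Python's standard dis module. Python compiles to an intermediate bytecode rather than to native machine code, so this is only an analogy, but it shows the same idea: human-readable source is turned into lower-level instructions before it runs (a compiler such as gcc carries the analogous step for C all the way down to binary machine code).

```python
# Source code is translated into lower-level instructions before it
# can run.  Python compiles to bytecode rather than native machine
# code, but the dis module makes that translation step visible.
import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints the low-level instructions compiled from add()
```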

When a user opens an existing file (i.e., a file that has been saved on a disk or other storage device), the operating system actually makes a copy of that file and places it in memory. When a file is saved, it is copied from memory into storage and overwrites the older version there. When a new file is created, it is first created in memory and then written (i.e., copied) to the HDD or other designated storage device.
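The sequence just described can be sketched in a few lines (a minimal sketch using a temporary file so that it is self-contained; the file name is invented for the example):

```python
# Opening a file places a copy of its contents in memory; saving
# writes the in-memory copy back to storage, overwriting the older
# version on disk.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

# Create a new file: it exists in memory first, then is written out.
with open(path, "w") as f:
    f.write("first draft")

# Open the file: the operating system copies it from disk to memory.
with open(path) as f:
    in_memory_copy = f.read()

# Edits change only the in-memory copy, not the file on disk.
in_memory_copy = in_memory_copy.replace("first", "second")

# Saving overwrites the older version in storage.
with open(path, "w") as f:
    f.write(in_memory_copy)

with open(path) as f:
    print(f.read())   # prints: second draft
```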

Because the CPU is much faster than the HDD and all other devices on a computer, memory serves as a high speed intermediary for it, buffering enough data (including data from the HDD) that the CPU does not have to waste time waiting.

The Memory Hierarchy

Memory and storage can be viewed as a hierarchy, with the fastest but scarcest and most expensive at the top and the slowest but most plentiful and least expensive at the bottom.

At the very top of this memory hierarchy are registers (also sometimes referred to as processor registers), which consist of an extremely small amount of very fast (i.e., much faster than the main memory) memory cells that are built directly into the CPU. Their purpose is to speed up the execution of the CPU, and thus of programs, by providing quick access to commonly used values, generally those in the midst of a calculation. Registers are usually implemented as an array of SRAM (i.e., static RAM) cells.

SRAM is a type of RAM that is faster and more reliable than DRAM. The term static is used because the memory does not need to be refreshed as does DRAM, although it is still volatile (i.e., it needs to be connected to a power supply in order to retain its contents). SRAM has the disadvantages that it consumes more space than DRAM and is considerably more expensive.

Below the registers in the memory hierarchy is the cache memory, whose purpose is to reduce the mismatch in speeds between the CPU and RAM. That is, modern processors have clock rates of several gigahertz (billions of cycles per second), whereas main memory responds far more slowly, with access times equivalent to hundreds of CPU cycles. The clock rate is the speed at which a CPU performs its most basic operations.

Modern computers usually have two or three levels of cache memory, referred to as L1 (Level 1), L2 (Level 2) and L3 (Level 3). L1 cache consists of high speed SRAM cells that are likewise built directly into the CPU. To conserve space and reduce cost, there is usually only a small amount of L1 cache on a processor, for example, only 20KB in the case of the Pentium 4. L1 cache can usually be accessed in just a few CPU cycles, in contrast to the typically several hundred CPU cycles to access the main memory.

Most modern computers also come with a secondary cache memory, L2, which holds recently used instructions and data that may well be used again in the very near future. Initially, L2 resided on a separate chip that was connected to the CPU by a bus (i.e., a set of wires), but newer CPUs contain built-in L2 caches. L2 caches are likewise composed of SRAM cells, but they contain many more such cells than do L1 caches. For example, the Pentium 4 has 256KB of L2 cache.

When both the CPU and the motherboard have L2 caches, the one on the motherboard is designated L3. It serves basically the same function as the L2 cache, but it has a lower potential for providing CPU performance gains because it is connected to the CPU by a bus rather than being built into it, and is therefore slower. L3 is the least used of the cache levels.
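The benefit that any level of cache provides can be illustrated with a toy simulation (the class, its capacity and the two-level structure here are invented for the example and are not modeled on real hardware): a small, fast store in front of slow main memory turns repeated accesses to the same locations into fast hits.

```python
# Toy illustration of why caches help: a small, fast store in front
# of slow main memory.  Repeated accesses to the same addresses are
# served from the cache ("hits") instead of from slow memory.
from collections import OrderedDict

class ToyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # address -> value, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, memory, address):
        if address in self.lines:
            self.hits += 1
            self.lines.move_to_end(address)    # mark as recently used
        else:
            self.misses += 1                   # slow trip to main memory
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False) # evict least recently used
            self.lines[address] = memory[address]
        return self.lines[address]

memory = list(range(100))        # pretend this is slow main memory
cache = ToyCache(capacity=4)

# A loop that reuses the same few addresses mostly hits the cache.
for _ in range(10):
    for addr in (0, 1, 2, 3):
        cache.read(memory, addr)

print(cache.hits, cache.misses)  # prints: 36 4
```

Real caches work on fixed-size lines of memory rather than single values, but the principle of keeping recently used data close to the CPU is the same.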

Below the several levels of cache memory in the memory hierarchy is the main memory. In contrast to the registers and cache memory, which have data holding capacities measured in bits and kilobytes, respectively, main memory on modern computers is usually measured in hundreds of megabytes. For example, main memory is typically 512MB or one gigabyte (i.e., approximately one billion bytes) in currently available personal computers, and it can be multiple gigabytes for workstations and other high performance systems. Main memory usually has an access time equal to several hundred CPU cycles.

At the bottom of the hierarchy is storage. It typically has an access time equivalent to hundreds of thousands of CPU cycles. This relatively slow speed is the result of using mechanical parts, particularly electric motors and moving magnetic heads. However, storage can have a capacity ranging from tens of gigabytes on small computers to many thousands of gigabytes on large systems.

Scarce Resource

Memory is a scarce resource because it is expensive, at least relative to a comparable amount of hard disk space, and thus it must be carefully managed by the operating system in order to optimize system performance. The addition of multiple levels of high-speed cache memory is one way that this is accomplished.

Another is to use virtual memory, i.e., the use of space on an HDD to simulate additional memory. In order to free up space in memory, the operating system transfers data that is not immediately needed from memory to the HDD; when that data is needed again, it is copied back into memory. The disadvantage of virtual memory is that it is slower than main memory, although there is usually no noticeable effect on system performance unless virtual memory is being used intensively (e.g., when several very large programs are being run simultaneously).
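On a Linux system, the amounts of physical memory and of swap space (the HDD area used for virtual memory) can be read from /proc/meminfo. A minimal sketch follows; the parser also works on a saved sample of the file's text, so the sample values shown are illustrative, not real measurements:

```python
# On Linux, /proc/meminfo reports physical memory and swap space
# (the disk area used for virtual memory).  The parser below works
# on the live file or on any saved copy of its text.
import os

def parse_meminfo(text):
    """Return a dict mapping field names to sizes in kB."""
    sizes = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            sizes[name] = int(fields[0])
    return sizes

# Illustrative sample in the format of /proc/meminfo.
SAMPLE = """MemTotal:        1024000 kB
MemFree:          256000 kB
SwapTotal:       2048000 kB
SwapFree:        2048000 kB"""

if os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
else:
    info = parse_meminfo(SAMPLE)   # fallback for non-Linux systems

print("RAM: %d kB, swap: %d kB" % (info["MemTotal"], info["SwapTotal"]))
```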

Linux additionally improves the efficiency of memory utilization through the division of the kernel into the main part, which is held constantly in memory, and modules, which can be loaded into memory only as needed and subsequently removed when no longer needed.
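The modules currently loaded into memory on a Linux system are listed in /proc/modules (the lsmod command formats this same information). A minimal sketch, again with a parser that also handles a saved sample of the file (the module entries in the sample are illustrative):

```python
# On Linux, /proc/modules lists the kernel modules currently loaded
# into memory.  Each line begins with the module name followed by
# its size in bytes.
import os

def parse_modules(text):
    """Return a list of (module_name, size_in_bytes) pairs."""
    modules = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1].isdigit():
            modules.append((fields[0], int(fields[1])))
    return modules

# Illustrative sample in the format of /proc/modules.
SAMPLE = """ext3 123456 1 - Live 0xffffffff88000000
loop 23456 0 - Live 0xffffffff88100000"""

if os.path.exists("/proc/modules"):
    with open("/proc/modules") as f:
        loaded = parse_modules(f.read())
else:
    loaded = parse_modules(SAMPLE)   # fallback for non-Linux systems

for name, size in loaded[:5]:
    print(name, size)
```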

Other Types of Memory

The term memory generally refers to semiconductor devices whose contents are volatile, can be accessed randomly and can be written to and read at high speed, i.e., RAM. However, it can also be used in a broader sense to refer to several types of non-volatile semiconductor devices.

The most basic of them is read-only memory (ROM), whose contents are written in at the factory and thereafter cannot be erased or rewritten.

Programmable read-only memory (PROM) is a type of ROM into which a program or other data can be written once after it leaves the factory, usually by the manufacturer of the computer (or of any other product in which it is used). It cannot subsequently be erased and rewritten.

Erasable programmable read-only memory (EPROM) is a type of PROM whose contents can be erased by exposing the chip's upper surface to ultraviolet light. The contents can then be rewritten electronically.

Electrically erasable programmable read-only memory (EEPROM) is a type of PROM whose contents can be erased through the use of electronic signals. This is much more convenient than using ultraviolet light. Flash memory is a type of EEPROM that is used in USB flash drives (also called key drives), an increasingly popular kind of removable storage device.

Computers almost always contain a small amount of some type of built-in ROM. It is used to hold the BIOS (basic input/output system), which is used to boot up the system, as well as fixed logic for various hardware devices (e.g., disk drives) and peripherals.






Created November 25, 2004. Updated June 21, 2006.
Copyright © 2004 - 2006 The Linux Information Project. All Rights Reserved.