
Microprocessor Definition



A microprocessor, also commonly referred to as a processor, is a semiconductor device that performs logic operations in computers and a vast array of other products.

The main microprocessor in a computer is called the central processing unit (CPU). Computers also contain a number of microcontrollers to operate disk drives and other peripheral devices and thereby relieve the burden on the CPU. Some high-performance computers contain multiple CPUs.

A microcontroller is a type of microprocessor that is designed for use in embedded systems rather than as the general-purpose logic unit that serves as the main brain of a computer. An embedded system is a combination of computer circuitry and software that is built into a product for purposes such as control, monitoring and communication without human intervention. Embedded systems are at the core of virtually every modern electronic product, ranging from toys to medical equipment to aircraft control systems.

Microcontrollers emphasize self-sufficiency and low cost. A typical microcontroller contains sufficient memory and interfaces for simple applications, whereas a general-purpose microprocessor requires additional chips to provide these functions, including at least one ROM (read-only memory) chip to store the built-in software, such as the BIOS (basic input/output system) in the case of computers. Thus, for many products, most or all of the circuitry can be mounted on a single, small printed circuit board.
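Because its memory and peripheral interfaces reside on the same chip, the software in a small embedded system often manipulates the hardware directly through memory-mapped registers, with no operating system in between. The following C sketch illustrates the idea by toggling an LED on a hypothetical microcontroller; the register addresses, register names and pin number are invented for illustration and would in practice come from the manufacturer's data sheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers; real addresses are
       defined in the chip's data sheet, not here. */
    #define GPIO_DIR  (*(volatile uint32_t *)0x40020000u)  /* pin direction register */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020004u)  /* output data register */
    #define LED_PIN   (1u << 5)                            /* assume the LED is on pin 5 */

    int main(void)
    {
        GPIO_DIR |= LED_PIN;              /* configure the LED pin as an output */

        for (;;) {                        /* embedded programs typically run forever */
            GPIO_OUT ^= LED_PIN;          /* toggle the LED on or off */
            for (volatile uint32_t i = 0; i < 100000u; i++)
                ;                         /* crude busy-wait delay */
        }
    }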

The CPUs of the earliest computers were formed from arrays of electromechanical relays and discrete electronic devices (such as vacuum tubes, diodes and resistors). The CPUs of the first all-electronic computers, such as ENIAC, which began test operation in 1945, consisted of arrays of vacuum tubes. The tubes were replaced by discrete transistors in new computers built in the 1950s, resulting in a large increase in reliability and a large reduction in space consumption. Further gains in reliability and compactness were attained in the 1960s through the replacement of the transistors and other discrete devices by small-scale integrated circuits (ICs).

One of the biggest milestones in the history of microprocessors occurred in 1971 with the introduction of the world's first single-chip model, the 4004. Developed by Intel Corporation, this chip measured one eighth of an inch by one sixth of an inch and contained 2,300 transistors. It had a four-bit word length and a then impressive clock speed of 108 kHz. Its functional blocks included an address register and address incrementer, an index register, a four-bit adder, an instruction register and decoder, and control circuitry. A register is a very small amount of built-in, high-speed memory. Despite its tiny size, the 4004 had as much computational power as ENIAC, which contained 17,468 vacuum tubes, 70,000 resistors and numerous other discrete components and weighed roughly 30 metric tons.
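As a rough illustration of what a four-bit word length means (this is a simplified sketch in C, not actual 4004 code), the fragment below simulates a four-bit register and adder: any result larger than 15 cannot fit in four bits, so the value wraps around and the overflow appears as a carry.

    #include <stdio.h>
    #include <stdint.h>

    #define MASK4 0x0Fu   /* a four-bit register can hold only the values 0 through 15 */

    int main(void)
    {
        uint8_t a = 9, b = 11;                     /* two four-bit operands */
        uint8_t sum = (uint8_t)((a + b) & MASK4);  /* keep only the low four bits */
        int carry = (a + b) > MASK4;               /* the bit that no longer fits */

        /* Prints: 9 + 11 = 4 (carry 1) in four-bit arithmetic */
        printf("%u + %u = %u (carry %d) in four-bit arithmetic\n",
               (unsigned)a, (unsigned)b, (unsigned)sum, carry);
        return 0;
    }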

The 4004 was the first microprocessor designed and manufactured by Intel. It was followed the next year by the world's first eight-bit model, the 8008, which contained 3,500 transistors; the 4040, an enhanced version of the 4004, followed in 1974. Today the Santa Clara, California-based Intel is the world's largest producer of microprocessors, and it is also the largest manufacturer of semiconductor devices as a whole.

Paralleling the electronics industry in general, microprocessor technology has evolved swiftly, with rapid increases in the number of transistors and other electronic devices integrated onto a single chip and in processing speed, accompanied by dramatic reductions in production costs and prices. Modern microprocessors are produced with circuit line widths of only a fraction of a micron and contain hundreds of millions of transistors.

Microprocessors can be classified in several ways, including by whether they are intended for use as CPUs, by their architecture and instruction set, by their bit size, by their manufacturer and by their speed. Architecture refers to the basic circuit design; the same software can operate, without major modification, on various chips having the same architecture, including those produced by different manufacturers. An instruction set can be described in very simple terms as those aspects of the architecture that are visible to and easily accessible to a programmer.
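Bit size, for example, shows up to a programmer as the natural width of the chip's registers and pointers. The short C example below is a minimal sketch; it reports the widths the compiler was targeting rather than querying the chip itself, which is enough to distinguish, say, a 32-bit build from a 64-bit one.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The pointer width is a reasonable proxy for whether the program
           was built for a 32-bit or a 64-bit environment. */
        printf("pointer width: %zu bits\n", sizeof(void *) * CHAR_BIT);
        printf("int width:     %zu bits\n", sizeof(int) * CHAR_BIT);
        printf("long width:    %zu bits\n", sizeof(long) * CHAR_BIT);
        return 0;
    }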

More than six billion microprocessors are produced annually, according to industry sources. The vast majority of these (roughly 98 percent) are used in embedded systems rather than as CPUs in computers. Embedded processors range from simple four-bit chips, such as those used in cheap toys, to powerful, custom-designed 128-bit chips.

The CPU of the first IBM personal computer, which was introduced in 1981, was a 16-bit Intel chip, the 8088, with a clock speed of 4.77 MHz. Today, most personal computers use 32-bit chips, and new models have speeds well in excess of one gigahertz. Moreover, 64-bit chips are becoming increasingly common and inexpensive, as are speeds in excess of two gigahertz.

Another major trend has been a reduction in the number of architectures that are used for CPUs. Today the CPUs in almost all newly produced personal computers are x86 (i.e., Intel-compatible) chips. Most of these chips are made by Intel, but some are also produced by other companies, most notably Advanced Micro Devices (AMD). Originally, x86 referred to Intel's own series of CPUs (i.e., the 8086 and its successors, such as the 286, 386 and 486), but now it refers to all microprocessors, both 32-bit and 64-bit, with a similar architecture and instruction set.

A small minority of computers utilize PowerPC CPUs, which are not x86 compatible. Manufactured primarily by IBM and Motorola, these chips were adopted by Apple Computer in 1994, beginning with its Power Macintosh line, because they offered a large improvement in performance over the 68k family of chips, produced by Motorola, that Apple had previously been using. Apple subsequently switched CPUs again, announcing a move to x86 chips produced by Intel in mid-2005, and by mid-2006 most of its product line had made the transition. Transitions to a different CPU architecture are very difficult from a technology point of view (e.g., backward compatibility with existing software must be maintained) and expensive, but Apple managed to pull both of them off smoothly.

This monoculture of CPUs has the advantages of reducing CPU production costs (and thus computer prices) through economies of scale and of facilitating the portability of software, thereby benefiting consumers. For example, Apple's switch to the x86 architecture allows its computers to also run the Microsoft Windows operating systems, thus giving users of its hardware access to a vastly larger number of application programs. On the other hand, critics might say that the monoculture has the disadvantage of discouraging innovation and competition with regard to CPU technology.

In contrast to CPUs for desktop and notebook computers, there is substantially more architectural diversity among processors for embedded applications. This is mainly the result of the vastly larger and more diverse market and the consequently greater number of manufacturers. The most frequently deployed architectures, at least on chips produced in the U.S., have been ARM and x86, each with a market share of roughly 30 percent. They are followed by PowerPC and MIPS, with much smaller shares. ARM (originally the Acorn RISC Machine) is a 32-bit RISC (reduced instruction set computer) architecture that features low power consumption and is thus particularly popular for use in mobile electronics products.






Created July 20, 2006.
Copyright © 2006 The Linux Information Project. All Rights Reserved.