Reduced to its fundamental principles, the workings of a modern silicon-based microprocessor are not difficult to understand. They are simply the electronic equivalent of a knee-jerk. Every time you hit the microprocessor with an electronic hammer blow (the proper digital input), it reacts by doing a specific something. Like a knee-jerk reflex, the microprocessor's reaction is always the same. When hammered by the same input and conditions, it kicks out the same function.
The complexity of the microprocessor and what it does arises from the wealth of inputs it can react to and the interaction between successive inputs. Although the microprocessor's function is precisely defined by its input, the output from that function varies with what the microprocessor has to work on, and that depends on previous inputs. For example, the result of you carrying out a specific command—"Simon says lift your left leg"—will differ dramatically depending on whether the previous command was "Simon says sit down" or "Simon says lift your right leg."
The rules for controlling the knee-jerks inside a computer are the rules of logic, and not just any logic. Computers use a special symbolic logic system that was created about a century and a half ago in the belief that human thinking could be mechanized much as the production of goods had been mechanized in the Industrial Revolution.
As people began to learn to think again after the Dark Ages, they began exploring mathematics, first in the Arabian world, then Europe. They developed a rigorous, objective system—one that was reassuring in its ability to replicate results. Carry out the same operation on the same numbers, and you always got the same answer. Mathematics delivered a certainty that was absent from the rest of the world, one in which dragons inhabited the unknown areas of maps and people had no conception that micro-organisms might cause disease.
Applying the same rigor and structure, scientific methods first pioneered by the Greeks were rediscovered. The objective scientific method found truth, the answers that eluded the world of superstition. Science led to an understanding of the world, new processes, new machines, and medicine.
In Victorian England, philosophers wondered whether the same objectivity and rigor could be applied to all of human thought. A mathematician, George Boole, first proposed applying the rigorous approach of algebra to logical decision-making. In 1847, Boole founded the system of modern symbolic logic that we now term Boolean logic (alternately, Boolean algebra). In his system, Boole reduced propositions to symbols and formal operators that followed the strict rules of mathematics. Using his rigorous approach, logical propositions could be proven with the same certainty as mathematical equations.
Philosophers, including Ludwig Wittgenstein and Bertrand Russell, further developed the concept of symbolic logic and showed that anything that could be known could be expressed in its symbols. By translating what you knew and wanted to know into symbols, you could apply the rules of Boolean logic and find an answer. Knowledge was reduced to a mechanical process, and that made it the province of machines.
In concept, Charles Babbage's Analytical Engine could have deployed Boole's symbolic logic and become the first thinking machine. However, neither the logic nor the hardware was up to the task at the time. When fast calculations finally became possible, Boolean logic proved key to programming the computers that carried out the tasks.
Giving an electrical circuit the power to make a decision isn't as hard as you might think. Start with that same remote signaling of the telegraph but add a mechanical arm that links it to a light switch on your wall. As the telegraph pounds, the light flashes on and off. Certainly you'll have done a lot of work for a little return, in that the electricity could be used to directly light the bulb. There are other possibilities, however, that produce intriguing results. You could, for example, pair two weak telegraph arms so that their joint effort would be required to throw the switch to turn on the light. Or you could link the two telegraphs so that a signal on either one would switch on the light. Or you could install the switch backwards so that when the telegraph is activated, the light would go out instead of come on.
These three telegraph-based design examples actually provide the basis for three different types of computer circuits, called logic gates (the AND, OR, and NOT gates, respectively). These electrical circuits are called gates because they regulate the flow of electricity, allowing it to pass through or cutting it off, much as a gate in a fence allows or impedes your own progress. These logic gates endow the electrical assembly with decision-making power. In the light example, the decision is necessarily simple: when to switch on the light. But these same simple gates can be formed into elaborate combinations that make up a computer that can make complex logical decisions.
The three logic gates can perform the function of all the operators in Boolean logic. They form the basis of the decision-making capabilities of the computer as well as other logic circuitry. You'll encounter other kinds of gates, such as NAND (short for "Not AND"), NOR (short for "Not OR"), and XOR (for "Exclusive OR"), but you can build any one of the others from the basic three: AND, OR, and NOT.
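The relationship among these gates can be sketched in a few lines of code. The following is an illustrative model only (the function names are invented for this sketch, and real gates are transistor circuits, not software), but it shows how NAND, NOR, and XOR can each be built from the basic AND, OR, and NOT:

```python
# The three basic gates, modeled as functions on truth values.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# Every other common gate can be composed from those three.
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))   # true when inputs differ

# Print the truth table for XOR.
for a in (False, True):
    for b in (False, True):
        print(a, b, XOR(a, b))
```

Run the loop and you get the familiar exclusive-OR truth table: the output is true only when exactly one input is true.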
These same gates also can be arranged to form memory. Start with the familiar telegraph. Instead of operating the current for a light bulb, however, reroute the wires from the switch so that they, too, link to the telegraph's electromagnet. In other words, when the telegraph moves, it throws a switch that supplies itself with electricity. Once the telegraph is supplying itself with electricity, it will stay on using that power even if you switch off the original power that first made the switch. In effect, this simple system remembers whether it has once been activated. You can go back at any time and see if someone has ever sent a signal to the telegraph memory system.
This basic form of memory has one shortcoming: It's elephantine and never forgets. Resetting this memory system requires manually switching off both the control voltage and the main voltage source.
A more useful form of memory takes two control signals: One switches it on, the other switches it off. In simplest form, each cell of this kind of memory is made from two logic gates cross-coupled so that switching one gate on cuts the other off. Because one signal sets this memory to hold data and the other one resets it, this circuit is sometimes called set-reset memory. A more common term is flip-flop because it alternately flips between its two states. In computer circuits, this kind of memory is often simply called a latch. Although the main memory of your computer uses a type of memory that works on a different electrical principle, latch memory remains important in circuit design.
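The set-reset behavior can be simulated directly. This sketch models the classic form of the circuit, a pair of cross-coupled NOR gates (the function names and the iterate-until-stable loop are artifacts of the simulation, not of real hardware, where the feedback settles electrically):

```python
def nor(a, b):
    return not (a or b)

def sr_latch(s, r, q, q_bar):
    """Settle a pair of cross-coupled NOR gates given inputs S and R."""
    for _ in range(4):                    # iterate until the feedback stabilizes
        q_new = nor(r, q_bar)             # each gate's output feeds the other
        q_bar_new = nor(s, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = sr_latch(s=True, r=False, q=False, q_bar=True)   # "set" the latch
print(q)                                                 # now holds True
q, qb = sr_latch(s=False, r=False, q=q, q_bar=qb)        # remove both inputs
print(q)                                                 # still True: it remembers
q, qb = sr_latch(s=False, r=True, q=q, q_bar=qb)         # "reset" the latch
print(q)                                                 # back to False
```

The middle call is the point of the exercise: with both control signals off, the latch keeps its last state, which is exactly the memory behavior described above.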
Although the millions of gates in a microprocessor are so tiny that you can't even discern them with an optical microscope (you need at least an electron microscope), they act exactly like elemental, telegraph-based circuits. They use electrical signals to control other signals. The signals are just more complicated, reflecting the more elaborate nature of the computer.
Today's microprocessors don't use a single signal to control their operations; rather, they use complex combinations of signals. Each microprocessor command is coded as a pattern of signals: the presence or absence of an electrical signal at one of the pins of the microprocessor's package. The signal at each pin represents one bit of digital information.
The designers of a microprocessor give certain patterns of these bit-signals specific meanings. Each pattern is a command called a microprocessor instruction that tells the microprocessor to carry out a specific operation. The bit pattern 0010110, for example, is the instruction that tells an Intel 8086-family microprocessor to subtract in a very explicit manner. Other instructions tell the microprocessor to add, multiply, divide, move bits or bytes around, change individual bits, or just wait around for another instruction.
Microprocessor designers can add instructions to do just about anything—from matrix calculations to brewing coffee (that is, if the designers wanted to, if the instructions actually did something useful, and if they had unlimited time and resources to engineer the chip). Practical concerns such as keeping the design work and the chip manageable constrain the range of commands given to a microprocessor.
The entire repertoire of commands that a given microprocessor model understands and can react to is called that microprocessor's instruction set or its command set. The designer of the microprocessor chooses which pattern to assign to a given function. As a result, different microprocessor designs recognize different instruction sets, just as different board games have different rules.
Despite their pragmatic limits, microprocessor instruction sets can be incredibly rich and diverse, and the individual instructions incredibly specific. The designers of the original 8086-style microprocessor, for example, felt that a simple command to subtract was not enough by itself. They believed that the microprocessor also needed to know what to subtract from what and what it should do with the result. Consequently, they added a rich variety of subtraction instructions to the 8086 family of chips that persists into today's Athlon and Pentium 4 chips. Each different subtraction instruction tells the microprocessor to take numbers from different places and find the difference in a slightly different manner.
Some microprocessor instructions require a series of steps to be carried out. These multistep commands are sometimes called complex instructions because of their composite nature. Although a complex instruction looks like a simple command, it may involve much work. A simple instruction would be something such as "pound a nail." A complex instruction may be as far ranging as "frame a house." Simple subtraction or addition of two numbers may actually involve dozens of steps, including the conversion of the numbers from decimal to the binary (ones and zeros) notation that the microprocessor understands. For instance, the previous sample subtraction instruction tells one kind of microprocessor that it should subtract a number in memory from another number in the microprocessor's accumulator, a place that's favored for calculations in today's most popular microprocessors.
Before the microprocessor can work on numbers or any other data, it first must know what numbers to work on. The most straightforward method of giving the chip the variables it needs would seem to be supplying more coded signals at the same time the instruction is given. You could dump in the numbers 6 and 3 along with the subtract instruction, just as you would load laundry detergent along with shirts and sheets into your washing machine. This simple method has its shortcomings, however. Somehow the proper numbers must be routed to the right microprocessor inputs. The microprocessor needs to know whether to subtract 6 from 3 or 3 from 6 (the difference could be significant, particularly when you're balancing your checkbook).
Just as you distinguish the numbers in a subtraction problem by where you put them in the equation (6–3 versus 3–6), a microprocessor distinguishes the numbers on which it works by their position (where they are found). Two memory addresses might suffice were it not for the way most microprocessors are designed. They have only one pathway to memory, so they can effectively "see" only one memory value at a time. So instead, a microprocessor loads at least one number to an internal storage area called a register. It can then simultaneously reach both the number in memory and the value in its internal register. Alternatively (and more commonly today), both values on which the microprocessor is to work can be loaded into separate internal registers.
Part of the function of each microprocessor instruction is to tell the chip which registers to use for data and where to put the answers it comes up with. Other instructions tell the chip to load numbers into its registers to be worked on later or to move information from a register someplace else (for instance, to memory or an output port).
A register functions both as memory and a workbench. It holds bit-patterns until they can be worked on or sent out of the chip. The register is also connected with the processing circuits of the microprocessor so that the changes ordered by instructions actually appear in the register. Most microprocessors typically have several registers, some dedicated to specific functions (such as remembering which step in a function the chip is currently carrying out; this register is called a program counter or instruction pointer) and some designed for general purposes. At one time, the accumulator was the only register in a microprocessor that could manage calculations. In modern microprocessors, all registers are more nearly equal (in some of the latest designs, all registers are equal, even interchangeable), so the accumulator is now little more than a colorful term left over from a bygone era.
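A toy machine makes the interplay of instructions, registers, and memory concrete. Everything in this sketch is invented for illustration (the mnemonics are not real 8086 instructions, and the register set is reduced to two): each instruction names the registers it works on, and results move between registers and memory exactly as the paragraphs above describe.

```python
def run(program):
    regs = {"A": 0, "B": 0}          # two general-purpose registers
    memory = {}                      # addressable storage outside the chip
    pc = 0                           # program counter: which step we're on
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":             # put a constant into a named register
            regs[args[0]] = args[1]
        elif op == "SUB":            # subtract one register from another
            regs[args[0]] -= regs[args[1]]
        elif op == "STORE":          # move a register's value out to memory
            memory[args[1]] = regs[args[0]]
        pc += 1                      # step to the next instruction in the list
    return regs, memory

regs, memory = run([
    ("LOAD", "A", 6),
    ("LOAD", "B", 3),
    ("SUB", "A", "B"),               # A = 6 - 3; operand order matters
    ("STORE", "A", 0x10),
])
print(regs["A"], memory[0x10])       # 3 3
```

Note how the instruction itself, not the data, says which register is the minuend and which the subtrahend; that positional distinction is what keeps the machine from confusing 6 - 3 with 3 - 6.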
Not only do microprocessors have differing numbers of registers, but the registers may also be of different sizes. Registers are measured by the number of bits that they can work with at one time. A 16-bit microprocessor, for example, should have one or more registers, each of which holds 16 bits of data at a time. Today's microprocessors have 32- or 64-bit registers.
Adding more registers to a microprocessor does not make it inherently faster. When a microprocessor lacks advanced features such as pipelining or superscalar technology (discussed later in this chapter), it can perform only one operation at a time. More than two registers would seem superfluous. After all, most math operations involve only two numbers at a time (or can be reduced to a series of two-number operations). Even with old-technology microprocessors, however, having more registers helps the software writer create more efficient programs. With more places to put data, a program needs to move information in and out of the microprocessor less often, which can potentially save several program steps and clock cycles.
Modern microprocessor designs, particularly those influenced by the latest research into design efficiency, demand more registers. Because microprocessors run much faster than memory, every time the microprocessor has to go to memory, it must slow down. Therefore, minimizing memory accessing helps improve performance. Keeping data in registers instead of memory speeds things up.
The width of the registers also has a substantial effect on the performance of a microprocessor. The more bits assigned to each register, the more information that the microprocessor can process in every cycle. Consequently, a 64-bit register in the next generation of microprocessor chips holds the potential of calculating eight times as fast as an 8-bit register of a first generation microprocessor—all else being equal.
A computer program is nothing more than a list of instructions. The computer goes through the instruction list of the program step by step, executing each one in turn. Each builds on the previous instructions to carry out a complex function. The program is essentially a recipe for a microprocessor or the step-by-step instructions in a how-to manual.
The challenge for the programmer is to figure out into which steps to break a complex job and to arrange those steps in the best possible order. It can be a big job. Although a program can be as simple as a single step (say, stop), a modern program or software package may comprise millions or tens of millions of steps. They are quite literally too complex for a single human being to understand—or write. They are joint efforts. Not just the work of many people, but the work of people and machines using development environments to divide up the work and take advantage of routines and libraries created by other teams. A modern software package is the result of years of work in putting together simple microprocessor instructions.
One of the most important concepts in the use of modern personal computers is multitasking, the ability to run multiple programs at the same time, shifting your focus from one to another. You can, for example, type a term paper on L. Frank Baum and the real meaning of the Wizard of Oz using your word processor while your MP3 program churns out a techno version of "Over the Rainbow" through your computer's speakers. Today, you take that kind of thing for granted. But thinking about it, this doesn't make sense in the context of computer programs being simple lists of instructions and your microprocessor executing the instructions, one by one, in order. How can a computer do two things at the same time?
The answer is easy. It cannot. Computers do, in fact, process instructions as a single list. Computers can, however, switch between lists of instructions. They can execute a series of instructions from one list, shift to another list for a time, and then shift back to the first list. You get the illusion that the computer is doing several things at the same time because it shifts between instruction lists very quickly, dozens of times a second. Just as the separate frames of an animated cartoon blur together into an illusion of continuous motion, the computer switches so fast you cannot perceive the changes.
Multitasking is not an ability of a microprocessor. Even when you're doing six things at once on your computer, the microprocessor is still doing one thing at a time; it simply executes whatever instructions it is handed, one after another. The mediator of the multitasking is your operating system, basically a master program. It keeps track of every program (and subprogram) that's running—including itself—and decides which gets a given moment of the microprocessor's time for executing instructions.
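The switching described above can be sketched as a simple round-robin loop. The "operating system" here is just a queue of instruction lists and a slice length; the program names and instruction labels are invented for the example:

```python
from collections import deque

def scheduler(programs, slice_len=2):
    ready = deque(programs)              # the task list the OS maintains
    trace = []                           # what actually ran, in order
    while ready:
        name, instructions = ready.popleft()
        for _ in range(slice_len):       # run one time slice of this program
            if not instructions:
                break
            trace.append((name, instructions.pop(0)))
        if instructions:                 # unfinished? to the back of the queue
            ready.append((name, instructions))
    return trace

trace = scheduler([
    ("word processor", ["i1", "i2", "i3"]),
    ("mp3 player",     ["j1", "j2", "j3"]),
])
print(trace)
```

The trace interleaves the two programs a slice at a time; speed the switching up to thousands of slices per second and the interleaving becomes invisible, just like the frames of a cartoon.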
Give a computer a program, and it's like a runaway freight train. Nothing can stop it. It keeps churning through instructions until it reaches the last one. That's great if what you want is the answer, whenever it may arrive. But if you have a task that needs immediate attention—say a block of data has just arrived from the Internet—you don't want to wait forever for one program to end before you can start another.
To add immediacy and interactivity to microprocessors, chip designers incorporate a feature called the interrupt. An interrupt is basically a signal to the microprocessor to stop what it is doing and turn its attention to something else. Intel microprocessors understand two kinds of interrupts: software and hardware.
A software interrupt is simply a special instruction in a program that's controlling the microprocessor. Instead of adding, subtracting, or whatever, the software interrupt causes program execution to temporarily shift to another section of code in memory.
A hardware interrupt causes the same effect but is controlled by special signals outside of the normal data stream. The only problem is that microprocessors recognize far fewer interrupts than would be useful; only two interrupt signal lines are provided. One of these is a special case, the Non-Maskable Interrupt. The other line is shared by all system interrupts. The support hardware of your computer multiplies the number of hardware interrupts so that all devices that need them have the ability to interrupt the microprocessor.
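The diversion an interrupt causes can be sketched in miniature. Everything here is invented for illustration (real interrupt handling involves saving processor state in hardware and vectoring through a table), but the flow is the same: at each step the processor checks for a pending interrupt, runs the handler if one arrived, then resumes the main program where it left off.

```python
def run_with_interrupts(program, handlers, arrivals):
    """arrivals maps a step number to the interrupt raised at that step."""
    log = []
    for pc, instruction in enumerate(program):
        if pc in arrivals:                        # was an interrupt line raised?
            log.extend(handlers[arrivals[pc]])    # divert to the handler's code
        log.append(instruction)                   # then resume the main program
    return log

log = run_with_interrupts(
    ["add", "sub", "mul"],
    handlers={"net": ["save state", "read packet", "restore state"]},
    arrivals={1: "net"},                          # data arrives mid-program
)
print(log)
```

The log shows the handler's instructions wedged between "add" and "sub": the main program never ends early, it just pauses while the urgent work is done.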
Microprocessors do not carry out instructions as soon as the instruction code signals reach the pins that connect the microprocessor to your computer's circuitry. If chips did react immediately, they would quickly become confused. Electrical signals cannot change state instantly; they always go through a brief, though measurable, transition period—a period of indeterminate voltage level during which the signals would probably perplex a microprocessor into a crash. Moreover, all signals do not necessarily change at the same rate, so when some signals reach the right values, others may still be at odd values. As a result, a microprocessor must live through long periods of confusion during which its signals are at best meaningless, at worst dangerous.
To prevent the microprocessor from reacting to these invalid signals, the chip waits for an indication that it has a valid command to carry out. It waits until it gets a "Simon says" signal. In today's computers, this indication is provided by the system clock. The clock sends out regular voltage pulses, the electronic equivalent of the ticking of a grandfather clock. The microprocessor checks the instructions given to it each time it receives a clock pulse, provided it is not already busy carrying out another instruction.
Early microprocessors were unable to carry out even one instruction every clock cycle. Vintage microprocessors may require as many as 100 discrete steps (and clock pulses) to carry out a single instruction. The number of cycles required to carry out instructions varies with the instruction and the microprocessor design. Some instructions take a few cycles, others dozens. Moreover, some microprocessors are more efficient than others in carrying out their instructions. The trend today is to minimize and equalize the number of clock cycles needed to carry out a typical instruction.
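The effect of cycles per instruction on throughput is simple arithmetic. The figures below are illustrative round numbers, not the specifications of any real chip:

```python
def instructions_per_second(clock_hz, cycles_per_instruction):
    """Effective instruction rate = clock rate / average cycles per instruction."""
    return clock_hz / cycles_per_instruction

# A vintage design needing many clock cycles per instruction...
old = instructions_per_second(2_000_000, 20)      # 2 MHz clock, 20 cycles each
# ...versus a modern design approaching one cycle per instruction.
new = instructions_per_second(2_000_000_000, 1)   # 2 GHz clock, 1 cycle each
print(old, new)   # 100000.0 2000000000.0
```

The comparison shows why designers work both ends of the equation: raising the clock rate and cutting the cycle count per instruction multiply together.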
When you want to squeeze every last bit of performance from a computer, you can sometimes tinker with its timing settings. You can up the pace at which its circuits operate, thus making the system faster. This technique also forces circuits to operate at speeds higher than those they were designed for, thus compromising the reliability of the computer's operations. Tinkerers don't worry about such things, believing that most circuits have such a wide safety margin that a little boost will do no harm. The results of their work may delight them—they might eke 10 percent or more extra performance from a computer—but these results might also surprise them when the system operates erratically and shuts down randomly. This game is called overclocking because it forces the microprocessor in the computer to operate at a clock speed that's over its ratings.
Overclocking also takes a more insidious form. Unscrupulous semiconductor dealers sometimes buy microprocessors (or memory chips or other speed-rated devices) and change their labels to reflect higher-speed potentials (for example, buying a 2.2GHz Pentium 4 and altering its markings to say 2.53GHz). A little white paint increases the market value of some chips by hundreds of dollars. It also creates a product that is likely to be operated out of its reliable range. Intel introduced internal chip serial numbers with the Pentium III to help prevent this form of fraud. From the unalterable serial number of the chip, the circuitry of a computer can figure out the factory-issue speed rating of the chip and automatically adjust itself to the proper speed.