No matter the designation or origin, all microprocessors in today's Windows-based computers share a common heritage. All are direct descendants of the very first microprocessor. The instruction set used by all current computer microprocessors is rooted in the instructions selected for that first-ever chip. Even the fastest of today's Pentium 4 chips has, hidden in its millions of transistors, the capability of acting exactly like that first chip.
In a way, that's good because this backward-looking design assures us that each new generation of microprocessor remains compatible with its predecessors. When a new chip arrives, manufacturers can plug it into a computer and give you reasonable expectations that all your old software will still work. But holding to the historical standard also heaps extra baggage on chip designs that holds back performance. By switching to a radically new design, engineers could create a faster, simpler microprocessor—one that could run circles around any of today's chips, but, alas, one that can't use any of your current programs or operating systems.
The history of the microprocessor stretches back to a 1969 request to Intel by a now-defunct Japanese calculator company, Busicom. The original plan was to build a series of calculators, each one different and each requiring a custom integrated circuit. Using conventional IC technology, the project would have required the design of 12 different chips. The small volumes of each design would have made development costs prohibitive.
Intel engineer Marcian E. (Ted) Hoff had a better idea, one that could slash the necessary design work. Instead of a collection of individually tailored circuits, he envisioned creating one general-purpose device that would satisfy the needs of all the calculators. Hoff laid out an integrated circuit with 2,300 transistors using 10-micron design rules with four-bit registers and a four-bit data bus. Using a 12-bit multiplexed addressing system, it was able to address 640 bytes of memory for storing subproducts and results.
Most amazing of all, once fabricated, the chip worked. It became the first general-purpose microprocessor, which Intel put on sale as the 4004 on November 15, 1971.
The chip was a success. Not only did it usher in the age of low-cost calculators, it also gave designers a single solid-state programmable device for the first time. Instead of designing the digital decision-making circuits in products from scratch, developers could buy an off-the-shelf component and tailor it to their needs simply by writing the appropriate program.
With the microprocessor's ability to handle numbers proven, the logical next step was to enable chips to deal with a broader range of data, including text characters. The 4004's narrow four-bit design was sufficient for encoding only numbers and basic operations—a total of 16 symbols. The registers would need to be wider to accommodate a wider repertory. Rather than simply bump up the registers a couple of bits, Intel's engineers chose to go double and design a full eight-bit microprocessor with eight-bit registers and an eight-bit data bus. The wider design also endowed the chip with the ability to address a full 16KB of memory using 14 multiplexed address lines. The result, which required a total of 3,450 transistors, was the Intel 8008, introduced in April 1972.
Intel continued development (as did other integrated circuit manufacturers) and, in April 1974, created a rather more drastic revision, the 8080, which required nearly twice as many transistors (6,000) as the earlier chip. Unlike the 8008, the new 8080 chip was planned from the start for byte-size data. Intel gave the 8080 a 16-bit address bus that could handle a full 64KB of memory and a richer command set, one that embraced all the commands of the 8008 but went further. This set a pattern for Intel microprocessors: Every increase in power and range of command set enlarged on what had gone before rather than replacing it, thus ensuring backward compatibility (at least to some degree) of the software. To this day, the Intel-architecture chips used in personal computers can run program code written using 8080 instructions. From the 8080 on, the story of the microprocessor is simply one of improvements in fabrication technology and increasingly complex designs.
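The pattern in these early chips follows directly from binary arithmetic: each additional address line doubles the reachable memory. A short sketch makes the figures in the text concrete (the chip names and line counts come from the passage above):

```python
# Addressable memory grows as 2**n bytes, where n is the number of address lines.
chips = {
    "8008": 14,  # 14 multiplexed address lines
    "8080": 16,  # full 16-bit address bus
}
for name, lines in chips.items():
    size = 2 ** lines
    print(f"{name}: 2**{lines} = {size:,} bytes = {size // 1024}KB")
```

Running this confirms the 8008's 16KB reach and the 8080's 64KB reach quoted in the text.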
With each new generation of microprocessor, manufacturers relied on improving technology in circuit design and fabrication to increase the number and size of the registers in each microprocessor, broadening the data and address buses to match. When that strategy stalled, they moved to superscalar designs with multiple pipelines. Improvements in semiconductor fabrication technology made the increasing complexity of modern microprocessor designs both practical and affordable. In the three decades since the introduction of the first microprocessor, the linear dimensions of semiconductor circuits have shrunk to less than 1/75th their original size, from 10-micron design rules to 0.13 micron, which means microprocessor-makers can squeeze nearly 6,000 transistors where only one fit originally. This size reduction also facilitates higher speeds. Today's microprocessors run more than 23,000 times faster than the first chip out of the Intel foundry, 2.5GHz in comparison to the 108KHz of the first 4004 chip.
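The scaling figures above are easy to verify: a linear shrink in design rules multiplies transistor density by its square, and the speed comparison is a simple ratio. A quick check:

```python
# Linear shrink from 10-micron to 0.13-micron design rules.
linear = 10 / 0.13        # about 77x smaller in each dimension
density = linear ** 2     # area density scales as the square: about 5,900x
speed = 2.5e9 / 108e3     # 2.5GHz Pentium 4 vs. the 108KHz 4004: about 23,000x

print(round(linear), round(density), round(speed))
```

The squared relationship is why a roughly 77-fold linear shrink yields close to the 6,000-fold transistor density gain cited in the text.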
Personal Computer Influence
The success of the personal computer marked a major turning point in microprocessor design. Before the PC, microprocessor engineers designed what they regarded as the best possible chips. Afterward, they focused their efforts on making chips for PCs. This change came between what is now regarded as the first generation of Intel microprocessors and the third generation, in the years 1981 to 1987.
The engineers who designed the IBM Personal Computer chose to use a chip from the Intel 8086 family. Intel introduced the 8086 chip in 1978 as an improvement over its first chips. Intel's engineers doubled the size of the registers in its 8080 to create a chip with 16-bit registers and about 10 times the performance. The 16-bit design carried through completely, also doubling the size of the data bus of earlier chips to 16 bits to move information in and out twice as fast.
In addition, Intel broadened the address bus from 16 bits to 20 bits to allow the 8086 to directly address up to one megabyte of RAM. Intel divided this memory into 64KB segments to make programming and the transition to the new chip easier. A single 16-bit register could address any byte in a given segment. Another, separate register indicated which of the segments that address was in.
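The segment-and-offset scheme combines the two 16-bit registers into one 20-bit address by shifting the segment value left four bits and adding the offset. A minimal sketch (the function name is illustrative):

```python
def real_mode_address(segment: int, offset: int) -> int:
    """8086 real-mode physical address: segment shifted left 4 bits, plus offset.
    The result is masked to 20 bits, matching the 20-line address bus."""
    return ((segment << 4) + offset) & 0xFFFFF

# The same physical byte can be reached through different segment:offset pairs:
print(hex(real_mode_address(0x1234, 0x0005)))  # 0x12345
print(hex(real_mode_address(0x1000, 0x2345)))  # 0x12345
```

Note that many segment:offset pairs alias the same physical byte, a quirk that programmers of the era had to keep in mind.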
A year after the introduction of the 8086, Intel introduced the 8088. The new chip was identical to the 8086 in every way—16-bit registers, 20 address lines, and the same command set—except one. Its data bus was reduced to eight bits, enabling the 8088 to exploit readily available eight-bit support hardware. Beyond that, the 8088 broke no new ground and should have been little more than a footnote in the history of the microprocessor. However, its compromise design that mated 16-bit power with cheap 8-bit support chips made the 8088 IBM's choice for its first personal computer. With that, the 8088 entered history as the second most important product in the development of the microprocessor, after the ground-breaking 4004.
After the release of the 8086, Intel's engineers began to work on a successor chip with even more power. Designated the 80286, the new chip was to feature several times the speed and 16 times more addressable memory than its predecessors. Inherent in its design was the capability of multitasking, with new instructions for managing tasks and a new operating mode, called protected mode, that made its full 16MB of memory fodder for advanced operating systems.
The 80286 chip itself was introduced in 1982, but its first major (and most important) application didn't come until 1984 with the introduction of IBM's Personal Computer AT. Unfortunately, this development work began before the PC arrived, and few of the new features were compatible with the personal computer design. The DOS operating system for PCs and all the software that ran under it could not take advantage of the chip's new protected mode—which effectively put most of the new chip's memory off limits to PC programs.
With all its innovations ignored by PCs, the only thing the 80286 had going for it was its higher clock speed, which yielded better computer performance. The 80286 was initially released running at 6MHz, and the computers it powered quickly climbed to 8MHz, and then 10MHz. Versions operating at 12.5MHz, 16MHz, 20MHz, and ultimately 24MHz were eventually marketed.
The 80286 proved to be an important chip for Intel, although not because of any enduring success. It taught the company's engineers two lessons. First was the new importance of the personal computer to Intel's microprocessor market. Second was licensing. Although the 80286 was designed by Intel, the company licensed the design to several manufacturers, including AMD, Harris Semiconductor, IBM, and Siemens. Intel granted these licenses not only for income but also to assure the chip buyers that they had alternate sources of supply for the 80286, just in case Intel went out of business. At the time, Intel was a relatively new company, one of many struggling chipmakers. With the success of the PC and its future ensured, however, Intel would never again license its designs so freely.
Even before the 80286 made it to the marketplace, Intel's engineers were working on its successor, a chip designed with the power of hindsight. By then they could see the importance that the personal computer's primeval DOS operating system had on the microprocessor market, so they designed to match DOS instead of some vaguely conceived successor. They also added in enough power to make the chip a fearsome competitor.
The next chip, the third generation of Intel design, was the 80386. Two features distinguish it from the 80286: a full 32-bit design, for both data and addressing, and the new Virtual 8086 mode. The first gave the third generation unprecedented power. The second made that power useful.
Moreover, Intel learned to tailor the basic microprocessor design to specific niches in the marketplace. In addition to the mainstream microprocessor, the company saw the need to introduce an "entry level" chip, which would enable computer makers to sell lower-cost systems, and a version designed particularly for the needs of battery-powered portable computers. Intel renamed the mainstream 80386 as the 386DX, designated an entry-level chip the 386SX (introduced in 1988), and reengineered the same logic core for low-power applications as the 386SL (introduced in 1990).
The only difference between the 386DX and 386SX was that the latter had a 16-bit external data bus whereas the former had a 32-bit external bus. Internally, however, both chips had full 32-bit registers. The origin of the D/S nomenclature is easily explained. The external bus of the 386DX handled double words (32 bits), and that of the 386SX, single words (16 bits).
Intel knew it had a winner and severely restricted its licensing of the 386 design. IBM (Intel's biggest customer at the time) got a license only by promising not to sell chips. It could only market the 386-based microprocessors it built inside complete computers or on fully assembled motherboards. AMD won its license to duplicate the 386 in court based on technology-sharing agreements with Intel dating before even the 80286 had been announced. Another company, Chips and Technologies, reverse-engineered the 386 to build clones, but these were introduced too late—well after Intel advanced to its fourth generation of chips—to see much market success.
Age of Refinement
The 386 established Intel Architecture in essentially its final form. Later chips differ only in details. They have no new modes. Although Intel has added new instructions to the basic 386 command set, almost any commercial software written today will run on any Intel processor all the way back to the 386—but not likely any earlier processor, if the software is Windows based. The 386 design had proven itself and had become the foundation for a multibillion-dollar software industry. The one area for improvement was performance. Today's programs may run on a 386-based machine, but they are likely to run very slowly. Current chips are about 100 times faster than any 386.
The next major processor after the 386 was, as you might expect, the 486. Even Intel conceded its new chip was basically an improved 386. The most significant difference was that Intel added three features that could boost processing speed by working around handicaps in circuitry external to the microprocessor. These innovations included an integral Level One cache that helped compensate for slow memory systems, pipelining within the microprocessor to get more processing power from low clock speeds, and an integral floating-point unit that eliminated the handicap of an external connection. As this generation matured, Intel added one further refinement that let the microprocessor race ahead of laggardly support circuits—splitting the chip so that its core logic and external bus interface could operate at different speeds.
Intel introduced the first of this new generation in 1989 in the form of a chip then designated 80486, continuing with its traditional nomenclature. When the company added other models derived from this basic design, it renamed the then-flagship chip as the 486DX and distinguished lower-priced models by substituting the SX suffix and low-power designs for portable computers using the SL designation, as it had with the third generation. Other manufacturers followed suit, using the 486 designation for their similar products—and often the D/S indicators for top-of-the-line and economy models.
In the 486 family, however, the D/S split does not distinguish the width of the data bus. The designations had become disconnected from their origins. Instead, Intel economized on the SX version by eliminating the integral floating-point unit. The savings from this strategy was substantial—without the floating-point circuitry, the 486SX required only about half the silicon of the full-fledged chip, making it cheaper to produce. In the first runs of the 486SX, however, the difference was more a matter of marketing than manufacturing. The SX chips were identical to the DX chips except that their floating-point circuitry was either defective or deliberately disabled to make a less capable processor.
As far as hardware basics are concerned, the 486 series retained the principal features of the earlier generation of processors. Chips in both the third and fourth generations have three operating modes (real, protected, and virtual 8086), full 32-bit registers, and a 32-bit address bus enabling up to 4GB of memory to be directly addressed. Both support virtual memory that extends their addressing to 64TB. Both have built-in memory-management units that can remap memory in 4KB pages.
But the hardware of the 486 also differs substantially from the 386 (or any previous Intel microprocessor). The pipelining in the core logic allows the chip to work on parts of several instructions at the same time. At times the 486 could carry out one instruction every clock cycle. Tighter silicon design rules (smaller details etched into the actual silicon that makes up the chip) gave the 486 more speed potential than preceding chips. The small but robust 8KB integral primary cache helped the 486 work around the memory wait states that plagued faster 386-based computers.
The streamlined hardware design (particularly pipelining) meant that the 486-level microprocessors could think faster than 386 chips when the two operated at the same clock speed. On most applications, the 486 proved about twice as fast as a 386 at the same clock rate, so a 20MHz 486 delivered about the same program throughput as a 40MHz 386.
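The clock-for-clock comparison above amounts to simple arithmetic: throughput is roughly instructions-per-clock times clock rate. A sketch using the text's approximate 2-to-1 ratio (the IPC values are illustrative, not measured figures):

```python
# Rough throughput model: instructions per second ~ IPC x clock (MHz).
ipc_386 = 1.0   # illustrative baseline
ipc_486 = 2.0   # the text puts the 486 at about twice the work per clock

throughput_386_at_40 = ipc_386 * 40   # 40MHz 386
throughput_486_at_20 = ipc_486 * 20   # 20MHz 486

print(throughput_386_at_40 == throughput_486_at_20)  # True
```

This is why raw megahertz alone never told the whole performance story, a lesson that recurs with every architectural generation.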
In March 1993, Intel introduced its first superscalar microprocessor, the first chip to bear the designation Pentium. At the time the computer industry expected Intel to continue its naming tradition and label the new chip the 80586. In fact, the competition was banking on it. Many had already decided to use that numerical designation for their next generation of products. Intel, however, wanted to distinguish its new chip from any potential clones and establish its own recognizable brand on the marketplace. Getting trademark protection for the 586 designation was unlikely. A federal court had earlier ruled that the 386 numeric designation was generic—that is, it described a type of product rather than something exclusive to a particular manufacturer—so trademark status was not available for it. Intel coined the word Pentium because it could get trademark protection. It also implied the number 5, signifying fifth generation, much as "586" would have.
Intel has used the Pentium name quite broadly as the designation for mainstream (or desktop performance) microprocessors, but even in its initial usage the singular Pentium designation obscured changes in silicon circuitry. Two very different chips wear the plain designation "Pentium." The original Pentium began its life under the code name P5 and was the designated successor to the 486DX. Characterized by 5-volt operation, low operating speeds, and high power consumption, the P5 was available only at two speeds: 60MHz and 66MHz. Later, Intel refined the initial Pentium design as the P54C (another internal code name), with tighter design rules and lower voltage operation. These innovations raised the speed potential of the design, and commercial chips gradually stepped up from 75MHz to 200MHz. The same basic design underlies the Pentium OverDrive (or P24T) processor used for upgrading 486-based PCs.
In January 1997, Intel enhanced the Pentium instruction set to better handle multimedia applications and created Pentium Processor with MMX Technology (code-named P55C during development). These chips also incorporated a larger on-chip primary memory cache, 32KB.
To put the latest in Pentium power in the field, Intel reengineered the Pentium with MMX Technology chip for low-power operation to make the Mobile Pentium with MMX Technology chip, also released in January 1997. Unlike the deskbound version, the addressing capability of the mobile chip was enhanced by four more lines to allow direct access to 64GB of physical memory.
The Pentium was Intel's last CISC design. Other manufacturers were adapting RISC designs to handle the Intel instruction set and achieving results that put Intel on notice. The company responded with its own RISC-based design in 1995 that became the standard Intel core logic until the introduction of the Pentium 4 in the year 2000. Intel developed this logic core under the code name P6, and it has appeared in a wide variety of chips, including those bearing the names Pentium Pro, Pentium II, Celeron, Xeon, and Pentium III.
That's not to say all these chips are the same. Although the entire series uses essentially the same execution units, the floating-point unit continued to evolve throughout the series. The Pentium Pro incorporates a traditional floating-point unit. That of the Pentium II is enhanced to handle the MMX instruction set. The Pentium III adds Streaming SIMD Extensions. In addition, Intel altered the memory cache and bus of these chips to match the requirements of particular market segments, distinguishing the Celeron and Xeon lines from the plain Pentium series.
The basic P6 design uses its own internal circuits to translate classic Intel instructions into micro-ops that can be processed in a RISC-based core, which has been tuned using all the RISC design tricks to massage extra processing speed from the code. Intel called this design Dynamic Execution. In the standard language of RISC processors, Dynamic Execution merely indicates a combination of out-of-order instruction execution and the underlying technologies that enable its operation (branch prediction, register renaming, and so on).
The P6 pipeline has 12 stages, divided into three sections: an in-order fetch/decode stage, an out-of-order execution/dispatch stage, and an in-order retirement stage. The design is superscalar, incorporating two integer units and one floating-point unit.
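The key idea of the P6's Dynamic Execution is that one complex memory-to-register instruction breaks down into several simple micro-ops that a RISC-style core can schedule freely. A toy decoder illustrates the principle; the instruction syntax and micro-op names here are invented for the example and do not reflect Intel's actual encoding:

```python
# Toy decoder: a read-modify-write x86-style instruction becomes several
# simple micro-ops (load / modify / store) that can be scheduled out of order.
def decode(instruction: str) -> list[str]:
    op, dst, src = instruction.replace(",", "").split()
    if dst.startswith("["):  # memory destination needs a load/modify/store split
        return [f"load tmp, {dst}", f"{op.lower()} tmp, {src}", f"store {dst}, tmp"]
    return [f"{op.lower()} {dst}, {src}"]  # register-only ops pass straight through

print(decode("ADD [mem], eax"))
```

The real translator is vastly more elaborate, but this is the transistor overhead the text mentions: circuitry spent converting classic Intel instructions into a form the RISC core can race through.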
One look and there's no mistaking the Pentium Pro. Instead of a neat square chip, it's a rectangular giant. Intel gives this package the name Multi-Chip Module (MCM). It is also termed a dual-cavity PGA (pin-grid array) package because it holds two distinct slices of silicon, the microprocessor core and secondary cache memory. This was Intel's first chip with an integral secondary cache. Notably, this design results in more pins than any previous Intel microprocessor and a new socket requirement, Socket 8 (discussed earlier).
The main processor chip of the Pentium Pro uses the equivalent of 5.5 million transistors. About 4.5 million of them are devoted to the actual processor itself. The other million provide the circuitry of the chip's primary cache, which provides a total of 16KB storage bifurcated into separate 8KB sections for program instructions and data. Compared to true RISC processors, the Pentium Pro uses about twice as many transistors. The circuitry that translates instructions into RISC-compatible micro-ops requires the additional transistor logic.
The integral secondary RAM cache fits onto a separate slice of silicon in the other cavity of the MCM. Its circuitry involves another 15.5 million transistors for 256KB of storage and operates at the same speed as the core logic of the rest of the Pentium Pro.
The secondary cache connects with the microprocessor core logic through a dedicated 64-bit bus, termed a back-side bus, that is separate and distinct from the 64-bit front-side bus that connects to main memory. The back-side bus operates at the full internal speed of the microprocessor, whereas the front-side bus operates at a fraction of the internal speed of the microprocessor.
The Pentium Pro bus design superficially appears identical to that of the Pentium, with 32-bit addressing, a 64-bit data path, and a maximum clock rate of 66MHz. Below the surface, however, Intel enhanced the design by shifting to a split-transaction protocol. Whereas the Pentium (and, indeed, all previous Intel processors) handled memory accessing as a two-step process, sending an address out the bus on one clock cycle and reading the data back on the next, the Pentium Pro can put an address on the bus at the same time it reads data from a previously posted address. Because the address and data buses use separate lines, these two operations can occur simultaneously. In effect, the throughput of the bus can nearly double without an increase in its clock speed.
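A simple cycle-count model shows why overlapping address and data phases nearly doubles throughput. The model below is a deliberate simplification (it assumes one cycle per phase and ignores wait states), but it captures the arithmetic:

```python
# Two-step bus: every access costs an address cycle plus a data cycle.
def sequential_cycles(accesses: int) -> int:
    return 2 * accesses

# Split-transaction bus: the next address overlaps the current data transfer,
# so after the first address cycle, one access completes every cycle.
def pipelined_cycles(accesses: int) -> int:
    return accesses + 1

n = 100
print(sequential_cycles(n), pipelined_cycles(n))  # 200 vs 101
```

For long runs of accesses the ratio approaches two, which matches the text's claim of nearly doubled throughput at the same clock speed.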
The internal bus interface logic of the Pentium Pro is designed for multiprocessor systems. Up to four Pentium Pro chips can be directly connected together, pin for pin, without any additional support circuitry. The computer's chipset arbitrates the combination.
One underlying reason for the cartridge-style design is to accommodate the Pentium II's larger secondary cache, which is not integral to the chip package but rather co-mounted on the circuit board inside the cartridge. The 512KB of static cache memory connects through a 64-bit back-side bus. Note that the secondary cache memory of a Pentium II operates at one-half the speed of the core logic of the chip itself. This reduced speed is, of course, a handicap. It was a design expediency. It lowers the cost of the technology, allowing Intel to use off-the-shelf cache memory (from another manufacturer, at least initially) in a lower-cost package. The Pentium II secondary cache design has another limitation. Although the Pentium II can address up to 64GB of memory, its cache can track only 512MB. The Pentium II also has a 32KB primary cache that's split with 16KB assigned to data and 16KB to instructions. Table 5.3 summarizes the Intel Pentium II line.
Mobile Pentium II
To bring the power of the Pentium II processor to notebook computers, Intel reengineered the desktop chip to reduce its power consumption and altered its packaging to fit slim systems. The resulting chip—the Mobile Pentium II, introduced on April 2, 1998—preserved the full power of the Pentium II while sacrificing only its multiprocessor support. The power savings come from two changes. The core logic of the Mobile Pentium II is specifically designed for low-voltage operation and has been engineered to work well with higher external voltages. It also incorporates an enriched set of power-management modes, including a new QuickStart mode that essentially shuts down the chip, except for the logic that monitors for bus activity by the PCI bridge chip, and allows the chip to wake up when it's needed. This design, because it does not monitor for other processor activity, prevents the Mobile Pentium II from being used in multiprocessor applications. The Mobile Pentium II can also switch off its cache clock during its sleep or QuickStart states.
Initially, the Mobile Pentium II shared the same P6 core logic design and cache design with the desktop Pentium II (full-speed 32KB primary cache and half-speed 512KB secondary cache inside its mini-cartridge package). However, as fabrication technology improved, Intel was able to integrate the secondary cache on the same die as the processor core, and on January 25, 1999, the company introduced a new version of the Mobile Pentium II with an integral 256KB cache operating at full core speed. Unlike the Pentium II, the mobile chip has the ratio between its core and bus clocks fixed at the factory to operate with a 66MHz front-side bus. Table 5.4 lists the introduction dates and basic characteristics of the Mobile Pentium II models.
Pentium II Celeron
Introduced on March 4, 1998, the Pentium II Celeron was Intel's entry-level processor derived from the Pentium II. Although it had the same processor core as what was at the time Intel's premier chip (the second-generation Pentium II with 0.25-micron design rules), Intel trimmed the cost of building the chip by eliminating the integral 512KB secondary (Level Two) memory cache installed in the Pentium II cartridge. The company also opted to lower the packaging cost of the chip by omitting the metal outer shell of the full Pentium II and instead leaving the Celeron's circuit board substrate bare. In addition, the cartridge-based Celeron package lacked the thermal plate of the Pentium II and the latches that secure it to the slot. Intel terms the Celeron a Single Edge Processor Package to distinguish it from the Single Edge Contact cartridge used by the Pentium II.
In 1999, Intel introduced a new, lower-cost package for the Celeron, a plastic pin-grid array (PPGA) shell that looks like a first generation Pentium on steroids. It has 370 pins and mates with Intel's PGA370 socket. The chip itself measures just under two inches square (nominally 49.5 millimeters) and about three millimeters thick, not counting the pins, which hang down another three millimeters or so (the actual specification is 3.05 to 3.30 millimeters).
When the Celeron chip was initially introduced, the absence of a cache took such a toll on performance that Intel was forced by market pressure to revise its design. In August 1998, the company added a 128KB cache operating at one-half core speed to the Celeron. Code names distinguished the two chips: The first Celeron was code-named Covington during development; the revised chip was code-named Mendocino. Intel further increased the cache to 256KB on October 2, 2001, with the introduction of a 1.2GHz Celeron variant.
Intel also distinguished the Celeron from its more expensive processor lines by limiting its front-side bus speed to 66MHz. All Celerons sold before January 3, 2001 were limited to that speed. With the introduction of the 800MHz Celeron, Intel kicked the chip's front-side bus up to 100MHz. With the introduction of a 1.7GHz Celeron on May 2, 2002, Intel started quad-clocking the chip's front-side bus, yielding an effective data rate of 400MHz.
Intel also limited the memory addressing of the Celeron to 4GB of physical RAM by omitting the four highest address bus signals used by the Pentiums II and III from the Celeron pin-out. The Celeron does not support multiprocessor operation, and, until Intel introduced the Streaming SIMD Extensions to the 1.2GHz version, the Celeron understood only the MMX extension to the Intel instruction set.
Table 5.5 lists the features and introduction dates of various Celeron models.
Pentium II Xeon
In 1998, Intel sought to distinguish its higher performance microprocessors from its economy line. In the process, the company created the Xeon, a refined Pentium II microprocessor core enhanced by a higher-speed memory cache, one that operated at the same clock rate as the core logic of the chip.
At heart, the Xeon is a full 32-bit microprocessor with a 64-bit data bus, as with all Pentium-series processors. Its address bus provides for direct access to up to 64GB of RAM. The internal logic of the chip allows for up to four Xeons to be linked together without external circuitry to form powerful multiprocessor systems.
A sixth generation processor, the Xeon is a Pentium Pro derivative by way of the standard Pentium II. It incorporates two 12-stage pipelines to make what Intel terms Dynamic Execution micro-architecture.
The Xeon incorporates two levels of caching. One is integral to the logic core itself, a primary 32KB cache split 16KB for instructions, 16KB for data. In addition, a separate secondary cache is part of the Xeon processor module but is mounted separately from the core logic on the cartridge substrate. This integral-but-separate design allows flexibility in configuring the Xeon. Current chips are available equipped with either 512KB or 1MB of L2 cache, and the architecture and slot design allow for secondary caches of up to 2MB. This integral cache runs at the full core speed of the microprocessor.
This design required a new interface, tagged Slot 2 by Intel.
Initially the core operating speed of the Xeon started where the Pentium II left off (at the time, 400MHz) and followed the Pentium II up to 450MHz.
The front-side bus of the Xeon was initially designed for 100MHz operation, although higher speeds are possible and expected. A set of contacts on the SEC cartridge allows the motherboard to adjust the multiplier that determines the ratio between front-side bus and core logic speed.
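The relationship the multiplier contacts encode is straightforward: core speed equals front-side bus speed times the multiplier. A quick sketch using the speeds the text cites for the original Xeon line:

```python
# Core clock = front-side bus clock x multiplier (set via cartridge contacts).
fsb = 100  # MHz, the Xeon's initial front-side bus speed
for multiplier in (4.0, 4.5):
    print(f"{fsb}MHz x {multiplier} = {fsb * multiplier:.0f}MHz core")
```

The 4x and 4.5x ratios reproduce the 400MHz and 450MHz core speeds of the first Xeon chips mentioned above.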
The independence of the logic core and cache is emphasized by the power requirements of the Xeon. Each section requires its own voltage level. The design of the Xeon allows Intel flexibility in the power requirements of the chip through a special coding scheme. A set of pins indicates the core voltage and the cache voltage required by the chip; the motherboard is expected to read this coding and deliver the required voltages. The Xeon design allows for core voltages as low as 1.8 volts or as high as 2.1 volts (the level required by the first chips). Cache voltage requirements may reach as high as 2.8 volts. Nominally, the Xeon is a 2-volt chip.
Overall, the Xeon is optimized for workstations and servers and features built-in support for up to four identical chips in a single computer. Table 5.6 summarizes the original Xeon product line.
Pentium II OverDrive
To give an upgrade path for systems originally equipped with the Pentium Pro processor, Intel developed a new OverDrive line of direct-replacement upgrades. These Pentium II OverDrive chips fit the same zero-insertion force Socket 8 used by the Pentium Pro, so you can slide one chip out and put the other in. Dual-processor systems can use two OverDrive upgrades. Intel warns that some systems may require a BIOS upgrade to accommodate the OverDrive upgrade.
The upgrade offers the revised design of the Pentium II (which means better 16-bit operation) as well as higher clock speeds. The chip also can earn an edge over ordinary Pentium II chips operating at the same speeds—the 512KB secondary cache in the OverDrive chip operates at full core logic speed, not half speed as in the Pentium II.