Just as some kinds of memory play different roles in the operation of your computer, the various kinds of memory work differently and are made differently. Read-write memory is optimized for speed. It has to run as fast as it can to try to keep up with the high speeds of modern microprocessors—and it inevitably loses the race. Read-only memory, on the other hand, takes a more leisurely look at what it remembers, opting for intransigence instead of incandescent speed.
Both major divisions of the memory world have also endured a long and twisted evolution. Read-write memory has chased ever-higher speed by dashing from one technology to the next. The old withers and dies, soon forgotten—and when not forgotten, expensive. Older-technology memory quickly becomes rare, and with rarity its price becomes dear. Unfortunately, you can rarely substitute faster, new-technology memory for the slower, older kind. (Here's a buying tip: When a new technology appears ready to ascend to dominance, you're well advised to stock up on the old—buy all you think you'll ever need.)
With read-only memory, the improvements come mostly in convenience and versatility. You can do more with newer kinds of read-only memory, but the old types remain useful. Nearly all kinds of read-only memory remain in production. That's good if you ever need to replace some, although you almost never do. But you do stand to reap the benefits of newer technologies. For example, without Flash memory, you'd have to stick a battery-hungry disk drive or some other primitive storage device in your digital camera or MP3 player.
Although read-only memory is usually made from the same silicon stuff as other memory circuits, silicon is merely a matter of convenience, not a necessity. After all, you can make read-only memory with a handful of wires and a soldering iron. (Okay, maybe only someone who knows what he is doing could, but you get the idea.) Any electrical circuit that can be permanently changed can be read-only memory.
The challenge for engineers has been to design read-only memory that's easier to change. That seeming contradiction has a practical basis. If someone had to solder a handful of wires every time a circuit needed ROM, a computer would be a complicated rat's nest of circuits just waiting for an unpropitious moment to fail. If that doesn't sound too different from the system on your desk, a machine with wires instead of silicon ROM would likely be as expensive as a battleship and about as easy to maneuver around your desk. If you wanted to change a bit of ROM—say, to make an upgrade so your system doesn't crash whenever AOL delivers your mail—you'd have to call a service technician to lug over his soldering iron and schematic diagrams.
Changeable ROM is upgradeable ROM. And engineers have made steady progress to make your upgrades easier. The first ROM was as unchangeable as your grandfather's mind. Then engineers developed chips you could change in special machines. After that they created chips that could be changed by special circuits inside your computer. Today, you can update read-only memory in a flash—and that's what they call it.
If ROM chips cannot be written by the computer, the information inside must come from somewhere. In one kind of chip, the mask ROM, the information is built into the memory chip at the time it is fabricated. The mask is a master pattern that's used to draw the various circuit elements on the chip during fabrication. When the circuit elements of the chip are grown on the silicon substrate, the pattern includes the information that will be read in the final device. Nothing, other than a hammer blow or its equivalent in destruction, can alter what is contained in this sort of memory.
Mask ROMs are not common in personal computers because they require that their programming be carried out when the chips are manufactured; changes are not easy to make, and the quantities that must be produced to make things affordable are daunting.
One alternative is the programmable read-only memory chip, or PROM. This style of circuit consists of an array of elements that work like fuses. Too much current flowing through a fuse causes the fuse element to overheat, melt, and interrupt the current flow, thus protecting equipment and wiring from overloads. The PROM uses fuses as memory elements. Normally, the fuses in a PROM conduct electricity just like the fuses that protect your home from electrical disaster. Like ordinary fuses, the fuses in a PROM can be blown to stop the electrical flow. All it takes is a strong enough electrical current, supplied by a special machine called a PROM programmer or PROM burner.
PROM chips are manufactured and delivered with all their fuses intact. The PROM is then customized for its given application using a PROM programmer to blow the fuses, one by one, according to the needs of the software to be coded inside the chip. This process is usually termed burning the PROM.
As with most conflagrations, the effects of burning a PROM are permanent. The chip cannot be changed to update or revise the program inside. PROMs are definitely not something for people who can't make up their minds—or for a fast changing industry.
Happily, technology has brought an alternative—the erasable programmable read-only memory chip, or EPROM. Sort of like a self-healing semiconductor, an EPROM can have its data erased and the chip reused for other data or programs.
EPROM chips are easy to spot because they have a clear window in the center of the top of their packages. Invariably this window is covered with a label of some kind, and with good reason. The chip is erased by shining high-intensity ultraviolet light through the window. If stray light should leak through the window, the chip could inadvertently be erased. (Normal room light won't erase the chip because it contains very little ultraviolet. Bright sunshine, however, contains enough ultraviolet to erase EPROMs.) Because of their versatility, permanent memory, and easy reprogrammability, EPROMs are ubiquitous inside personal computers.
A related chip is called electrically erasable programmable read-only memory, or EEPROM (usually pronounced double-E PROM). Instead of requiring a strong source of ultraviolet light, EEPROMs need only a higher than normal voltage (and current) to erase their contents. This electrical erasability brings an important benefit—EEPROMs can be erased and reprogrammed without popping them out of their sockets. EEPROM gives electrical devices such as computers and their peripherals a means of storing data without the need for a constant supply of electricity. Note that whereas EPROM must be erased all at once, each byte in EEPROM is independently erasable and writable. You can change an individual byte if you want. Consequently, EEPROM has won favor for storing setup parameters for printers and other peripherals. You can easily change individual settings yet still be assured the values you set will survive switching the power off.
EEPROM has one chief shortcoming—its cells can be erased only a finite number of times. Although most EEPROM chips will withstand tens or hundreds of thousands of erase-and-reprogram cycles, that's not good enough for general storage in a computer, which might change the contents of memory thousands of times each second you use your machine. Unlike ordinary RAM chips, in which you can alter any bit whenever you like as often as you like, every erase-and-reprogram cycle of an EEPROM cell shortens that cell's remaining life.
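A little arithmetic shows why write endurance rules EEPROM out as working memory. The figures below are illustrative assumptions for the sake of the calculation, not specifications from any particular chip:

```python
# Illustrative endurance arithmetic (assumed figures, not from a datasheet).
ENDURANCE_CYCLES = 100_000     # a generous upper-end EEPROM cycle rating
WRITES_PER_SECOND = 1_000_000  # a modest rate for main-memory traffic

# If every write landed on the same cell, that cell would wear out in:
seconds_to_wear_out = ENDURANCE_CYCLES / WRITES_PER_SECOND
print(seconds_to_wear_out)  # 0.1 second
```

Even spreading the writes evenly across millions of cells stretches the lifetime only by that factor, which is why wear-conscious drivers matter for the Flash-based disk emulators discussed later.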
Today's most popular form of nonvolatile memory—the stuff that makes the memory cards in your camera and MP3 player work—is Flash memory. Although strictly speaking, Flash is a kind of EEPROM, it earns its distinction (and value) by eliminating special circuits for reprogramming. Instead of requiring special, higher voltages to erase its contents, Flash memory can be erased and reprogrammed using the normal voltages inside a computer. Normal read and write operations use the standard power (from 3.3 to 5 volts) that is used by the rest of the computer's logic circuits. Only erasing requires a special voltage, one higher than usual called a super-voltage, which is typically 12 volts. Because the super-voltage is out of the range of normal memory operations, the contents of Flash memory remain safe whether the power is off or your computer is vigorously exercising its memory.
For system designers, the electrical reprogrammability of Flash memory makes it easy to use. Unfortunately, Flash memory is handicapped by the same limitation as EEPROM—its life is finite (although longer than ordinary EEPROM), and it must be erased and reprogrammed as one or more blocks instead of individual bytes.
The first generation of Flash memory made the entire memory chip a single block, so the entire chip had to be erased to reprogram it. Newer Flash memory chips have multiple, independently erasable blocks that may range in size from 4KB to 128KB. The old, all-at-once style of Flash ROM is now termed bulk erase Flash memory because of the need to erase it entirely at once.
New multiple-block Flash memory is manufactured in two styles. Sectored-erase Flash memory is simply divided up into multiple sectors that your computer can individually erase and reprogram. Boot block Flash memory specially protects one or more blocks from normal erase operations so that special data in it—such as the firmware that defines the operation of the memory—will survive ordinary erase procedures. Altering the boot block typically requires applying the super-voltage to the reset pin of the chip at the same time as performing an ordinary write to the boot block.
Although modern Flash memory chips can be erased only in blocks, most support random reading and writing. Once a block is erased, it contains no information; each cell reads as a logical one. Your system can read these blank cells, though without learning much. Standard write operations can change cell values from one to zero but cannot change them back. Once a given cell has been changed to a logical zero with a write operation, it will maintain that value until the Flash memory gets erased once again, even if the power to your system or the Flash memory chip fails.
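The one-way nature of Flash writes can be sketched in a few lines. The sketch follows the common NOR-flash convention, in which an erased cell reads as all ones and programming can only clear bits; restoring any cleared bit means erasing the whole block:

```python
ERASED = 0xFF  # an erased byte reads as all ones (common NOR-flash convention)

def program(current, data):
    # A write can only AND new data into the cell:
    # bits that are already cleared stay cleared.
    return current & data

b = ERASED
b = program(b, 0b10101010)  # clears some bits
b = program(b, 0b11001100)  # clears more; no write can set a bit back to one
assert b == 0b10001000      # only a block erase can restore the cleared bits
```

This is why Flash file systems accumulate updated copies of data in fresh cells and erase stale blocks later, rather than rewriting data in place.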
Flash memory is an evolving technology. The first generation of chips required that your computer or other device using the chips handle all the minutiae of the erase and write operations. Current-generation chips have their own onboard logic to automate these operations, making Flash memory act more like ordinary memory. The logic controls the timing of all the pulses used to erase and write to the chip, ensures that the proper voltages reach the memory cells, and even verifies that each write operation was carried out successfully.
On the other hand, the convenience of using Flash memory has led many developers to create disk emulators from it. For the most effective operation and longest life, however, these require special operating systems (or, more typically, special drivers for your existing operating system) that minimize the number of erase-and-reprogram cycles to prolong the life of the Flash memory.
In the single-minded quest for speed, engineers have developed and cast away a variety of technologies. Faster chips almost automatically result when designers shrink chip features to fit more megabytes on a silicon sliver, but making chips faster is easier said than done. By carefully designing chips to trim internal delays and taking advantage of the latest fabrication technologies, chip-makers can squeeze out some degree of speed improvement, but the small gains are hard won and expensive.
By altering the underlying design of the chips, however, engineers can wring out much greater performance increases, often with little increase in fabrication cost. In a quest for quicker response, designers have developed a number of new memory chip technologies. To understand how they work and gain their edge, you first need to know a bit about the design of standard memory chips.
The best place to begin a discussion of the speed limits and improvements in Dynamic Random Access Memory (DRAM) is with the chips themselves. The critical issue is how they arrange their bits of storage and allow them to be accessed.
The traditional metaphor for memory as the electronic equivalent of pigeonholes is apt. As with the mail sorter's pigeonholes, memory chips arrange their storage in a rectangular matrix of cells. A newer, better metaphor is the spreadsheet, because each memory cell is like a spreadsheet cell, uniquely identified by its position, expressed as the horizontal row and vertical column of the matrix in which it appears. To read or write a specific memory cell, you send the chip the row and column address, and the chip sends out the data.
In actual operation, chips are somewhat more complex. To keep the number of connections (and thus, cost) low, the addressing lines of most memory chips are multiplexed; that is, the same set of lines serves for sending both the row and column addresses to the chip.
To distinguish whether the signals on the address lines mean a row or a column, chips use two signals. The Row Address Strobe signal indicates that the address is a row, and the Column Address Strobe signal indicates that the address is a column. These signals are most often abbreviated as RAS and CAS, respectively, with each acronym crowned with a horizontal line indicating the signals are inverses (logical complements), meaning that they indicate "on" when they are "off." Just to give engineering purists nightmares (and to make things typographically easier and more understandable), this book uses a slightly different convention, putting a minus sign in front of the acronyms for the same effect. Multiplexing allows just 11 address lines plus the -RAS and -CAS signals to encode every possible memory cell address in a 4Mb chip (2^11 rows by 2^11 columns).
In operation, the memory controller in your computer first tells the memory chip the row in which to look for a memory cell and then the column the cell is in. In other words, the address lines accompanied by the -RAS signal select a row of memory cells; then a new set of signals on the address lines accompanied by the -CAS signal selects the desired storage cell within that row.
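The row/column split can be illustrated with a short sketch. It assumes an 11-bit row and 11-bit column address, enough for the 2^22 cells of a 4Mb chip; real chips vary in geometry:

```python
ROW_BITS = COL_BITS = 11  # assumed geometry for a 4Mb (2**22 cell) chip

def split_address(cell_addr):
    row = cell_addr >> COL_BITS               # sent first, latched by -RAS
    col = cell_addr & ((1 << COL_BITS) - 1)   # sent second, latched by -CAS
    return row, col

row, col = split_address(0x12345)
print(row, col)  # 36 837
assert (row << COL_BITS) | col == 0x12345    # the two halves recover the address
```

The memory controller performs exactly this split in hardware, which is how 22 bits of cell address squeeze through only 11 physical address pins.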
Even though electricity travels close to the speed of light, signals cannot change instantly. Changing all the circuits in a chip from row to column addressing takes a substantial time, at least in the nanosecond context of computer operations. This delay, together with the need for refreshing, is the chief limit on the performance of conventional memory chips. To speed up memory performance, chip designers have developed a number of clever schemes to sneak around these limits.
These memory technologies have steadily evolved. When ordinary DRAM chips proved too slow to accommodate microprocessor needs, chip-makers first tried static column memory before hitting on fast page mode DRAM chips (which proved the industry stalwart through 1995). Next, they tinkered with addressing cycles to create Extended Data Out (EDO). They even tried to improve EDO with a burst-mode version. But these conventional technologies proved too slow to keep up with gigahertz and faster microprocessors.
The current kinds of memory take different strategies. Lower-cost systems use SDRAM chips (discussed later in this chapter) to coax more speed from the conventional memory design. The quickest memory systems of the fastest computers break with all the past technologies to use a new form of memory addressing, commonly called Rambus.
Fast Page-Mode RAM
The starting point for discussions of modern memory is fast page-mode RAM, which was still an alternative for computers about five years ago. (Other kinds of memory preceded FPM chips, but the machines using them are obsolete today.)
FPM earns its name by allowing repeated access to memory cells within a given page to occur quickly. The memory controller first sends out a row address and then activates the -RAS signal. While holding the -RAS signal active, it then sends out a new address and the -CAS signal to indicate a specific cell. If the -RAS signal is kept active, the controller can then send out one or more additional new addresses, each followed by a pulse of the -CAS, to indicate additional cells within the same row. In memory parlance, the row is termed a page, hence, the name of the chips.
The chief benefit of this design is that your computer can rapidly access multiple cells in a single memory page. With typical chips, the access time within a page can be trimmed to 25 to 30 nanoseconds, fast enough to eliminate wait states in many computers. Of course, when your computer needs to shift pages, it must change both the row and column addresses with the consequent speed penalty.
Extended Data Out Memory
Rather than a radical new development, Extended Data Out (EDO) memory is a variation on fast page-mode memory (which allows waitless repeated access to bits within a single page of memory). The trick behind EDO is elegant. Standard page-mode chips turn off their data outputs when the -CAS line switches off, and in most chips a 10-nanosecond wait is required between issuing column addresses. EDO modifies the allowed timing for the -CAS signal: the data lines remain valid for a short period after the -CAS line switches off (by going high), so the chip keeps its data valid until it receives an additional signal. As a result, your system need not wait for a separate read cycle but can read (or write) data as fast as the chip allows address access; it does not have to wait for the data to appear before starting the next access. Eliminating the wait lets the memory deliver data to your system faster. For this scheme to work, however, your computer has to indicate when it has finished reading the data, which the memory controller does with the Output Enable signal.
In effect, EDO can remove additional wait states, thereby boosting memory performance. In theory, EDO could give a performance boost as high as 50 to 60 percent over FPM. In practice, in the last computers that used it, the best EDO implementations boosted performance by 10 to 20 percent.
Physically, EDO chips and SIMMs appear identical to conventional memory. Both use the same packaging. You can't tell a difference just by looking—unless you're intimately familiar with the part numbers. Telling the difference is important, however. You can't just plug EDO into any computer and expect it to work. It requires a completely different management system, which means the system (or at least its BIOS) must match the memory technology. Although you can install EDO SIMMs in most computers, they will work, if at all, as ordinary memory and deliver no performance advantage.
Note that the speed ratings of EDO chips are given in nanoseconds, much like page-mode chips. For a given nanosecond rating, however, EDO memory will act as if it is about 30 percent faster. For example, whereas a 70 ns page-mode chip can deliver zero wait state operation to a 25MHz memory bus, a 70 ns EDO chip can operate at zero wait states on a 33MHz bus.
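The arithmetic behind that rating difference is simple bus-cycle math: zero-wait operation means the memory must complete one access in every bus clock period.

```python
def bus_cycle_ns(bus_mhz):
    # Time available per access for zero-wait-state operation.
    return 1000.0 / bus_mhz

print(bus_cycle_ns(25))  # 40.0 ns per access on a 25MHz bus
print(bus_cycle_ns(33))  # about 30.3 ns per access on a 33MHz bus
```

Trimming EDO's roughly 10 ns inter-access wait is just about the difference between meeting the 40 ns budget of a 25MHz bus and the 30 ns budget of a 33MHz bus.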
Burst EDO DRAM
To gain more speed from EDO memory, Micron Technology added circuitry to the chip to make it match the burst mode used by Intel microprocessors since the 486. The new chips, called Burst EDO DRAM (BEDO), perform all read and write operations in four-cycle bursts. The same technology also goes by the more generic name pipeline nibble mode DRAM, because it uses a data pipeline to retrieve and send out the data in a burst.
The chips work like ordinary EDO or page-mode DRAM in that they send out data when the -CAS line goes active. However, instead of sending a single nibble or byte of data (depending on the width of the chip), a two-bit counter pulses the chip internally four times, each pulse dealing out one byte or nibble of data.
Although BEDO was relatively easy and inexpensive to fabricate, requiring a minimum of changes from ordinary EDO or page-mode DRAM, it never caught on with computer-makers, who opted to leapfrog the design and move to far faster technologies.
Because of their multiplexed operation, ordinary memory chips cannot operate in lock-step with their host microprocessors; normal addressing requires alternating cycles. By redesigning the basic chip interface, however, memory chips can make data available on every clock cycle. Because the resulting chips can (and should) operate in sync with their computer hosts, they are termed synchronous DRAM (SDRAM).
Although altering the interface of the chip may remove system bottlenecks, it does nothing to make the chip perform faster. To help SDRAM chips keep up with their quicker interface, a pipelined design is used. As with pipelined microprocessors, SDRAM chips are built with multiple, independently operating stages so that the chip can start to access a second address before it finishes processing the first. This pipelining extends only across column addresses within a given page.
Some SDRAM memory is registered, meaning the module carries a register that latches and re-drives the address and control signals on their way to the memory chips, relieving the motherboard's memory controller of that electrical load. Nonregistered modules take these signals directly from the memory controller. Through some strange quirk in engineering language, nonregistered memory is generally called unbuffered, whereas buffered memory is termed registered.
All SDRAM chips suffer from a delay between when your computer makes its request to read memory and the time valid data becomes available. Engineers call this delay CAS latency, because it is the measure of time between when your computer applies the Column Address Strobe signal and when data becomes available. They measure the delay in clock cycles. With today's memory products, the CAS latency is typically two or three, and it is a function of the memory chips themselves. Chips and memory modules are rated as to their CAS latency (although you may need to be an engineer to dig the information out of a product's data sheet). Knowing the CAS latency of memory is useful because it is one of the parameters you can adjust with some BIOS setup procedures to speed up your computer.
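Because CAS latency is counted in clock cycles, its cost in nanoseconds depends on the bus speed. A quick conversion sketch:

```python
def cas_latency_ns(cl_cycles, bus_mhz):
    # One clock cycle lasts 1000/bus_mhz nanoseconds.
    return cl_cycles * 1000.0 / bus_mhz

print(cas_latency_ns(2, 100))  # 20.0 ns for CL2 on a 100MHz bus
print(cas_latency_ns(3, 133))  # about 22.6 ns for CL3 on a 133MHz bus
```

The conversion explains why a CL2 module on a faster bus can actually respond sooner, in absolute time, than a CL3 module on a slower one.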
Single Data-Rate Memory
Ordinary SDRAM memory is often called single data-rate (SDR) memory in the modern world to contrast it with, well, double data-rate (DDR) memory. It earns its new name because SDR memory modules transfer data at the same rate as the system bus clock rate. On each clock pulse, SDR transfers a single bit down each line of its bus.
SDR memory is rated by the speed of the system bus to which it attaches, although only two speeds are commonly recognized and sold: PC100 for computers with 100MHz system buses and PC133 for computers with 133MHz system buses. You can assume that unrated SDR memory is meant for 66MHz system buses, although the designation PC66 sometimes appears.
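The peak bandwidth of an SDR module follows directly from its name: one transfer per clock across a standard 64-bit (8-byte) module bus.

```python
def sdr_bandwidth_mbps(bus_mhz, bus_bytes=8):
    # One 8-byte transfer per clock on a standard 64-bit module.
    return bus_mhz * bus_bytes

print(sdr_bandwidth_mbps(100))  # 800 MB/s peak for PC100
print(sdr_bandwidth_mbps(133))  # 1064 MB/s peak for PC133
```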
Double Data-Rate Memory
Memory chips that use DDR technology are rated by their effective data speed (that is, twice their actual clock speed). Three speed ratings of chips are available. They are designated DDR 200, DDR 266, and DDR 333.
The speed ratings on DDR memory modules are based on peak bandwidth rather than bus speed. The result is a much more impressive number. For example, a DDR module on a 100MHz bus transfers data at an effective 200MHz, and its peak bandwidth is 1.6GBps. A module at this speed becomes a PC1600 memory module. These large figures serve to distinguish DDR from SDR modules, so you are less likely to confuse the two. Table 16.3 lists the rated speeds and bus speeds of DDR modules.
Note that both SDR and DDR memory have two buses—one for addressing and one for transferring data. In DDR, only the data bus operates at double speed. The address bus still works at the standard clock speed. Consequently, DDR memory only speeds up part of the memory cycle, the part when data actually moves. Because most memory transfers now occur in bursts, this handicap is not as substantial as it might seem—for some requests, DDR memory may need only one address cycle for a burst of a full page of memory, so the overhead is only one slow cycle for 4096 DDR memory transfers.
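The module names follow from the same bandwidth arithmetic, with two transfers per clock across the 64-bit (8-byte) data bus; the marketed names round the result:

```python
def ddr_module_rating(bus_mhz, bus_bytes=8):
    # Two transfers per clock across an 8-byte data bus;
    # the result is the peak bandwidth in MB/s.
    return bus_mhz * 2 * bus_bytes

print(ddr_module_rating(100))  # 1600 -> PC1600 (DDR 200)
print(ddr_module_rating(133))  # 2128 -> marketed as PC2100 (DDR 266)
print(ddr_module_rating(166))  # 2656 -> marketed as PC2700 (DDR 333)
```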
Quad Data-Rate Memory
The next generation of SDRAM is exactly what you might expect—quad data-rate (QDR) memory. The technology was developed through the joint efforts of Cypress, Hitachi, IDT, Micron, NEC, and Samsung. QDR chips have been produced, but they have not yet been applied to memory modules; specifications for QDR modules do not yet exist.
Rather than doubling up the data speed once again, QDR uses a double-ported design. Each chip has an input and an output port that can operate simultaneously, and each port uses double-clocking to transfer data. The result is that information can move through a QDR chip four times faster than ordinary SDRAM, hence the "quad" in the name. According to the QDR promoters, the double-ported design eliminates even the possibility of contention between the memory chip and its controller. In addition, QDR boosts the speed of the address bus to the same double rate as used by the data bus, giving QDR an automatic edge on DDR.
The developers of QDR maintain an informational Web site in support of the technology at www.qdrsram.com.
The next step up in memory speed comes from revising the interface between memory chips and the rest of the system. The leading choice is the Rambus design, developed by the company of the same name. Intel chose Rambus technology for its fastest system designs. An earlier incarnation of the technology was used in the Nintendo 64 gaming system.
The Rambus design has evolved since its beginnings. Rambus memory chips use an internal 2048-byte static RAM cache that links to the dynamic memory on the chip through a very wide bus that allows the transfer of an entire page of memory into the cache in a single cycle. The cache is fast enough that it can supply data at a 15-ns rate during hits. When the cache misses, the chip retrieves the requested data from its main memory and at the same time transfers the page containing it into the cache so that it is ready for the next memory operation. Because subsequent memory operations will likely come from the cache, the dynamic portion of the chip is free to be refreshed without stealing system time or adding wait states.
The Rambus operates like a small network, sending data in packets that can be up to 256 bytes long. The system uses its own control language to control memory and steer bytes around. The overhead from this system saps about 10 percent of the bandwidth from the peak transfer rate of the system.
You won't ordinarily deal with individual Rambus chips but rather complete memory modules. Because of the different technology used by Rambus chips, the modules work differently with your system, too. Whereas with conventional memory it is important to match the width of the memory bus to that of the rest of your computer, such matches are unnecessary with Rambus. The memory controller reorganizes the data it pulls from Rambus memory to make it fit the bus width of the host computer. Consequently, a Rambus module with a 16-bit bus works in a computer with a 64-bit bus. Usually, however, Rambus systems put two or more modules in parallel. The goal of this design is to increase memory speed or bandwidth.
Rambus modules don't link to a computer system like standard memory. Instead, the design uses a special high-speed bus (hence, the origin of the Rambus name). In current Rambus modules, memory connects to the chipset in your computer through two high-speed buses. One of them, called the request bus, is used to send control and address information to memory. This bus uses an eight-bit connection. In addition, the data bus is a 16-bit-wide channel that transfers the data for reading from and writing to memory. The two buses are synchronized, although the data bus transfers bits on both edges of the clock, much as is done with DDR memory. The speed rating of Rambus modules is usually given by the data rate on the data bus—twice the actual clock frequency of the overall memory system. Rambus modules also have a third, low-speed bus that's used only for initialization and power management.
The Rambus itself—the lines that carry the request bus, data bus, and timing signals—is designed as a transmission line like those carrying radio signals. Transmission lines are sensitive to changes in tuning, much like the rabbit-ear antennae on portable televisions that, if not perfectly adjusted, make a scene look like a blizzard across the prairie. The Rambus transmission line loops through each module in a Rambus circuit before stopping at the line termination at its end (which "terminates" the signal by absorbing it all and not letting any reflect back down the transmission line to interfere with other signals).
This design has important practical implications. Because the Rambus signals loop through each module, in current computers all Rambus sockets must be filled with memory modules. Most computer-makers design their systems to accommodate two Rambus modules per Rambus circuit. Typically, computer-makers install one memory module at the factory to allow you a socket for future memory expansion. Because even the Rambus signal must loop through unused sockets, the second socket in each Rambus circuit in most new computers gets filled with a "dummy" module. The memory system will not work unless dummy modules (or memory upgrades) are properly installed.
To increase memory speed and capacity, most of today's computers use two Rambus circuits. This doubles both the total bandwidth of the memory system and the potential total memory in the system. As a consequence, computers that use today's 16-bit Rambus modules must expand their memories two modules at a time.
In the future, Rambus modules will use both 32-bit and 64-bit bus widths in addition to the 16-bit modules currently in use. Modules with 32-bit buses will be essentially the same as two 16-bit modules on one card, with two request buses and two data buses. The 64-bit modules will use a different design with a single shared request bus and four data buses. These wider-bus modules are incompatible with today's computers and their memory systems. Consequently, Rambus designed the module connectors to be physically incompatible, too, so you cannot inadvertently slide the wrong kind of Rambus module into your computer.
Current Rambus modules for personal computers are designed to operate at 800MHz or 1066MHz. Because of the double clocking of the data buses in the modules, 800MHz modules perfectly match 400MHz system buses, and 1066MHz modules match 533MHz system buses. In older computers using Rambus, 100MHz system buses are best matched by 800MHz modules, and 133MHz system buses best match 1066MHz modules.
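The match between module and system-bus speeds is again bandwidth arithmetic. A Rambus data bus is 2 bytes wide, and the MHz rating already counts both clock edges:

```python
def rambus_channel_mbps(data_rate_mhz, bus_bytes=2):
    # The rated MHz is the data rate, so no further doubling is needed.
    return data_rate_mhz * bus_bytes

single = rambus_channel_mbps(800)  # 1600 MB/s for one PC800 channel
dual = 2 * single                  # 3200 MB/s with two channels in parallel
print(single, dual)
```

That 3.2GBps dual-channel figure is what lets a pair of 800MHz modules keep pace with a 400MHz system bus moving 8 bytes per transfer.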
Memory access problems are particularly prone to appear in video systems. Memory is used in display systems as a frame buffer, where the onscreen image is stored in digital form with one unit of memory (be it a bit, byte, or several bytes) assigned to each element of the picture. The entire contents of the frame buffer are read from 44 to 75 times a second as the stored image is displayed on the monitor screen. All the while, your computer may be attempting to write new picture information into the buffer to appear on the screen.
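The display side alone imposes a steady read load on the frame buffer. Here is a back-of-the-envelope figure for a hypothetical 1024x768 true-color mode refreshed at 75Hz (real modes and color depths vary):

```python
# Hypothetical display mode; three bytes per pixel for true color.
width, height, bytes_per_pixel, refresh_hz = 1024, 768, 3, 75

refresh_read_bandwidth = width * height * bytes_per_pixel * refresh_hz
print(refresh_read_bandwidth)  # 176947200 bytes/s, about 177 MB/s just for refresh
```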
With normal DRAM chips, these read and write operations cannot occur simultaneously. One has to wait for the other. The waiting negatively affects video performance, your system's speed, and your patience.
The wait can be avoided with special memory chips that have a novel design twist—two paths for accessing each storage location. With two access paths, this memory acts like a warehouse with two doors—your processor can push bytes into the warehouse through one door while the video system pulls them out through another. Strictly speaking, this memory can take two forms: True dual-ported memory allows simultaneous reading and writing; video memory chips (often called VRAM for video random access memory) give one access port full read and write random access, while the other port only allows sequential reading (which corresponds to the needs of scanning a video image).
The chief disadvantage of VRAM technology is that it is more expensive because it requires more silicon (about 20 percent more area on the chip die). It more than makes up for its higher cost with its speed advantage. Using VRAM can speed up video systems by as much as 40 percent.