The hardware that changes your pulsing digital computer's thoughts into the signals that can be displayed by a monitor is called the display adapter. Over the years, the display adapter has itself adapted to the demands of computer users, gaining color and graphics capabilities as well as increasing its resolution and range of hues. In most machines, the display adapter is a special expansion board that serves primarily to make graphic images; hence, the display adapter is often called a graphics board. Because the graphics board sends out signals in a form that resembles (but is not identical to) that of your home video system, it is often termed a video board. Notebook computers lack video boards—they typically lack any conventional expansion boards at all—but they include display adapter circuitry on their motherboards instead.
No matter its name, the function of display adapter circuitry is the same—control. The adapter controls every pixel that appears on your computer display. But there is one more essential element. Just any control won't do. Give a room full of monkeys control of a million light dimmers (you'll need a mighty large room or a special breed of small, social simians), and the resulting patterns might be interesting—and might make sense at about the same time your simians have completed duplicating the works of Shakespeare. The display adapter circuitry also organizes the image, helping you make sense of the chaos of digital pulses in your computer. It translates your computer's thoughts into an image that makes sense to you.
The video circuitry of your computer, whether on a dedicated display adapter or part of the motherboard circuitry in the chipset, performs the same functions. In its frame buffer (or in main memory in systems using Unified Memory Architecture), it creates the image your computer will display. It then rasterizes the memory-mapped image and converts the digital signals into an analog format compatible with your monitor.
The modern video board usually has five chief circuits that carry out these functions, although some boards lack some of these elements. A graphics accelerator chip builds the image, taking commands from your software and pushing the appropriate pixel values into the frame buffer. Memory forms the frame buffer that stores the image created on the board. A video controller reads the image in the frame buffer and converts it to raster form. A RAMDAC then takes the digital values in the raster and converts them into analog signals of the proper level. Finally, a video BIOS holds extension code that provides the video functions your system needs as it boots up.
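The division of labor among these circuits can be sketched in a few lines of Python. The sketch is purely illustrative; every name in it is invented here, and a real board does all this in dedicated silicon rather than software:

```python
# Illustrative sketch of the video-board pipeline: accelerator -> frame
# buffer -> video controller -> RAMDAC. A tiny 4x2 "screen" stands in
# for the real thing.

WIDTH, HEIGHT = 4, 2

# Memory: the frame buffer holds one 24-bit color value per pixel.
frame_buffer = [[0x000000 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def accelerator_fill_rect(color, x0, y0, x1, y1):
    """A drawing command carried out in 'hardware' instead of by the CPU."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            frame_buffer[y][x] = color

def controller_raster():
    """The video controller reads the buffer in scan order (rasterizes it)."""
    for row in frame_buffer:
        for pixel in row:
            yield pixel

def ramdac(pixel):
    """The RAMDAC converts each digital value to three analog levels (0.0-1.0)."""
    return (((pixel >> 16) & 0xFF) / 255,
            ((pixel >> 8) & 0xFF) / 255,
            (pixel & 0xFF) / 255)

accelerator_fill_rect(0xFF0000, 0, 0, 2, 1)   # a red rectangle at top-left
signal = [ramdac(p) for p in controller_raster()]
print(signal[0])  # -> (1.0, 0.0, 0.0): full red, no green, no blue
```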
Of all the chips on a video board, the most important is the graphics accelerator. The chip choice here determines the commands the board understands—for example, whether the board carries out 3D functions in its hardware or depends on your computer to do the processing of 3D effects. The speed at which the accelerator chip operates determines how quickly your system can build image frames. This performance directly translates into how quickly your system responds when you give a command that changes the screen (for example, dropping down a menu) or how many frames get dropped when you play back a video clip. The accelerator also limits the amount and kind of memory in the frame buffer as well as the resolution levels of the images your computer can display, although other video board circuits can also impose limits.
That said, the accelerator is optional, both physically and logically. The oldest display adapters lack accelerators; hence, they are not "accelerated," which means your computer's microprocessor must execute all drawing instructions. In addition, even boards with accelerators may not accelerate all video operations. The board may lack the knowledge of a specific command to carry out some video task, or the board's driver software may not take advantage of all its features. In such circumstances, the drawing functions will be emulated by a Hardware Emulation Layer (often abbreviated HEL) in your operating system—which means your microprocessor gets stuck with the accelerator's drawing work.
As computers have evolved, the need for graphics processing has shifted back and forth between the accelerator and your system's microprocessor. For example, the MMX instructions of newer Pentium microprocessors overlap the functions of graphics accelerators. Streaming SIMD extensions add performance to most graphics operations, complementing MMX. In Unified Memory Architecture (UMA) computers, these technologies can take the place of a dedicated graphics accelerator. In computers with frame buffers, these features work in conjunction with the graphics accelerator. They speed up your computer's ability to calculate what images look like—for example, decompressing stored images or calculating wire-frames for your drafting program. But the final work, actually painting the images that will appear on your screen, relies on the graphics accelerator.
The graphics accelerator is an outgrowth of an older chip technology—the graphics coprocessor. An early attempt to speed up the display system, the graphics coprocessor was introduced as a supplemental microprocessor optimized for carrying out video-oriented commands.
The graphics coprocessor added speed in three ways. By carrying out drawing and image-manipulation operations without the need for intervention by the microprocessor, the coprocessor freed up the microprocessor for other jobs. Because the graphics coprocessor was optimized for video processing, it could carry out most image-oriented operations faster than could the microprocessor, even if the microprocessor was able to devote its full time to image processing. The graphics coprocessor also broke through the bus bottleneck that was (at the time of the development of graphics coprocessor technology) choking video performance. When the microprocessor carried out drawing functions, it had to transfer every bit bound for the monitor through the expansion bus—at the time, the slow ISA bus. The coprocessor was directly connected to the frame buffer and could move bytes to and from the buffer without regard to bus speed. The microprocessor only needed to send high-level drawing commands across the old expansion bus. The graphics coprocessor would carry out the command through its direct attachment to the frame buffer.
The graphics coprocessor grew out of the workstation market. Microprocessor-makers altered their general-purpose designs into products that were particularly adept at manipulating video images. Because the workstation market was multifaceted, with each different hardware platform running different software, the graphics coprocessor had to be as flexible as possible—programmable just like its microprocessor forebears.
These coprocessors joined the computer revolution in applications that demanded high-performance graphics. But the mass acceptance of Windows made nearly every computer graphics intensive. The coprocessor was left behind as chipmakers targeted the specific features needed by Windows and trimmed off the excess—programmability. The result was the fixed-function graphics coprocessor, exactly the same technology better known now as the graphics accelerator.
The most recent evolution of graphics acceleration technology has produced the 3D accelerator. Rather than some dramatic breakthrough, the 3D accelerator is a fixed-function graphics coprocessor that includes the ability to carry out the more common 3D functions in its hardware circuitry. Just as an ordinary graphics accelerator speeds up drawing and windowing, the 3D accelerator gives a boost to the 3D rendering. Nearly all of today's video boards are equipped with a 3D accelerator. The technology is even built in to many motherboard chipsets.
As with the microprocessors, graphics and 3D accelerators come in wide varieties with different levels of performance and features. Each maker of graphics accelerators typically has a full line of products, ranging from basic chips with moderate performance designed for low-cost video boards to high-powered 3D products aimed at awing you with benchmark numbers far beyond the claims of their competitors (and often, reality).
The performance and output quality of a graphics accelerator depends on a number of design variables. Among the most important of these are the width of the registers it uses for processing video data, the amount and technology of the memory it uses, the ability of the chip to support different levels of resolution and color, the speed rating of the chip, the bandwidth of its connection to your computer and display, and the depth and extent of its command set, as well as how well those commands get exploited by your software.
Graphics accelerators work like microprocessors dedicated to their singular purpose, and internally they are built much the same. The same design choices that determine microprocessor power also affect the performance of graphics accelerator chips. The internal register width of a graphics accelerator determines how many bits the chip works with at a time. As with microprocessors, the wider the registers, the more data the chip can manipulate in a single operation.
The basic data type for modern graphics operations is 32 bits—that's the requirement of 24-bit True Color with an alpha channel. Many accelerators at least double that and can move pixels two (or four) at a time in blocks. Today's best are full 128-bit processors, able to operate on multiple pixels at a time.
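A little arithmetic shows what register width buys. Assuming the 32-bit pixel format described above (24 bits of True Color plus an eight-bit alpha channel), the number of pixels an accelerator can move per operation follows directly from its register width:

```python
# Pixels handled per operation for several register widths, assuming
# the 32-bit RGBA pixel format (24-bit color plus 8-bit alpha).
PIXEL_BITS = 24 + 8

for register_bits in (32, 64, 128):
    pixels_per_op = register_bits // PIXEL_BITS
    print(f"{register_bits}-bit registers: {pixels_per_op} pixel(s) per operation")
# A full 128-bit processor moves four 32-bit pixels at a time.
```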
Because the graphics or 3D accelerator makes the video circuitry of your computer a separate, isolated system, concerns about data and bus widths elsewhere in your computer are immaterial. The wide registers in graphic accelerators work equally well regardless of whether you run 16-bit software (Windows 95 through Windows Me) or 32-bit software (Windows NT, Windows 2000, and Windows XP), no matter what microprocessor you have or what bus you plug your video board into.
The design of a graphics accelerator also sets the maximum amount of memory that can be used in the frame buffer, which in turn sets upper limits on the color and resolution support of a graphics accelerator. Other video board circuit choices may further constrain these capabilities. In general, however, the more memory, the higher the resolution and the greater the depth of color the accelerator can manage.
The same graphics or 3D accelerator chip may deliver wildly different performance when installed in different video boards, even boards with substantially similar circuitry. One of the chief reasons for performance differences among different brands of video board is not in the hardware but the software support. Drivers can radically alter the performance of a given accelerator chip. After all, the chip only processes instructions. If the instructions are optimized, the performance of the chip will be optimum. To be useful at all, a video board must have drivers to match the operating system you want to use.
The primary job of the video controller in desktop computers that use picture tubes is to serialize the data in display memory. The conversion often is as convoluted as a video game maze. The resemblance between the memory map and the onscreen image is only metaphoric. The rows and columns by which the frame buffer is organized have no relationship to the rows and columns of pixels on your monitor screen. The bytes of video information are scattered between a handful of memory chips or modules, sliced into several logical pages, and liberally dosed with added-in features such as cursors and sprites. Somehow, all the scattered bytes of data must get organized and find their way to the monitor. In addition, the monitor itself must be brought under the control of the computer, synchronized in two dimensions.
The video controller generates the actual scanning signals. Using the regular oscillations of a crystal, the controller generates a dot clock, a frequency corresponding to the rate at which it will scan the data for the pixels to appear on the screen. The controller divides down this basic operating frequency to produce the horizontal synchronizing frequency, and from that, the vertical synchronizing frequency. From these frequencies, the controller can create a monitor signal that lacks only image data. In real time, the controller scans through the memory addresses assigned to each pixel on the screen in the exact order in which each pixel will appear on the screen. The time at which each address gets read exactly matches the time and position the pixel data appears in the final video output signal.
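The relationship among these frequencies is easy to work out, here running the chain backward from the refresh rate. The numbers below assume a hypothetical 1024-by-768 display refreshed 75 times per second, and they ignore the blanking intervals that pad real timings (actual dot clocks run substantially higher to leave time for the electron beam to retrace):

```python
# Working backward through the video controller's division chain,
# ignoring blanking intervals. Display mode is illustrative.

width, height, refresh = 1024, 768, 75    # pixels, pixels, Hz

vertical_sync   = refresh                  # full screens per second: 75 Hz
horizontal_sync = vertical_sync * height   # lines per second: 57,600 Hz
dot_clock       = horizontal_sync * width  # pixels per second

print(f"dot clock = {dot_clock / 1e6:.1f} MHz")  # -> dot clock = 59.0 MHz
```

Dividing the dot clock by the pixels per line recovers the horizontal synchronizing frequency, and dividing that by the lines per screen recovers the vertical synchronizing frequency, just as the controller does in hardware.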
In modern analog computer video systems, the controller doesn't read memory directly. Rather, the digital data scanned by the controller from memory gets routed first through the RAMDAC, which converts the data from digital to analog form. The video controller then adds the analog data to the scanning signals to create the video signal that gets passed along to your monitor.
The video controller may draw pixel data from places other than the frame buffer. Its circuits generate the cursor that appears on the screen in text modes. It may also add in the bit-values associated with a sprite. By rerouting its scan, it can make hardware windows.
In the language of computer engineering, the part of the video circuitry that performs the actual scanning operation is called the CRT controller, because the signals it generates actually control the sweep of the electron beam in the CRT or picture tube of your monitor.
In the first computers, the CRT controller was a separate integrated circuit, the 6845 made by Motorola. This chip originally was not designed for computers but as a generalized scan-maker for any sort of electronic device that might plug in to a television set or monitor. The engineers who designed the first computers chose it because it was a readily available, off-the-shelf product that made the development of video boards relatively easy and cheap. Long ago most video board manufacturers switched to custom-designed and manufactured CRT controllers, often part of other computer video circuitry. Even the most advanced of these chips emulate the 6845 in their basic operating modes. When software calls upon the video system to operate in one of the original computer's modes to display text or low-resolution graphics, all CRT controllers react the same way to basic hardware instructions.
Modern computer monitors use analog signals so that the signals supplied them do not limit the range of color they can display. The data stored in your computer's frame buffer is digital because…well, everything in your computer is digital. Moreover, no convenient form of analog memory is available. As a result of this divergence of signal types—digital in and analog out—your video board must convert the digital data into analog form compatible with your monitor. The chip that performs this magic is termed a digital-to-analog converter. Sometimes it may be referred to as a RAMDAC—RAM for random access memory—because its digital data originates in memory.
RAMDACs are classified by the number of digital bits in the digital code they translate. The number of bits translates into the number of signal levels that can appear in its output signal. For example, an eight-bit RAMDAC converts the levels encoded in eight-bit digital patterns into 256 analog levels. In a monochrome system, each one of those levels represents a shade of gray.
In color systems, each primary color or channel requires a separate DAC, a total of three. Video RAMDACs usually put all three converter channels into a single package, although some older video boards may use separate DAC chips for each color channel. Total up the number of bits across all three channels of each RAMDAC, and you'll get the number of bit-planes of color a system can display—its palette. Most of the RAMDACs in today's video systems have three eight-bit channels, allowing them to generate the 16.7 million hues of True Color.
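The palette arithmetic is straightforward. With eight bits per channel and three channels, the figures quoted above fall out directly:

```python
# Palette size for a RAMDAC with three 8-bit channels.
bits_per_channel = 8
levels = 2 ** bits_per_channel   # analog levels per primary color
palette = levels ** 3            # red x green x blue combinations

print(levels)    # -> 256
print(palette)   # -> 16777216, the "16.7 million hues" of True Color
```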
RAMDACs are also speed rated. The RAMDAC chip must be fast enough to process each pixel that is to be displayed on the screen. The higher the resolution of the image you want to display, the higher the speed required from your RAMDAC. The required speed corresponds directly to the dot-clock (the number of pixels on the screen times the refresh rate). To accommodate high-resolution displays, some RAMDACs are rated 200MHz and higher. They don't have to enter the gigahertz stratosphere inhabited by microprocessors, because such speeds are well beyond the needs of any practical video resolution.
On a video board, memory mostly means frame buffer. Every video board includes a good dose of some kind of RAM for holding the bitmap of the image that appears on the screen. In addition, 3D accelerators need memory for their special operations. Double-buffering, as the name implies, doubles the memory needs by putting two separate frame buffers to work. Z-buffering and working memory for the calculations of the 3D accelerator also increase the memory needs of the video board.
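A quick calculation shows how these demands stack up. The figures below are illustrative, assuming a 1024-by-768 screen with 32-bit color, double-buffering, and a 16-bit z-buffer; a real board needs texture storage and working memory on top of this:

```python
# Rough video-memory budget for a 3D board at one display mode.
# All mode choices here are illustrative assumptions.

width, height = 1024, 768
bytes_per_pixel = 4      # 32-bit color (True Color plus alpha)
z_bytes_per_pixel = 2    # 16-bit depth value per pixel

frame  = width * height * bytes_per_pixel   # one frame buffer: 3 MB
double = 2 * frame                          # front + back buffer: 6 MB
zbuf   = width * height * z_bytes_per_pixel # depth buffer: 1.5 MB

total = double + zbuf
print(f"{total / 2**20:.1f} MB")  # -> 7.5 MB
```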
The requirements of the graphics or 3D accelerator determine the type of memory required. The manufacturer of the video board sets the amount actually included on the board. Some manufacturers provide sockets to allow you to later upgrade to increase the resolution or color depth capabilities of your video system. As memory prices have fallen to the point that sockets are an appreciable fraction of the cost of adding memory, manufacturers have resorted to providing separate board models with differing memory dosages, all soldered down and not upgradable.
Because of the low prices of memory, most display adapters include substantially more memory than even the largest frame buffers require. The memory on display adapters no longer limits the resolution capabilities of the board. But more memory allows the display adapter to implement more speed-enhancing technologies. In other words, a display adapter with more memory generally will be faster.