The signals between computers and their monitors don't follow the same standards used by broadcast television and studio video systems. Nevertheless, these standard video signals have become an intrinsic part of both what computers do and what you expect from your computer. After all, computers are now regularly used to produce videos, including those for presentation on television, even on networks. At the other end of the channel, computers display video images—from a television using a TV adapter board or from a DVD drive. Video images inevitably confront the computer display system, and dealing with them can be a challenge.
Many display adapters generate conventional video signals along with those meant for your computer display. Add a video-capture board to your computer, and you can turn conventional video signals into digital form so you can edit home movies or package them on CD or DVD. You can even watch television on your computer monitor using a technique called video overlay.
When it comes to images, the standard most widely used stares you in the face, literally, for several hours a day. The unblinking eye of the television set defines the most widely used image communication system in the world. Considering when the basic television standards were created, their longevity has been amazing, particularly compared to the short tenure of computer standards—the basic television signal was defined half a century ago. Only recently has it come under threat by digital and high-definition technologies. (The Federal Communications Commission hopes to turn off all conventional television signals by 2007, substituting a new all-digital television standard.)
First a bit of definition. The word video means "I see" in Latin. The word television is a hodge-podge derived from the Greek for "distant" and Latin for "sight." Television is what is broadcast or transmitted over a distance. Video is up close and personal, the signals inside a studio or your home.
When we talk of "video" among computers, however, we mean an electrical signal that encodes an image in raster form, hence the term video board that's often used instead of graphics adapter. When people involved with television speak of video, they mean a particular form of this signal, one with well-defined characteristics. A television transmitter modulates a carrier wave with this video signal to make the broadcasts to which you tune in. Video signals range in frequency from zero to a half-dozen megahertz. Television signals start at 54MHz (the bottom of VHF channel 2) and extend upward beyond 800MHz. Television sets tune in television signals, receiving them on their antenna inputs. Monitors display video signals.
Although one standard currently dominates video signals in the United States and Japan, other standards are used elsewhere in the world. In addition, a secondary standard termed S-video appears in high-quality video applications.
The most common form of video wears the designation NTSC, which stands for National Television Standards Committee, an industry organization formed in the early 1950s to create a single signal standard for color television. At the time, CBS had been broadcasting for over a year with an electromechanical color system that essentially spun a color wheel in front of the camera and a matching wheel in front of the monitor. RCA, owner of rival NBC, proposed an all-electronic alternative. The RCA system had the advantage that it was backward compatible with black-and-white television sets, whereas the CBS system was not. The NTSC was formed chiefly to put an impartial stamp of approval on the RCA system.
In the RCA/NTSC system, each pixel gets scanned in each of the three primary colors. Although studio equipment may pass along the three colors separately like the RGB signals in computers, for broadcast they are combined together with synchronizing signals to create NTSC video.
The magic is in the combining process. The NTSC system packages three channels of color into one using some clever mathematical transformations.
First, it transforms the color space. For compatibility with monochrome, NTSC combines all three color signals together. This produces a signal called luminance, which encodes all the brightness information in a television image. The luminance signal is essentially a monochrome signal and produces an image entirely compatible with black-and-white television sets. The name of the luminance signal is often abbreviated as Y.
Next, the NTSC system creates two signals encoding color, or rather the differences between the color signals and the luminance signal. One signal (called I in the NTSC system) encodes the difference between luminance and the red signal, and another (called Q) encodes the difference between luminance and the blue signal. (Strictly speaking, I and Q are rotated combinations of these two differences, but the principle is the same.) Subtract the first from luminance, and you get red back. Subtract the other, and you get blue. Green is what remains once the receiver accounts for the weighted contributions of red and blue to luminance.
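The transformation just described can be sketched in a few lines of Python. This is an illustrative sketch, not broadcast-grade code: the luminance weights are the standard NTSC values, and the two differences are taken literally as (Y − R) and (Y − B), as in the text.

```python
# Sketch of the NTSC color-space transform described above.
# The luminance weights 0.299/0.587/0.114 are the standard NTSC values;
# the plain (Y - R) and (Y - B) differences follow the text -- the real
# I and Q axes are rotated combinations of these differences.

def rgb_to_y_and_differences(r, g, b):
    """Combine RGB into luminance Y plus two color-difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # monochrome-compatible luminance
    return y, y - r, y - b                   # (Y, Y-R, Y-B)

def differences_to_rgb(y, y_minus_r, y_minus_b):
    """Recover RGB: subtract each difference from Y, then solve for green."""
    r = y - y_minus_r
    b = y - y_minus_b
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green is the weighted remainder
    return r, g, b
```

Encoding any RGB triple and decoding it again returns the original values, which is the whole point: no color information is lost in the transform itself.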
Next, the NTSC system combines the two difference signals into a single signal that can carry all the color information, called chrominance (abbreviated as C). Engineers used quadrature modulation to combine the two signals into one. The result was that colors were encoded into the chrominance signal as different phase angles of the signal.
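A minimal sketch of the quadrature idea, with illustrative sample values: one component rides on the cosine of the carrier, the other on the sine, and sampling the sum at two carrier phases a quarter-cycle apart separates them again.

```python
import math

# How quadrature modulation packs two signals onto one carrier: I rides on
# the cosine, Q on the sine.  The sample values used here are illustrative,
# not taken from any broadcast specification.

def chroma_sample(i, q, phase):
    """One sample of the combined chrominance signal at a given carrier phase."""
    return i * math.cos(phase) + q * math.sin(phase)

# At phase 0 the sample is pure I; a quarter-cycle later (90 degrees) it is
# pure Q.  A decoder that knows the carrier phase can therefore pull the two
# components back apart, even though they traveled as a single signal.
```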
Together the luminance and chrominance signals map every color onto a polar chart. The phase of the chrominance signal encodes the hue as an angle around the chart, and its amplitude encodes the saturation as the distance from the origin. Luminance carries the brightness of each point separately.
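In code, reading a color off that chart is just a rectangular-to-polar conversion. This is an illustrative sketch; the axis scaling of real NTSC chrominance differs.

```python
import math

# Polar reading of the chrominance components described above:
# phase angle -> hue, amplitude -> saturation (distance from the origin).

def chroma_polar(i, q):
    """Convert I/Q chrominance components to (hue in degrees, saturation)."""
    hue = math.degrees(math.atan2(q, i))  # angle around the color chart
    saturation = math.hypot(i, q)          # distance from the chart's origin
    return hue, saturation
```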
Finally, to fit the chrominance signal in where luminance should only fit, the NTSC engineers resorted to putting chrominance on a subcarrier. That is, they modulated a carrier wave with the chrominance signal and then added it to the luminance signal. Although the subcarrier had much less bandwidth than the main luminance channel, the process was effective because the human eye is less sensitive to color differences than brightness differences.
The NTSC chose a frequency of 3.58MHz as the color subcarrier frequency. The chrominance is a suppressed-carrier, quadrature-modulated signal on that 3.58MHz carrier. To conserve bandwidth and avoid interference with the luminance signal, the NTSC process eliminates the carrier itself and trims part of one sideband of the chrominance signal after the modulation process.
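The 3.58MHz figure is itself a rounded value. The exact subcarrier falls out of two standard NTSC relationships: the horizontal line rate is the 4.5MHz picture-to-sound carrier spacing divided by 286, and the subcarrier sits at 455/2 times that line rate. The arithmetic:

```python
# Worked arithmetic behind the nominal "3.58MHz" subcarrier.  Both ratios
# (286 and 455/2) are the standard NTSC relationships tying the subcarrier
# to the line rate and the 4.5MHz sound-carrier spacing.

SOUND_SPACING_HZ = 4_500_000
line_rate_hz = SOUND_SPACING_HZ / 286      # about 15,734.27 lines per second
subcarrier_hz = line_rate_hz * 455 / 2     # about 3,579,545 Hz

print(f"line rate:  {line_rate_hz:,.2f} Hz")
print(f"subcarrier: {subcarrier_hz:,.2f} Hz")
```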
The NTSC process has two drawbacks. First, the luminance signal must be cut off before it reaches 3.58MHz to avoid interfering with the subcarrier. This frequency cap limits the highest possible frequencies in the luminance signal, which means the sharpness of the image falls short of what the channel's full video bandwidth (about 4.2MHz) would allow. Second, chrominance carries even less detail.
The basic frame rate of a color video signal is about 29.97 frames per second. Each frame comprises two interlaced fields, so the field rate is 59.94Hz. Each frame is made from 525 lines, of which about 480 are visible; the rest are devoted to vertical retrace. Ideally, a studio image would have about 640 pixels across a line. However, the luminance bandwidth limit imposed by the 3.58MHz color subcarrier constrains horizontal resolution to roughly 400 to 450 pixels. Although that might sound paltry, a good home VCR can store images with only about half that resolution.
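The rates quoted above are interlocked, and a quick sketch shows how. The 1.001 divisor is the standard adjustment that slowed color NTSC slightly below the old 30Hz rate so the color subcarrier and sound carrier would not beat against each other.

```python
# How the rates in the paragraph relate to one another.

frame_rate = 30 / 1.001                    # about 29.97 frames per second
field_rate = frame_rate * 2                # two interlaced fields -> 59.94 Hz
lines_per_frame = 525
line_rate = frame_rate * lines_per_frame   # about 15,734 lines per second
```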
Black-and-white signals are different. They use a 30Hz frame rate (each frame still split into two fields) and lack the color subcarrier. As a result, they can be sharper than color signals because the entire luminance bandwidth can be devoted to the image.
Instead of NTSC, most of the world uses a color system called PAL, which stands for Phase Alternating Line. France and most of the nations that once formed the USSR, such as Russia and Ukraine, use a system called SECAM, which stands for Séquentiel Couleur à Mémoire (in English, sequential color with memory). These video standards—and the equipment that follows them—are mutually incompatible.
The constraints of NTSC color are required because of the need for backward compatibility. The color signal had to fit into exactly the same bandwidth as black and white. In effect, NTSC gives up a bit of black-and-white resolution to fit in the color information.
Video signals that never make it to the airwaves need not suffer the indignities required by the NTSC broadcast standard. Studio signals have always transcended broadcast standards—studio RGB signals have full-bandwidth, high-resolution (640-pixel) images in each of their three colors. To raise home viewing quality, VCR designers came up with a way to get more quality in color signals by avoiding the NTSC process.
The part of the NTSC process that most limits visual quality is the squeezing of the color signal onto its subcarrier. By keeping luminance and chrominance as two separate signals, the bandwidth limitation can be sidestepped. This form of video is termed S-video, short for separate video. High-end VCRs, camcorders, and monitors often use S-video signals.
Other than not modulating chrominance onto a subcarrier, the color-encoding method used by S-video is identical to that of NTSC. The three RGB color signals are combined into luminance and chrominance using exactly the same formulae. Although you cannot substitute one signal for the other, the innards of S-video monitors need not be radically different from those of NTSC displays. The level of quality, however, is often quite visibly different: S-video components may have twice the horizontal resolution of composite video.
Note that once a signal is encoded as NTSC, information is irretrievably lost. There's no point in decoding an off-the-air television signal into S-video. S-video helps only when your signal source has never been NTSC encoded.
To put active video in a window on your computer screen, many graphics adapters use a technique called video overlay, which allows them to do most of the image processing in hardware rather than software (where it would slow down the rest of your system).
The video overlay process borrows from an old television technology called chroma keying, which substitutes one image for a key part of another. Typically the key (an area in the image being shown on the screen) is identified by its color, or chroma. Hardware then substitutes another image for the key color. In computers, the graphics adapter paints the television image into the key area (the window) on the screen. Traditionally in television, the color of choice is a sky blue, preferred because it's optically the opposite of average Caucasian flesh tones, so it's least apt to make parts of people disappear from the screen. Sometimes the process runs in reverse: a weather reporter stands in front of a chroma-key blue screen, and the background gets filled with a weather map or satellite photo. Television people call this technique blue screening.
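A toy version of the keying step, assuming frames represented as nested lists of RGB tuples. The key color, tolerance, and frame layout are illustrative choices, not part of any standard.

```python
# Toy chroma-key pass over an RGB frame, as described above: wherever a
# foreground pixel matches the key color, substitute the pixel from the
# second image.  KEY and the tolerance are illustrative values.

KEY = (0, 120, 255)   # a stand-in "chroma-key blue"

def chroma_key(foreground, background, key=KEY, tol=10):
    """Return a frame where key-colored foreground pixels show the background."""
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg_px, bg_px in zip(fg_row, bg_row):
            is_key = all(abs(c - k) <= tol for c, k in zip(fg_px, key))
            row.append(bg_px if is_key else fg_px)
        out.append(row)
    return out
```

Real overlay hardware does this comparison per pixel at video rates, which is exactly why the work is done in silicon rather than in software.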
In video overlay, the driver software uses standard Windows instructions to paint a window on the screen, keyed to a special value (it need not be blue). The graphics adapter intercepts the signals destined for your monitor and substitutes the video signal where it finds the keyed windows. Your software and even your computer's microprocessor never need to deal with processing the video. In fact, the video never even makes it as far as the expansion bus. It is isolated on the overlay board. You get full-motion video on your screen with virtually no impact on the performance of your computer.
Gathering up video images so that they can be used by your programs—video capture—requires hardware that combines aspects of a more traditional video board and a digital video camera. Traditional video images are analog signals and require an analog-to-digital converter (A-to-D converter) to put them in a form usable by your computer.
The A-to-D converter works by sampling the voltage level of the video at each pixel position of the video image and assigning a digital value to it. In most systems, the signal is first decoded into your computer's standard RGB format to determine the strengths of individual colors before sampling. Typically the image gets stored in a buffer and is sampled from the buffer rather than from the real-time video signal. The buffer helps bridge between the different timings and formats of the video signal and its target digital form.
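The sampling step itself is simple to sketch. Assuming an analog level normalized to a 0.0 to 1.0 range and 8-bit output (both illustrative assumptions), each voltage maps to the nearest of 256 digital codes.

```python
# Sketch of the A-to-D quantization step described above.  The 0.0-1.0
# input range and 8-bit depth are assumed for illustration.

def quantize(voltage, bits=8, v_max=1.0):
    """Map an analog level to the nearest of 2**bits digital codes."""
    levels = (1 << bits) - 1                 # 255 codes above zero
    clamped = min(max(voltage, 0.0), v_max)  # out-of-range input clips
    return round(clamped / v_max * levels)

samples = [0.0, 0.25, 0.5, 1.0, 1.3]         # 1.3 V clips to full scale
codes = [quantize(v) for v in samples]       # [0, 64, 128, 255, 255]
```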