
Video Technology

As far as most of your computer is concerned, its job is done when it writes an image to the frame buffer. After all, once you've got all the pixels arranged the way you want them in memory, shipping them off through a cable to your display should be a simple matter. It isn't. The image must be transformed from its comparatively static position in screen memory to a signal that can traverse a reasonable interface—after all, the cable alone for a parallel interface capable of moving data for nearly a million pixels would likely be thicker than the average computer. Consequently, the frame-buffer image must be converted into a stream of serial data for transmission.

The need for this kind of image transformation became evident when television was invented. The system developed by Philo T. Farnsworth (one of several folks credited with inventing television) relied on scanning images that naturally produced a serial data stream. Although television has improved somewhat since the 1920s when Farnsworth was developing it (at least technical standards have improved—the quality of entertainment is another matter entirely), the transmission system remains essentially the same for today's analog television signals as well as the connections between computers and their displays.


Serializing images for the first television system was inherent in the way the cameras captured the images using the technique of raster scanning. The first television cameras traced an electron beam across the projection of an image on a special material that changed its electrical characteristics in response to the bright and dark areas of a scene. By focusing the electron beam, the camera could detect the brightness of a tiny spot in the scene. By dividing the scene into dots the size of its sensing spot and rapidly examining each one, the camera could gather all the data in the image.

Although there's no naturally required order to such an examination, the inventors of television looked to what we humans have been doing for centuries—reading. We read text one line at a time, progressing from left to right across each line. The various inventors of television (and there are many competing claims to the honor) followed the same pattern, breaking the image into lines and scanning the dots of each line from left to right.

To make a scan in a classic television camera, the electron beam sweeps across the image under the control of a combination of magnetic fields. One field moves the beam horizontally, and another vertically. Circuitry in the camera supplies a steadily increasing voltage to two sets of deflection coils to control the sweep of the beam. These coils are electromagnets, and the increasing voltage causes the field strength of the coils to increase and deflect the beam further. At the end of the sweep of a line, the field that controls the horizontal sweep of the electron beam is abruptly switched off, returning the beam to the starting side of the screen. Likewise, when the beam reaches the bottom of the screen, the field controlling the vertical sweep switches off. The result is that the electron beam follows a tightly packed zigzag path from the top of the screen to the bottom.

The primary difference between the two sweeps is that several hundred horizontal sweeps take place for each vertical one. The rate at which the horizontal sweeps take place is called the horizontal frequency, or the line rate, of the display system. The rate at which the vertical sweeps take place is called the vertical frequency, or frame rate, of the system because one complete image frame is created every time the beam sweeps fully down the screen.
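The arithmetic linking the two frequencies is simple: the line rate equals the frame rate multiplied by the total number of line periods in each frame, including the lines consumed by blanking. The sketch below uses hypothetical round numbers purely for illustration.

```python
# Illustrative only: the horizontal (line) rate is the vertical (frame)
# rate times the total line periods per frame. The figures below are
# hypothetical, chosen only to show the relationship.
visible_lines = 480     # lines actually displayed on screen
blanking_lines = 45     # line periods consumed by vertical blanking
frame_rate_hz = 60      # vertical frequency (frames per second)

total_lines = visible_lines + blanking_lines   # 525 line periods per frame
line_rate_hz = total_lines * frame_rate_hz     # 31,500 lines per second

print(f"Horizontal (line) rate: {line_rate_hz} Hz")   # 31500 Hz
```

With these numbers, several hundred horizontal sweeps (525) occur for every single vertical sweep, just as the text describes.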

The television receiver scans the inside of the cathode ray tube (CRT) in exactly the same fashion. In fact, its electron beam is precisely synchronized to that of the camera. The one-line-at-a-time, left-to-right scan nicely accomplishes the required dimensional conversion.

The video circuits of your computer have to carry out a similar conversion. The only difference is that the image is laid out in a logical two-dimensional array in memory instead of a physical two-dimensional array on a photosensitive layer inside the camera tube (or more likely today, a charge coupled device, or CCD).

To make a scan of the video buffer is a lot easier than sweeping an electron beam. Your computer need only read off addresses in the video buffer in sequential order, one row at a time. To carry out this task, your computer uses a special electronic circuit called the video controller, which scans the memory addresses, reads the data value at each address, and sends the data out in one serial data stream.
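The video controller's scan of the buffer can be sketched in a few lines. This is a software caricature of what the hardware does, using a hypothetical two-row frame buffer; a real controller reads physical memory addresses at pixel-clock speed.

```python
# A minimal sketch of the video controller's job: walk the frame buffer
# one row at a time, left to right, and emit the pixel values as a
# single serial stream. The tiny 2x3 "image" here is hypothetical.
frame_buffer = [
    [10, 20, 30],   # row 0 (top scan line)
    [40, 50, 60],   # row 1
]

def serialize(buffer):
    """Yield pixels row by row, left to right, as one serial stream."""
    for row in buffer:
        for pixel in row:
            yield pixel

stream = list(serialize(frame_buffer))
print(stream)   # [10, 20, 30, 40, 50, 60]
```

The two-dimensional array goes in; a one-dimensional stream comes out. That is the dimensional conversion the raster scan performs.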

Synchronizing Signals

In television, the biggest complication of the scanning systems is ensuring that the camera and television set displaying the image both scan at exactly the same position in the image at exactly the same time. The frequencies used by horizontal and vertical scanning must exactly match. In addition, the camera and television must use exactly the same starting position.

To keep the two ends of the system locked together, television systems use synchronizing signals. They take the form of sharp pulses, which the circuitry inside your monitor converts to the proper scanning signals. The television camera generates one special set of pulses at the beginning of each line and another at the start of each image frame. The television knows to start each line when it receives the pulses.

In your computer, the video controller generates similar synchronizing signals for exactly the same purpose. It sends out one (the horizontal synchronizing signal) before each line in the image and one (the vertical synchronizing signal) at the beginning of each frame. The monitor uses the pulses to trigger its sweep of each line and to reset to the top of the image to start the scan of the next frame.
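The interleaving of synchronizing pulses with image data can be sketched as follows. This is not any real controller's design, just an illustration of the ordering: one vertical sync marker at the start of each frame, one horizontal sync marker before each line.

```python
# Sketch of how synchronizing pulses interleave with pixel data:
# a VSYNC token at the start of each frame, an HSYNC token before
# each line. Tokens and pixel values here are purely illustrative.
def serialize_with_sync(frame):
    yield "VSYNC"             # resets the monitor to the top of the image
    for line in frame:
        yield "HSYNC"         # triggers the monitor's sweep of this line
        yield from line       # then the line's pixel data follows

frame = [[1, 2], [3, 4]]
print(list(serialize_with_sync(frame)))
# ['VSYNC', 'HSYNC', 1, 2, 'HSYNC', 3, 4]
```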

The video controller doesn't scan at just any frequency. It uses standard frequencies, which vary with the geometry of the image—its height and width along with the frame rate. The monitor is tuned to expect these frequencies, using the synchronizing signals only to achieve a precise match.

In conventional television, the synchronizing signals were designed to sit invisibly in the same single data stream that conveyed the picture information. In modern production studios and the connection between your computer and monitor, however, the synchronizing signals are usually kept separate from the picture information. Actually, there are four common ways of combining or not combining video data and synchronizing signals. These include the following:

  • Composite video. The all-together-now television approach that puts all video data and the two required synchronizing signals into one package for single-wire or single-channel transmission systems.

  • Composite sync. Combines the horizontal and vertical synchronizing signals together and puts them on one wire. Another, separate wire carries the image data.

  • Separate sync. Gives a separate wire to each signal: the image data, the horizontal synchronizing signal, and the vertical synchronizing signal.

  • Sync-on-green. Combines the vertical and horizontal synchronizing signals together and then combines them with the data for the green data channel.

In any of these four systems, the relative timing of the synchronizing and data signals is the same. The chief difference is in the wiring. A composite video system requires only one wire. The other systems use three wires for data (one for each primary color). Sync-on-green therefore requires only three connections, composite sync requires four (three colors, one sync), and separate sync requires five (three colors, two sync). The standard video system in most computers uses separate sync; the monitor cable provides five separate connections for the image data and synchronizing signals (three colors plus horizontal and vertical sync).


In a television signal, the data corresponding to the dots on the screen doesn't fill a video signal wall-to-wall. The physics of the first television systems saw to that. To make the image you see, the electron beam in a conventional television picture tube traces a nearly horizontal line across the face of the screen and then, in an instant, flies back to the side of the screen from which it started, but lower by the width of the line it has already traced out. This quick zipping back is termed horizontal retrace, and although quick, it cannot take place instantly because of the inertia inherent in electrical circuits. Consequently, the smooth flow of bytes must be interrupted briefly at the end of each displayed line (otherwise the video information would vanish during the retrace). The video controller must take each retrace into account as it serializes the image.

In addition, another variety of retrace must occur when the electron beam reaches the bottom of the screen when it has finished painting a screen-filling image: vertical retrace. The beam must travel as quickly as possible back up to its starting place, and the video controller must halt the flow of data while it does so.


During retrace, if the electron beam from the gun in the tube were on, it would paint a bright line diagonally across the screen as the beam returns to its proper position. To prevent the appearance of this distracting line, the beam is forcibly switched off not only during retrace but also during a short interval on either side to give the beam time to stabilize. The interval in which the beam is forced off and cannot be turned on by any degree of programming is called blanking because the electron beam can draw nothing but a blank on the screen.

The classic television signal cleverly combines synchronization, retrace, and blanking. The horizontal synchronizing signal is a strong pulse of the opposite polarity from the image data that lasts for the retrace period. The negative excursion of the pulse switches off the electron beam, and its timing synchronizes the scan.

Front and Back Porches

Most computer monitors don't fill their entire screens with data. They center the image (or try to) within darkened borders to minimize the image distortions that sneak in near the edges of the screen. To produce these darkened, protected areas, the electron beam is held at the level that produces a black image for a short while before and after the data of each image line is displayed. The short interval before the data of a line begins is termed the front porch of the signal. The interval after the end of the data but before the synchronizing signal is called the back porch. If you examined the signal, you'd see that it dips down for blanking and pops up to an intermediate height (called black level by broadcasters) to create the porches between blanking and data. Use your imagination and the black-level signals look like shelves—or porches.
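These intervals are what separate the total line period from the visible portion of a line. As a concrete illustration, the commonly published timings for the classic 640x480, 60Hz display mode break each scan line into active video, front porch, sync pulse, and back porch, all measured in pixel-clock periods. Treat the figures as illustrative rather than authoritative.

```python
# Commonly published horizontal timing for the classic 640x480 mode,
# in pixel-clock periods. Each full scan line is the sum of the active
# video plus the front porch, sync pulse, and back porch.
active      = 640   # visible pixels in a line
front_porch = 16    # black-level interval after active video, before sync
sync_pulse  = 96    # horizontal synchronizing pulse (beam blanked)
back_porch  = 48    # black-level interval after sync, before active video

line_total = active + front_porch + sync_pulse + back_porch   # 800 periods
pixel_clock_hz = 25_175_000   # commonly cited pixel rate for this mode

line_rate_hz = pixel_clock_hz / line_total
lines_per_frame = 525          # 480 visible plus 45 blanked line periods
frame_rate_hz = line_rate_hz / lines_per_frame

print(f"Line rate:  {line_rate_hz:,.2f} Hz")   # about 31,468.75 Hz
print(f"Frame rate: {frame_rate_hz:.2f} Hz")   # about 59.94 Hz
```

Note that a fifth of each line period (160 of 800 pixel-clock periods) carries no picture at all; it is spent on the porches and the retrace-covering sync pulse.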

Vertical Interval

The period during which the screen is blanked during the vertical retrace is called, appropriately, the vertical interval. Its physical manifestation is the wide black horizontal bar that's visible between image frames when your television screen or computer monitor picture rolls and requires adjustment of the vertical hold control. The big black bar corresponds to the time during which the signal carries no video information.

The vertical interval is a carryover from the early days of television when vacuum tube electronics needed time to "recover" between fields and frames. This allowed voltages inside the circuitry of the TV set to retreat to the proper levels to begin the next field. Modern electronics—say, for example, those of televisions made in the last 30 years—don't really require the long duration of the vertical interval. Consequently, broadcasters have found the time devoted to it useful for stuffing in extra information. Television stations add a vertical interval test signal (VITS) to monitor the operation of their transmitters and associated equipment. The text for the closed captioning system is also encoded during the vertical interval, as are all sorts of other miscellaneous data.
