|[ Team LiB ]|
Nothing eats memory and storage faster than full-motion, full-color video. The math is enough to make the manufacturers drool. Full-color video, which requires three bytes per pixel, at 640x480 resolution, equals nearly 1MB of digital data per frame. At the 30 frames per second used in the United States and most of the Western Hemisphere (or even the 25 frames per second standard in Europe and elsewhere), a video producer would easily use up 1GB of hard disk space in storing less than one minute of uncompressed digital video information.
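The arithmetic is easy to verify. Here is a quick sketch in Python, using the frame size and rate quoted above:

```python
# Back-of-the-envelope storage math for uncompressed full-color video.
BYTES_PER_PIXEL = 3          # full color: three bytes per pixel
WIDTH, HEIGHT = 640, 480
FPS = 30                     # frame rate used in the United States

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL        # bytes per frame
bytes_per_second = frame_bytes * FPS                  # raw data rate
seconds_per_gb = 1_000_000_000 / bytes_per_second     # how long 1GB lasts

print(f"One frame: {frame_bytes:,} bytes")            # 921,600 — nearly 1MB
print(f"One second: {bytes_per_second:,} bytes")
print(f"1GB holds about {seconds_per_gb:.0f} seconds of video")
```

At roughly 36 seconds per gigabyte, the claim of filling 1GB with less than a minute of video checks out.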
Digital video systems with reasonable storage needs are possible only because of video compression, much as digital photographs are made more compact with image compression. Video compression takes the next step and analyzes not only the image data but also the changes between sequential images. Unlike data-compression systems, which reduce the size of a file without losing any information, image- and video-compression systems are lossy. That is, they throw away information—usually the part that is least perceptible—and what they discard can never be recovered.
Normal still-image compression programs work two dimensionally, analyzing areas and reducing the data required for storing them. The most popular is called JPEG, which stands for the Joint Photographic Experts Group, the group that developed the standard. Video compression works three dimensionally—in addition to compressing individual areas, it processes changes that occur between images in the time dimension. It takes advantage of how little actually changes from frame to frame in a video image. Only the changes get stored or transmitted; the static parts of the image can be ignored. For example, when someone moves against a backdrop, only the pixels in the moving figure need to be relayed to the data stream. The most popular form is called MPEG, for the Moving Picture Experts Group, an organization similar to JPEG (part of the same overall body) but separate from it.
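The idea of encoding only the changes between frames can be sketched in a few lines of Python. This toy delta encoder illustrates only the principle; real MPEG codecs use block-based motion estimation and far more elaborate techniques:

```python
# Toy inter-frame compression: store only the pixels that differ from
# the previous frame, as (index, value) pairs. Static parts of the
# image are simply never transmitted.

def encode_delta(prev_frame, cur_frame):
    """Return only the pixels of cur_frame that differ from prev_frame."""
    return [(i, cur) for i, (old, cur) in enumerate(zip(prev_frame, cur_frame))
            if old != cur]

def decode_delta(prev_frame, delta):
    """Rebuild the current frame from the previous frame plus the changes."""
    frame = list(prev_frame)
    for i, value in delta:
        frame[i] = value
    return frame

frame1 = [10, 10, 10, 10, 10, 10]
frame2 = [10, 10, 99, 99, 10, 10]   # a small "object" moves into view

delta = encode_delta(frame1, frame2)
print(delta)                         # only 2 pixels stored, not 6
assert decode_delta(frame1, delta) == frame2
```

When little moves from frame to frame, the delta is tiny compared with the full image, which is where most of MPEG's savings come from.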
Filters and Codecs
Compressing still images and compressing video data streams are such different processes that developers use distinct terminology for each. Moreover, they even handle the conversion software differently.
The program routines that compress still images are usually termed filters. Most graphic applications have several filters built in to handle a variety of different compression systems and file formats.
Video compression requires either a software- or hardware-based processor that is termed a codec, short for coder/decoder. The most efficient software codecs are proprietary designs that rely on patented technology. Each has its own advantages (such as speed of processing, high compression ratio, or good image quality) that make it best suited for a given type of application. Consequently, many codecs remain in common use. Most multimedia applications include the appropriate codec in their playback software or work with those assumed to be installed in your operating system.
JPEG is at its best compressing color images, because it relies on psycho-visual perception effects to discard image data that you might not be able to perceive. It also works on grayscale images but yields lower compression ratios at a given quality level. It does not work well on monochrome (two-tone or black-and-white) images and requires that color-mapped images be converted to a conventional, continuous-tone color format before processing—which, of course, loses the compression effect of the color mapping.
JPEG processing involves several steps, some of which are optional. Several of these steps may reduce the amount of detail in the image and therefore its quality. The JPEG standard allows you to select how much information is thrown away in these steps, so you can control how closely an image reconstructed from the compressed data will resemble the original. One option is lossless compression, which throws away nothing but redundant information. This typically compresses an image file to 50 percent of its original size. Even invoking lossy compression, you can reconstruct an image visually indistinguishable from the original with a reduction to 33 percent of the original size. The loss becomes apparent somewhere around reductions to 5 to 10 percent of the original data size. You can brute-force the image data down to 1 percent of the original size, although the results will resemble a new work of computer art more than whatever masterpiece you started with.
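The lossy heart of JPEG is quantization of frequency coefficients: each coefficient is divided by a step size and rounded, and coarser steps discard more information. This toy one-dimensional sketch (real JPEG quantizes 8x8 blocks of DCT coefficients, with a full table of step sizes) shows how larger steps zero out detail that can never be recovered:

```python
# Quantization, the step where JPEG's "quality" setting takes effect.
# Larger step sizes produce smaller integers (and more zeros, which
# compress extremely well) at the cost of detail lost forever.

def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(qcoeffs, step):
    return [q * step for q in qcoeffs]

coeffs = [152.6, -31.2, 4.9, 1.4, 0.7, -0.3]   # sample frequency coefficients

for step in (1, 10, 50):                        # larger step = lower "quality"
    q = quantize(coeffs, step)
    restored = dequantize(q, step)
    error = max(abs(a - b) for a, b in zip(coeffs, restored))
    print(step, q, f"max error {error:.1f}")
```

At a step size of 50, most coefficients collapse to zero—excellent for compression, but the fine detail they carried is gone for good.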
The videos that you're most likely to display on your computer use MPEG compression. As with JPEG, MPEG is a committee working under the joint direction of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The formal name of the group is ISO/IEC JTC1 SC29 WG11—the letters and numbers after ISO/IEC simply locate the group within the organization (JTC1 is Joint Technical Committee 1; SC29 and WG11 are Subcommittee 29 and Working Group 11). It began its life in 1988 under the leadership of Leonardo Chiariglione and Hiroshi Yasuda.
Despite the similarity of names, the JPEG and MPEG groups are separate and share few members. Some of the technologies used by the two compression systems are similar, but they are meant for different kinds of data. The most prominent point of divergence is that MPEG achieves most of its data reduction by compressing in the time dimension, encoding only differences between frames in video data.
The first MPEG standard, now usually called MPEG-1 but formally titled "Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 MBit/s," became an international standard in October 1992. It has four parts. The actual compression of video signals is covered under International Standard 11172-2. Related parts describe the compression of audio signals, the synchronization of audio and video, and testing for compliance with the standard.
MPEG-1 is used by CD-i (interactive compact discs) because it achieves a data rate that is within the range of CD drives. To get down that low with the technology existing at the time the standard was developed, the system had to sacrifice resolution. At best, an MPEG-1 image on CD-i has about one-quarter the pixels of a standard TV picture. MPEG also requires hefty processing power to reconstruct the moving image stream, which is why CD-i players can display it directly to your TV or monitor, but only the most powerful computers can process the information fast enough to get it to your display without dropping more frames than an art museum in an earthquake. If you're used to the stuff that pours out of a good VCR, this early MPEG looks marginal, indeed.
MPEG-2 was meant to rectify the shortcomings of MPEG-1, at least in regard to image quality. The most apparent difference appears on the screen. The most common form of MPEG-2 extends resolution to true TV quality (720 pixels horizontally and 480 vertically) while allowing for both standard and wide-screen formats (4:3 and 16:9 aspect ratios, respectively). Although MPEG-2 benefits from advances in compression technology, this higher quality also demands more data. The TV-quality image format requires a bandwidth of about 4Mbps. Beyond that, the MPEG-2 standard supports resolutions into ionospheric levels. All MPEG-2 decoders are also required to step back and process MPEG-1 formats.
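The figures above imply a dramatic compression ratio, which a few lines of Python can confirm (assuming 24-bit color and 30 frames per second for the raw signal):

```python
# Compare the raw data rate of TV-quality video (720x480, 24-bit color,
# 30 frames per second) to the ~4Mbps of a typical MPEG-2 stream.
WIDTH, HEIGHT, BYTES_PER_PIXEL, FPS = 720, 480, 3, 30

raw_bps = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS   # bits/s, uncompressed
mpeg2_bps = 4_000_000                                   # typical MPEG-2 rate

print(f"Raw: {raw_bps / 1e6:.0f} Mbps")                 # about 249 Mbps
print(f"Compression ratio: about {raw_bps / mpeg2_bps:.0f}:1")
```

Squeezing roughly 249Mbps of raw picture into a 4Mbps stream—a ratio on the order of 60:1—is only practical because so much of each frame repeats the one before it.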
In addition to high-quality video, MPEG-2 allows for 5.1 audio channels—that is, left and right main channels (front), left and right rear channels (surround), and a special effects channel for gut-thumping rumbles limited to no higher than 100Hz. (The ".1" in the channel description refers to this limited-bandwidth effects channel, which counts as only a fraction of a full channel.) MPEG-1 allows for only a single stereo pair.
What was initially MPEG-3 has been incorporated into MPEG-2. The concept behind MPEG-3 was to create a separate system for High Definition TV, handling images with resolutions up to 1920 by 1080 pixels at a 30Hz frame rate. Fine-tuning the high levels of MPEG-2 worked well enough for HDTV images that there was insufficient need for a separate standard.
Adopted as ISO/IEC 14496 in early 1999, MPEG-4 defines a standard for interactive media. Although it incorporates a compression scheme (actually, several), it looks at images entirely differently than does MPEG-2. Instead of compressing an entire scene as an image, MPEG-4 makes the scene from video objects, each of which the standard allows to be independently defined and manipulated. A single scene may incorporate several video objects as well as a background. The standard also allows for conventional rectangular images such as movie frames as a special class of video objects. Its compression is optimized for low rates suited to moving images through conventional modems for videophones or small-screen video conferencing. For example, such images may have low resolution (about 176x144 pixels) and a low frame rate, on the order of 10Hz.
In effect, MPEG-4 is a standardization of many of the features of a 3D accelerator (an unremarkable convergence in that both are designed for the same purpose—the effective presentation of action video). It provides compression algorithms for video, textures, and wire-frames as well as a system for manipulating objects, scenes, and sequences. In addition, it incorporates MPEG-J, an application program interface that allows combining MPEG-4 data with Java code to make a cohesive multimedia playback environment.
MPEG-7, formally called the Multimedia Content Description Interface, is a content-retrieval standard rather than a compression or storage standard. It will provide a standard for describing different types of multimedia information, linking each description to the content itself. By searching the descriptions, you will be able to quickly and efficiently find the material that interests you. MPEG-7 is designed to let you search any kind of medium, from still images and graphics to audio to conventional and 3D movies. Its designers even envision its extension to classifying facial expressions and personal characteristics.
MPEG-5 and MPEG-6 are not defined.