
Teletypes

Banks of switches were hardly the way to get big programs of hundreds or thousands of bytes into a computer. What engineers longed for was a device that could directly generate digital codes through a familiar interface, a means of input people already knew how to use. Fortuitously, exactly what they needed was already widely used in the communications industry. The teletype machine traces its roots back to 1902, when researchers set to work on one of the toughest problems that had plagued the printing telegraph since Samuel Morse created the first one in 1845: keeping the sending and receiving ends in step. With the creation of the start-stop code in 1908 (which lives on today in the RS-232C port; see Chapter 11, "Ports"), Charles and Howard Krum produced the first practical automatic printer connected to a telegraph line.
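
To make the start-stop idea concrete, here is a minimal sketch in Python. It illustrates the general technique of asynchronous framing rather than the Krums' actual mechanism: a start bit warns the receiver that a character is coming, the data bits follow, and a stop bit returns the line to its idle state.

    def frame_character(byte_value, data_bits=8):
        # Frame one character for asynchronous (start-stop) transmission.
        # The start bit (0) tells the receiver a character is beginning,
        # the data bits follow least-significant-bit first, and a single
        # stop bit (1) returns the line to idle. Parity is omitted here.
        bits = [0]                                                  # start bit
        bits += [(byte_value >> i) & 1 for i in range(data_bits)]   # data bits
        bits.append(1)                                              # stop bit
        return bits

    # The letter "A" (code 65) framed as 8 data bits, no parity, 1 stop bit:
    # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
    print(frame_character(ord("A")))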

In 1919, the father-son duo created a keyboard-based transmitter, and the foundation for the teletype system was in place. In 1925, they merged their interests with those of a rival developer, Edward Kleinschmidt. The company took the name Teletype Corporation in 1929 and was purchased by the Bell System a year later.

Although the printer was the most important side of the invention for its creators, computer designers eagerly adapted the transmitter to their machines. The transmitter became what is essentially the first computer keyboard—you typed into a typewriter-style keyboard and produced digital code (in the form of a five-bit international code invented by Emile Baudot in 1870 and named after him).

Keypresses on the teletype keyboard produced punched tape that, when fed into a reader, generated the Baudot code. Makers of tabulators, such as IBM, found the same technology could make the punch cards they used. This punch-card technology became the input and output system of ENIAC, the first general-purpose electronic computer. A few years later, the Binac computer used an electrically controlled typewriter keyboard to write magnetic code on tape.

By 1964, Bell Laboratories and General Electric created the first electronic computer terminal for the Multics computer system. With the introduction of this first video data terminal (VDT), all the intermediary steps between the keystrokes and final digital code disappeared. Each keypress produced a code in electronic form that could be immediately processed by the computer.

Regardless of whether punched tape serves as an intermediary or a terminal creates digital codes directly, all these technologies owe one particular bit of heritage to the teletype. You type each letter in exactly the order you want the computer to receive it. The stream of characters is a serial sequence. Communication is reduced to a single, unidirectional stream, much like telling a story in a novel.

From the cyborg viewpoint, all this text comes from the language-processing part of your brain. Your ideas must be fully digested and turned into words, and the words into text, before you can communicate them to your computer.

This same technology survives today in all computers. The interface between man and machine in this process is your computer's keyboard. Under the first major personal computer operating system, the keyboard was your primary (and only) means of control and data input. Even the longest, most intricate programs required that someone, somewhere, enter every command and every piece of data used in writing them through the keyboard.

Pointing Devices

To many people, the keyboard is the most formidable and forbidding aspect of a computer. The keys might as well be teeth ready to chomp down on their fingers as soon as they try to type. Typing just isn't something that comes naturally to most people. Learning to type takes months or years of practice—practice that's about as welcome as a piano lesson on a sunny afternoon when the rest of the neighborhood kids are playing in the pool outside.

Imagine trying to drive your car by typing in commands: turn right 15 degrees, increase speed to 55 miles per hour, stop before driving off that cliff. Oh, well. Brace yourself for the landing. Although the keyboard provides an excellent way to move your properly formatted, language-based thoughts into your computer, it is a poor tool for more sophisticated kinds of control. Even if you could type with your fingers moving at the speed of light, control through the keyboard would still be slow because your brain needs to perform some heavy-duty processing of the data first, at biological rather than electronic speeds.

Teletype-style input poses another problem. Control systems built from it are command driven. You have to type in individual commands, which in turn requires that you know which commands to type. If you don't know the commands, you sit there with your fingers knotted and nothing happening.

One of Douglas C. Engelbart's jobs at the Stanford Research Institute between 1957 and 1977 was to find ways of making computers more accessible and usable by ordinary people. One of his ideas was to put graphics on the computer screen and use a handheld device that would point at different places on the screen as you moved the device across your physical desktop. He made his first model of the device in 1964 and received a patent in 1970 for the first computer mouse. He called it that because it had a tail coming out the end, the interface wire.

Engelbart's concept of coupling the pointing device with a graphical, menu-driven onscreen user interface was later developed at the Palo Alto Research Center (PARC) of Xerox Corporation in its Alto workstation in 1973. Xerox first commercially offered these concepts in its 1981 Star workstation, which inspired the Apple Lisa and Macintosh computers but was itself not a marketing success.

The underlying concept was to let you indicate what function you want your computer to carry out by selecting from a list of commands presented as a menu. You point at the menu selection by physically moving the pointing device, which causes a corresponding onscreen movement of the cursor. One or more buttons atop the device let you signal that you want to select a menu item, a process much easier to do than to describe. The mouse was meant to be small enough to fit under the palm of a hand with a button under a fingertip. Moving the mouse while holding down a button, so that your onscreen selection follows the pointer, is termed dragging the mouse.

Apple Computer, understanding the achievements made at SRI and Xerox with the mouse and graphical interface, incorporated both into its Macintosh computer in 1984. Although you could obtain a mouse and the software to use it for Intel-architecture computers at about the same time, the graphical interface did not gain widespread popularity on Intel-based machines until the introduction of Windows 95.

Graphic Input Devices

A mouse is an indicator rather than a full-fledged input device. When you want to put graphic images into your computer, a mouse works only if you want to draw them anew. Capturing an existing image is more complex because your computer needs a way to represent it.

That need had already been filled by bitmapped display systems (see "Two-Dimensional Graphics," later in the chapter), which break an image into visual units small enough that they blend together in the eye to form a single, solid image. The units used to represent the image are termed pixels, short for picture elements. The process of converting an image or the representation of an object into pixels is called pixelization.
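
As a minimal illustration of the idea (the values and size here are invented for the example and do not represent any particular display format), a bitmapped image is nothing more than a grid of pixel values:

    # A tiny monochrome bitmap: 1 marks a dark pixel, 0 a light one.
    bitmap = [
        [0, 1, 1, 0],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 0],
    ]

    # "Display" the image by mapping each pixel value to a character.
    for row in bitmap:
        print("".join("#" if pixel else "." for pixel in row))

Real displays apply the same principle on a much larger grid, with each pixel carrying a color value rather than a single bit.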

The process is not easy because it must deal with two worlds, the optical and electronic, converting the optical representation of an image or object into an electronic signal with a recognizable digital format. Several devices can make this conversion, including the scanner and digital camera.

The scanner can convert anything you have on paper, or for that matter anything reasonably flat, into computer-compatible electronic form. Dot by dot, a scanner can reproduce photos, line drawings, even collages, in detail sharper than your laser printer can duplicate. Better yet, equip your computer with optical character recognition software, and the images your scanner captures of typed or printed text can be converted into ASCII files for your word processor, database, or publishing system. Just as the computer opened a new world of information management to you, a scanner opens a new world of images and data to your computer.
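
For illustration only, here is one way the scan-then-recognize step might look in Python, assuming the third-party Pillow and pytesseract packages (and the underlying Tesseract OCR engine) are installed; the filenames are hypothetical.

    from PIL import Image        # Pillow, for loading the scanned image
    import pytesseract           # Python wrapper around the Tesseract OCR engine

    # Load the image the scanner produced and recognize the characters in it.
    scanned_page = Image.open("scanned_page.png")
    text = pytesseract.image_to_string(scanned_page)

    # Save the recognized characters as a plain-text (ASCII) file for a
    # word processor, database, or publishing system.
    with open("scanned_page.txt", "w", encoding="ascii", errors="replace") as output:
        output.write(text)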

The digital camera captures what you see in the real world. It grabs a view of not only three-dimensional objects but also entire scenes in their full splendor. As the name implies, the digital camera is the computer equivalent of that old Kodak, one that produces files instead of film. It captures images in a flash—or without one in bright daylight—and requires no processing other than what you do with your photo-editing software. It tops the list of most wanted computer peripherals because it's not only a useful tool but also a neat toy that turns anyone into an artist and can add a good dose of fun to an otherwise drab day at the computer.
