The Computers of Star Trek
ordinary phone lines (or higher-speed lines, if someone has cash to burn), we transmit the chapter. The Internet service provider is our micron junction link. The telephone wires are our subspace boundary layer. Our ODN is the Internet. Somewhere in an indescribably messy editorial office, our editor logs onto the Internet and retrieves Chapter 2. Picture him sitting at his PC in our drawing of the Enterprise computer. He’s over there on the right, looking at one of the terminals or control panels.
    The most striking difference between the general design of our PC-linked Internet and the ODN setup of the Enterprise computer is that our technology is more advanced. Our version of the ODN—today’s Internet—connects independent computers around the world. There’s no mainframe controlling the Internet. On Star Trek, the ODN connects LCARS terminals to a giant mainframe that controls all system functions. This is a very old-fashioned networking design.
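The contrast can be sketched in a toy simulation (the node names and the `reachable` function below are invented for illustration, not anything from the show or the Technical Manual): in a hub-and-spoke network like the Enterprise’s ODN, losing the central mainframe severs every connection, while a decentralized mesh like the Internet simply routes around a failed node.

```python
from collections import deque

def reachable(links, start, down=frozenset()):
    """Breadth-first search: which nodes can `start` still reach,
    skipping any nodes listed in `down` (failed machines)?"""
    if start in down:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in links.get(node, ()):
            if nbr not in seen and nbr not in down:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Hub-and-spoke, like the Enterprise's main computer:
# every terminal talks only to the central mainframe.
mainframe = {
    "core": ["bridge", "engineering", "sickbay"],
    "bridge": ["core"], "engineering": ["core"], "sickbay": ["core"],
}

# Mesh, like the Internet: independent machines with redundant links.
internet = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"],
}

# Knock out the hub: the terminals are cut off from one another.
print(reachable(mainframe, "bridge", down={"core"}))  # {'bridge'}
# Knock out one mesh node: the rest still reach each other.
print(reachable(internet, "a", down={"b"}))           # {'a', 'c', 'd'}
```

This is why the authors call the ODN design old-fashioned: the mainframe is a single point of failure, whereas the Internet was built so that no one machine is indispensable.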
    Now let’s take a closer look at each part of the system and see if they are reasonable approximations of what our descendants will be using in a few hundred years.

The LCARS Interface
    Suppose Lieutenant Commander Worf is glaring at the computer console screen on the main bridge. He’s typing information into the main computer system while he issues a command to the computer to locate Captain Picard, who he assumes is somewhere on the ship. (In fact, Picard has been spirited away by the mysterious superbeing Q, raising problems we’ll discuss in a later chapter.)
    The LCARS speech module picks up Worf’s command. The Technical Manual describes the LCARS as an artificially intelligent module that includes a graphical user interface. It doesn’t tell us why the LCARS requires artificial intelligence. On the show itself, we see no indication of artificial intelligence in the LCARS. When addressing the computer, Worf says, “Computer, locate Captain Picard.” He doesn’t address the LCARS, nor does the LCARS respond. It’s always the main computer system’s voice that we hear.
    As for the graphical user interface, in our time it’s a screen that displays text and pictures. But in the twenty-fourth century, the computer’s interactions with users will be a good deal more advanced than this. The first question we need to ask is: If we’re three hundred years into the future, why would Worf (or anyone) require a keyboard or any type of key-button control system? Won’t keyboards have gone the way of the buggy whip?
    It won’t be all that long before invisible computers sense our presence in a room, cook our food, start our cars, do our laundry, design our clothing, and make it for us. Computers may even detect our emotional states and automatically know how to help us relax after a grueling day at work.
    Our primary means of communicating with these computers will be the same one we use with each other: speech. By analyzing
frequency and sound intensities, today’s voice recognition software can recognize more than forty thousand English words. It does this by differentiating one phoneme from another. However, to understand what someone is saying (as opposed to simply recognizing that someone has uttered the phoneme p rather than f), the software must be artificially intelligent. It’s one thing for voice-recognition software to interpret a spoken command such as “Save file” or “Call Dr. Green’s office.” It’s quite another for software to understand “What are the chances that Picard is still a human inside Locutus?” Phonemes alone don’t suffice. Thus we assume the main computer system must be artificially intelligent. But this function is never performed by the LCARS on Star Trek.
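The phoneme-recognition half of the problem can be sketched as simple pattern matching (every number and name below is invented for illustration; real speech recognizers use far richer acoustic models): a nearest-centroid classifier separates a plosive like p, which shows a brief burst with little sustained high-frequency energy, from a fricative like f, which shows long, sustained high-frequency noise.

```python
# Toy nearest-centroid phoneme classifier.
# Each feature vector is (high-frequency energy ratio, duration in ms).
# The centroid values are made up for illustration, not measured data.
CENTROIDS = {
    "p": (0.2, 15.0),   # plosive: short burst, little sustained HF energy
    "f": (0.7, 120.0),  # fricative: long, strong sustained HF noise
}

def classify(features):
    """Return the phoneme whose centroid is nearest (squared Euclidean
    distance; no feature scaling -- this is a toy, not a real model)."""
    fx, fy = features
    def dist(label):
        cx, cy = CENTROIDS[label]
        return (fx - cx) ** 2 + (fy - cy) ** 2
    return min(CENTROIDS, key=dist)

print(classify((0.25, 20.0)))   # a short, weak burst -> 'p'
print(classify((0.65, 110.0)))  # long, noisy frame   -> 'f'
```

Telling p from f is this kind of measurable pattern matching; understanding what the whole sentence means is the separate, far harder problem the paragraph above describes, and that is where artificial intelligence comes in.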
    Many prominent researchers think that tomorrow’s computers will understand not only our voices but also our body language. Already, enormous research has been