ordinary phone lines (or more high-speed lines, if someone has cash to burn), we transmit the chapter. The Internet service provider is our micron junction link. The telephone wires are our subspace boundary layer. Our ODN is the Internet. Somewhere in an indescribably messy editorial office, our editor logs onto the Internet and retrieves Chapter 2. Picture him sitting at his PC in our drawing of the Enterprise computer. He's over there on the right, looking at one of the terminals or control panels.
The most striking difference between the general design of our PC-linked Internet and the ODN setup of the Enterprise computer is that our technology is more advanced. Our version of the ODN (today's Internet) connects independent computers around the world. There's no mainframe controlling the Internet. On Star Trek, the ODN connects LCARS terminals to a giant mainframe that controls all system functions. This is a very old-fashioned networking design.
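To make the contrast concrete, here is a toy sketch (all node names invented for illustration) of the two layouts: today's Internet as a mesh of independent peers with no privileged machine, versus the ODN as a star in which every LCARS terminal hangs off one mainframe. The little hub-finding function shows why the Enterprise's design has a single point of failure.

```python
# Toy sketch (node names invented): the Internet as a mesh of peers,
# versus the ODN as a star centered on one mainframe.

internet = {                      # peers link to several machines; no hub
    "office_pc": ["isp"],
    "isp": ["office_pc", "backbone"],
    "backbone": ["isp", "publisher"],
    "publisher": ["backbone"],
}

odn = {                           # every terminal connects only to the mainframe
    "main_computer": ["bridge_lcars", "engineering_lcars", "sickbay_lcars"],
    "bridge_lcars": ["main_computer"],
    "engineering_lcars": ["main_computer"],
    "sickbay_lcars": ["main_computer"],
}

def hubs(net):
    """Nodes that every other node depends on: the mainframe problem."""
    return [n for n in net if all(n in net[m] for m in net if m != n)]

print(hubs(internet))  # [] -- no single machine controls the mesh
print(hubs(odn))       # ['main_computer'] -- lose it and every terminal goes dark
```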
Now let's take a closer look at each part of the system and see whether it is a reasonable approximation of what our descendants will be using in a few hundred years.
The LCARS Interface
Suppose Lieutenant Commander Worf is glaring at the computer console screen on the main bridge. He's typing information into the main computer system while he issues a command to the computer to locate Captain Picard, who he assumes is somewhere on the ship. (In fact, Picard has been spirited away by the mysterious superbeing Q, raising problems we'll discuss in a later chapter.)
The LCARS speech module picks up Worf's command. The Technical Manual describes the LCARS as an artificially intelligent module that includes a graphical user interface. It doesn't tell us why the LCARS requires artificial intelligence. On the show itself, we see no indication of artificial intelligence in the LCARS. When addressing the computer, Worf says, "Computer, locate Captain Picard." He doesn't address the LCARS, nor does the LCARS respond. It's always the main computer system's voice that we hear.
As for the graphical user interface, in our time it's a screen that displays text and pictures. But in the twenty-fourth century, the computer's interactions with users will be a good deal more advanced than this. The first question we need to ask is: If we're three hundred years into the future, why would Worf (or anyone) require a keyboard or any type of key-button control system? Won't keyboards have gone the way of the buggy whip?
It won't be all that long before invisible computers sense our presence in a room, cook our food, start our cars, do our laundry, design our clothing, and make it for us. Computers may even detect our emotional states and automatically know how to help us relax after a grueling day at work.
Our primary means of communicating with these computers will be the same one we use with each other: speech. By analyzing
frequency and sound intensities, today's voice recognition software can recognize more than forty thousand English words. It does this by differentiating one phoneme from another. However, to understand what someone is saying (as opposed to simply recognizing that someone has uttered the phoneme p rather than f), the software must be artificially intelligent. It's one thing for voice-recognition software to interpret a spoken command such as "Save file" or "Call Dr. Green's office." It's quite another for software to understand "What are the chances that Picard is still a human inside Locutus?" Phonemes alone don't suffice. Thus we assume the main computer system must be artificially intelligent. But this function is never performed by the LCARS on Star Trek.
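To illustrate the gap, here is a minimal sketch (the command table and messages are hypothetical). A fixed-phrase dispatcher handles "Save file" with a simple table lookup, but an open-ended question defeats it even when every word has been correctly transcribed: recognition without understanding.

```python
# Minimal sketch (hypothetical command table): assume a recognizer has
# already turned the spoken audio into the `transcript` string.

COMMANDS = {
    "save file": lambda: print("File saved."),
    "call dr. green's office": lambda: print("Dialing Dr. Green's office..."),
}

def dispatch(transcript):
    action = COMMANDS.get(transcript.strip().lower().rstrip(".?!"))
    if action:
        action()   # A fixed phrase needs only a table lookup.
    else:
        # Every phoneme was recognized, yet the meaning is out of reach.
        print("Words recognized, meaning unknown:", transcript)

dispatch("Save file")
dispatch("What are the chances that Picard is still a human inside Locutus?")
```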
Many prominent researchers think that tomorrow's computers will understand not only our voices but also our body language. Already, enormous research has been