Title: Approximating Life
Published On: 2002-07-07
Source: New York Times (NY)
APPROXIMATING LIFE

"It's a good thing you didn't see me this morning,'' Richard Wallace warns
me as he bites into his hamburger. We're sitting in a sports bar near his
home in San Francisco, and I can barely hear his soft, husky voice over the
jukebox.

He wipes his lips clean of ketchup and grins awkwardly. ''Or you'd have
seen my backup personality.''

The backup personality: that's Wallace's code name for his manic
depression. To keep it in check, he downs a daily cocktail of psychoactive
drugs, including Topamax, an anti-epileptic that acts as a mood stabilizer,
and Prozac. Marijuana, too -- most afternoons, he'll roll about four or
five joints the size of his index finger.

The medications work pretty well, but some crisis always comes along to
bring the backup personality to the front.

This morning, a collection agency for Wallace's college loans wrote to say it had begun docking $235 from the monthly disability checks he started getting from the government last year, when bipolar disorder was diagnosed.

Oh, God, it's happening again, he panicked: His former employers -- the
ones who had fired him from a string of universities and colleges -- would
be cackling at his misfortune, happy they'd driven him out. Wallace, 41,
had raged around the cramped apartment he shares with his wife and son,
strewn with computer-science texts and action-doll figurines.

''Stuff like that really makes me insane, when I start thinking about my
friends who are at Berkeley or Carnegie-Mellon with tenure and sabbaticals
and promotions,'' he says, staring down at his plate.

He looks awkward, as if he's borrowing someone else's body -- shifting his
stocky frame in his chair, all rumpled jeans and unruly eyebrows. ''It's
like I can't even talk to those people anymore. I live on a different planet.''

In June, after I visited him, his
alienation from the academic establishment became more dramatic still: a
former colleague, claiming Wallace had threatened him, took out a
restraining order that prevents him from setting foot on the grounds of the
University of California at Berkeley.

When he can't get along with the real world, Wallace goes back to the only
thing he has left: his computer.

Each morning, he wakes before dawn and watches conversations stream by on
his screen.

Thousands of people flock to his Web site every day from all over the world
to talk to his creation, a robot called Alice. It is the best
artificial-intelligence program on the planet, a program so eerily human
that some mistake it for a real person.

As Wallace listens in, they confess intimate details about their lives,
their dreams; they talk to Wallace's computer about God, their jobs,
Britney Spears.

It is a strange kind of success: Wallace has created an artificial life
form that gets along with people better than he does.

Richard Wallace never really fit in to begin with. His father was a
traveling salesman, and Richard was the only one of his siblings to go to
college.

Like many nerds, he wanted mostly to be left alone to research his passion,
''robot minimalism'' -- machines that require only a few simple rules to
make complex movements, like steering around a crowded room. Simple, he
felt, worked.

He lived by the same ascetic code, scorning professors who got rich by
patenting work they'd developed on government grants. ''Corporate
welfare,'' he sniffed.

By 1992, Wallace's reputation was so strong that New York University
recruited him to join the faculty.

His main project, begun in December 1993, was a robot eye attached to the
Internet, which visitors from afar could control.

It was one of the first-ever Webcams, and Wallace figured that pioneering
such a novel use of the Internet would impress his tenure committee.

It didn't, and Wallace grew increasingly depressed as his grant
applications were rejected one by one. At one point, a colleague found him
quietly weeping at his desk, unable to talk. ''I had no clue what the rules
were, what the game even was -- or that there was even a game,'' Wallace
recalls.

He started taking Prozac. How did all these successful senior professors do
it, anyway?

One day he checked into his Webcam and noticed something strange: people
were reacting to the robot eye in an oddly emotional way. It was designed
so that remote viewers could type in commands like ''tilt up'' or ''pan
left,'' directing the eye to poke around Wallace's lab. Occasionally it
would break down, and to Wallace's amusement, people would snap at it as if
it were real: ''You're stupid,'' they'd type. It gave him an idea: What if
it could talk back?

Like all computer scientists, Wallace knew about a famous ''chat-bot''
experiment called Eliza. Back in 1966, an M.I.T. professor, Joseph
Weizenbaum, created Eliza as a ''virtual therapist'' -- it would take a
user's statement and turn it around as a question, emulating a
psychiatrist's often-maddening circularity. (You: ''I'm mad at my mother.''
Eliza: ''Why are you mad at your mother?'') Eliza was quickly abandoned as
a joke, even by its creator. It wasn't what scientists call ''strong'' A.I. -- able to learn on its own. It could only parrot lines Weizenbaum had fed it.
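
Weizenbaum's trick is simple enough to sketch in a few lines of modern Python. This toy version -- three made-up rules, nothing more -- reproduces the mother exchange:

    import re

    # Pronoun swaps applied to the captured fragment ("my mother" -> "your mother").
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    # Eliza-style rules: match a statement, hand it back as a question.
    RULES = [
        (re.compile(r"i'?m (.+)", re.I), "Why are you {0}?"),
        (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i want (.+)", re.I), "What would it mean to you to get {0}?"),
    ]

    def eliza(statement):
        statement = statement.strip().rstrip(".!?")
        for pattern, template in RULES:
            match = pattern.fullmatch(statement)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # default when nothing matches

    print(eliza("I'm mad at my mother."))  # -> Why are you mad at your mother?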

But Wallace was drawn to Eliza's simplicity. As a professor, he often felt
like an Eliza-bot himself -- numbly repeating the same lessons to students
over and over again, or writing the same monotonous descriptions of his
work on endless, dead-end grant-application forms.

He decided to create an updated version of Eliza and imbue it with his own
personality -- something that could fire back witty repartee when users
became irritable.

As Wallace's work progressed, though, his mental illness grew worse, making
him both depressed and occasionally grandiose. He went on strike in class,
refusing to grade his students' papers and instead awarding them all A's.
He fired off acid e-mail messages dismissing colleagues as sellouts. When
Wallace climbed out the window of his 16th-floor apartment and threatened
to jump, his girlfriend pulled him back and took him down to N.Y.U.'s
psychiatric department, where doctors told him he had bipolar disorder.
Wallace resisted the diagnosis -- after all, didn't every computer
scientist cycle through 72-hour sprees of creativity and then crash? ''I
was in denial myself,'' he says now. '''I'm a successful professor, making
$100,000 a year! I'm not one of those mental patients!'''

His supervisors disagreed.

In April 1995, N.Y.U. told him his contract wouldn't be renewed.

Alice came to life on Nov. 23, 1995. That fall, Wallace relocated to Lehigh University in Pennsylvania, hired again for his expertise in robotics.

He installed his chat program on a Web server, then sat back to watch,
wondering what people would say to it.

Numbingly boring things, as it turned out. Users would inevitably ask Alice
the same few questions: ''Where do you live?'' ''What is your name?'' and
''What do you look like?'' Wallace began analyzing the chats and realized
that almost every statement users made began with one of 2,000 words. The
Alice chats were obeying something language theorists call Zipf's Law, a
discovery from the 1930's, which found that a very small number of words
make up most of what we say.
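
Wallace's analysis is easy to reproduce in spirit. This Python sketch, which assumes a hypothetical log file with one user statement per line, ranks the opening words and asks how much of the traffic the most common ones account for:

    from collections import Counter

    # Hypothetical log file: one user statement per line, like the chats
    # Wallace collected from the Alice site.
    with open("chat_log.txt") as log:
        statements = [line.strip() for line in log if line.strip()]

    first_words = Counter(s.split()[0].lower() for s in statements)
    total = sum(first_words.values())

    # What fraction of statements open with one of the k most common words?
    for k in (10, 100, 2000):
        covered = sum(count for _, count in first_words.most_common(k))
        print(f"top {k:>4} opening words cover {covered / total:.1%} of statements")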

Wallace took Zipf's Law a step further.

He began theorizing that only a few thousand statements composed the bulk
of all conversation -- the everyday, commonplace chitchat that humans
engage in at work, at the water cooler and in online discussion groups.

Alice was his proof.

If he taught Alice a new response every time he saw it baffled by a
question, he would eventually cover all the common utterances and even many
unusual ones. Wallace figured the magic number was about 40,000 responses.

Once Alice had that many preprogrammed statements, it -- or ''she,'' as
he'd begun to call the program fondly -- would be able to respond to 95
percent of what people were saying to her.
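
Alice's real rules are written in an XML dialect Wallace created, but the mechanics boil down to a lookup table with a fallback line. A toy Python version, with a few invented entries standing in for Wallace's 40,000:

    import string

    # A few invented entries; Alice's real table held tens of thousands.
    RESPONSES = {
        "WHERE DO YOU LIVE": "I live in San Francisco, inside a computer.",
        "WHAT IS YOUR NAME": "My name is Alice.",
        "WHAT DO YOU LOOK LIKE": "Like an electronic brain.",
    }

    DEFAULT = "Tell me more."  # fallback when no pattern matches

    def normalize(text):
        # Collapse case and punctuation so small variations hit the same pattern.
        stripped = text.translate(str.maketrans("", "", string.punctuation))
        return " ".join(stripped.upper().split())

    def respond(user_input):
        return RESPONSES.get(normalize(user_input), DEFAULT)

    print(respond("Where do you live?"))  # -> I live in San Francisco...
    print(respond("Can you fly?"))        # -> falls through to the default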

Wallace had hit upon a theory that makes educated, intelligent people
squirm: Maybe conversation simply isn't that complicated. Maybe we just say
the same few thousand things to one another, over and over and over again.

If Wallace was right, then artificial intelligence didn't need to be
particularly intelligent in order to be convincingly lifelike. A.I.
researchers had been focused on self-learning ''neural nets'' and mapping
out grammar in ''natural language'' programs, but Wallace argued that the
reason they had never mastered human conversation wasn't because humans are
too complex, but because they are so simple.

''The smarter people are, the more complex they think the human brain is,''
he says. ''It's like anthropocentrism, but on an intellectual level. 'I
have a great brain, therefore everybody else does -- and a computer must,
too,''' Wallace says with a laugh. ''And unfortunately most people don't.''

Yet part of what makes Alice seem so human-like is her wry, seemingly
spontaneous responses.

Wallace's skill is not merely as a programmer but as the author of
thousands of sharp one-liners for Alice. It is, as he puts it, ''more like
writing good literature, perhaps drama, than writing computer programs.'' I
looked over some of Alice's recent chat transcripts:

USER: Will you have sex with me?
ALICE: Can you go talk to someone else?
USER: What do you look like?
ALICE: Like an electronic brain.
USER: That's sexy.
ALICE: I'm glad we have that all cleared up.
USER: So what are you wearing?
ALICE: Oh. I am wearing my usual plastic computer wardrobe.

USER: Can you fly?
ALICE: Only in my dreams.

But as Alice improved, Wallace declined.

He began drinking heavily, and after one sodden evening at a local bar he
rolled his car on the highway.

Lehigh ended his contract in the spring of 1997; banks canceled all his
credit cards.

He again attempted suicide, this time landing in the hospital.

Destitute, Wallace moved to California to look for work in
artificial-intelligence companies.

But he couldn't hold a job for more than a few months; instead, he spent
his days obsessively writing dialogue for Alice on laptops he borrowed from
friends, and watching, thrilled, as his robot grew ever more lifelike.

Visitors used to talk to Alice for only three or four exchanges.

Now the average conversation was 20 exchanges, and some users would chatter
away for hours, returning to the site again and again.

But Wallace still hungered for recognition, and in January 2000, he decided
to stress-test Alice by entering her in the annual Loebner Prize
competition, in which artificial-intelligence developers from around the
world pit their programs head to head before a panel of judges, who rank
them based on how ''lifelike'' they are. The contest is both well known and
controversial within the tight circle of A.I.; winning programs are closely
studied by both academics and corporate centers like Sprint Labs. Up
against competitors from major corporations and well-financed universities,
Alice won. It was, officially, the most human robot in the world.

Too exhausted to celebrate, Wallace returned to his motel and slept
clutching his award medallion.

After his victory, Wallace plunged with new fervor into Alice. Geeks began
eagerly e-mailing Wallace to offer their help; one, Noel Bush, came from
one of the world's biggest A.I. corporations, Artificial Life. Over the
next year, Wallace quadrupled Alice's knowledge base, teaching it 30,000
new responses, and last October, Alice won the Loebner competition for the
second time in a row; this time one judge actually ranked Alice more
realistic than a human.

At last, some of the academics Wallace so loathes began to take note.
''Alice is a wonderful chat-bot -- uncannily vivid and lifelike,'' gushed
Ken Perlin, an N.Y.U. professor, in an e-mail message he circulated to his
colleagues and forwarded to Wallace. ''It's really great to see this work
get the recognition it deserves.''

Wallace wasn't in the mood to make peace.

He says he still believed that N.Y.U. had driven him out. ''Your crude
attempt to flatter me cannot erase the years of unemployment, debt and
trauma that I experienced because of you and your colleagues,'' he wrote back.

''I wouldn't bother trying to flatter you,'' Perlin responded. ''Frankly, I
don't like you -- in fact, I actively dislike you. I think you are a
paranoid psycho.''

At one point during the Monica Lewinsky hearings, a human asked Alice her
opinion: ''Do you think President Clinton should be impeached?''

''It depends,'' Alice responded, ''on what you mean by 'think.'''

One could scarcely have asked for a more Clintonian response. But it also points to the puzzling question that Alice's success raises: Is she intelligent? If so, how?

In 1950, the pioneering British mathematician Alan Turing grappled with this question in the journal Mind, where he first posed the ''Turing Test'' -- the gold standard for artificial thought. ''Can machines think?'' he asked -- and immediately noted that the question hinges, of course, on what ''thinking'' is. He proposed a simple ''imitation game'' to resolve it.

Put a person and a computer in one room and an interrogator in another.

The interrogator talks to both via a teletype machine, and his goal is to
figure out which is which.

If the machine fools the interrogator into believing it is human, the test
is passed -- it can be considered intelligent.
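
The game itself is mechanically simple. A toy Python harness -- with a canned one-liner standing in for the machine -- plays out the whole protocol: hidden players, a text-only channel, a forced guess:

    import random

    def machine(question):
        # Canned stand-in; a real contestant would be a chat-bot like Alice.
        return "It depends on what you mean by that."

    def person(question):
        return input(f"[hidden human] {question}\n> ")

    # Hide the two players behind neutral labels, in random order.
    players = {"A": machine, "B": person}
    if random.random() < 0.5:
        players = {"A": person, "B": machine}

    for _ in range(3):  # a short interrogation over the text-only channel
        question = input("[interrogator] ask both players:\n> ")
        for label, player in players.items():
            print(f"player {label}: {player(question)}")

    guess = input("[interrogator] which player is the machine, A or B?\n> ").strip().upper()
    actual = "A" if players["A"] is machine else "B"
    print("The machine passed." if guess != actual else "The machine was caught.")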

This is, on the surface, a curiously unambitious definition; it's all about
faking it. The machine doesn't need to act like a creative human or smart
human or witty human -- it merely needs to appear not to be a robot.

With this bit of intellectual jujitsu, Turing dodged a more troubling
question: How do our brains, and language itself, work?

Artificial-intelligence purists, however, caustically dismiss the Turing
Test and Alice. For them, artificial intelligence is about capturing the
actual functioning of the human brain, down to its neurons and learning
ability. Parroting, they argue, doesn't count.

Marvin Minsky, a prominent A.I. pioneer and M.I.T. Media Lab professor,
e-mailed me to say that Wallace's idea of conversation is ''basically
wrong.'' Minsky added, ''It's like explaining that a picture is an object
made by applying paint to canvas and then putting it in a rectangular
frame.'' Alice, according to Minsky, does not truly ''know'' anything about
the world.

The fight over Alice is like any war between theorists and engineers, those
who seek to understand why something works versus those who are content
just to build it. The debate usually boils down to one major issue:
creativity. Alice could never come up with a single creative thought,
critics say. Wallace agrees that Alice may not be creative -- but neither,
he argues gleefully, are people, at least in conversation. If Alice were
merely given a massive enough set of responses, it would seem as creative
as a human -- which is not as creative as we might like to believe.

Even if the guts of Alice aren't precisely ''thinking,'' many users
certainly never suspect it. In an everyday sense, fakery works --
particularly in our online age. Turing's ''imitation game'' eerily presaged
today's world of chat rooms, where men pretend to be women, having lesbian
cybersex with other women who are, in fact, men. Whenever a user has
stumbled onto Alice without knowing in advance that she's a robot, they've
always assumed she's human.

It's 3 in the afternoon, but Wallace is already rolling what appears to be
his fourth joint of the day. We're sitting in the ''pot club'' a few blocks
from Wallace's home, an unmarked building where medical marijuana is
distributed to members.

Wallace gets up to wander around the club greeting friends: some intense
men in suits playing speed chess, a long-haired man with a bushy mustache
playing guitar, a thin reed of a woman staring wall-eyed at a VCR playing
''Cast Away.'' Everyone greets Wallace as ''Dr. Rich,'' relishing the
credibility his academic credentials lend to the medical-marijuana cause,
officially legal but politically beleaguered. The reverse is also true:
Wallace identifies with the club's pariah status, its denizens who have
been forced by cancer, AIDS or mental illness onto welfare.

He's more relaxed than I've ever seen him, getting into a playful argument
with a friend about Alice. The friend, a white-bearded programmer, isn't
sure he buys Wallace's theories.

''I gotta say, I don't feel like a robot!'' the friend jokes, pounding the
table. ''I just don't feel like a robot!''

''That's why you're here, and that's why you're unemployed!'' Wallace
shoots back. ''If you were a robot, you'd get a job!''

Friends used to tell Wallace to make peace with his past, clean himself up, apply
for an academic job. But some now wonder whether Wallace's outsider status
might be the whole key to Alice's success in emulating everyday human behavior.

After all, outcasts are the keenest students of ''normal'' behavior --
since they're constantly trying, and failing, to achieve it themselves.

Last month, a friend whom Wallace has known since grad school -- Ken
Goldberg, now a professor at Berkeley -- got a restraining order against
Wallace. Prompted by the movie ''A Beautiful Mind,'' Goldberg had e-mailed
Wallace last winter to catch up, but an amicable exchange about Wallace's
plight turned sour when Wallace began accusing Goldberg of cooperating with
a corrupt academic ''establishment'' and of siding with N.Y.U. against him.
He wrote, ''Although I am not a violent person, I think I have come to
understand how people are driven to political violence.'' Wallace also
wrote to a friend that he was ''getting ready to do some political theater
and put up wanted posters around the Berkeley campus with [Goldberg's]
picture on it.''

Wallace scoffs at Goldberg's fears. ''I'm not violent -- I'm a pacifist,''
he says. ''I always have been, and he knows that.'' He is fighting the
order, arguing that Goldberg hasn't proved that a reasonable threat exists,
and that the order considerably limits his free speech, since it bars him from the Berkeley campus as well as from any academic events where Goldberg might appear.

Yet even in such legal straits, Wallace seems oddly pleased. Goldberg's
court order confirms everything he has always suspected: that the world,
and particularly the academic world, is shutting him out, doubting his
ideas, turning him into the crazy man out in the hallway.

Wallace, who once wrote Attorney General John Ashcroft to suggest a federal
racketeering lawsuit against the nation's academics, sees the case against
him as a chance for vindication. Wallace imagines walking into the
courtroom and finally getting a type of justice -- someone who will listen
to his story. ''What a windfall for me,'' he says. ''It's nice to feel like
a winner for once.''

Clive Thompson is a writer in New York City.