The Beatrice Interview


Astro Teller

"[People] think that once you're intelligent, you're intelligent --

as a scientist, I can tell you that it doesn't work like that."


interviewed by Ron Hogan

27-year-old Astro Teller is the grandson of legendary atomic scientist Edward Teller (and, on his mother's side of the family, of Nobel-winning economist Gérard Debreu). He's got a bachelor's and a master's in computer science from Stanford, and will be picking up a doctorate from Carnegie Mellon as soon as he finishes his dissertation on the processes by which machines 'learn' to distinguish various information patterns from each other. In the meantime, he's taken an interesting detour with a science-fiction novel called Exegesis.

Set in the year 2000, the slim volume chronicles the e-mail correspondence of Alice Lu, a frustrated graduate student, and "Edgar," the artificial intelligence program she's created without quite knowing how. As Edgar's "mind" develops, he becomes increasingly reluctant to exist in a subservient position to humans, even after the government takes an active interest in 'persuading' Edgar to join their team. The e-mail format allows Teller to pack his concepts into a highly focused, densely layered story that's part technothriller, part philosophical inquiry.

RH: One of the first things I want to ask you about is the way in which artificial intelligence combines the two strains of science that run through your family. You've got 'hard' and 'soft' science running through your veins.

AT: The 'hard' science side has certainly had an impact on me, raising issues like the responsibility of scientists. I don't think, though, that the writer side of me came from my dad being a philosopher or my grandfather on my mother's side being an economist. But the fact that most of my family has been scientists is probably why I ended up in science, sure.

RH: You grew up in an environment where you were encouraged to ask deep questions.

AT: On one hand, I had a lot of people in my family who were very into science and encouraged me to be interested. On the other hand, when my father was spending time with us, he would talk to us about philosophy. If my brother and I were willing, and we happened to be, we'd talk about the mind-body problem, the transporter room problem, topics that fueled my interest in questions that are outside the bounds of 'straight' science.

RH: What got you interested in computer science?

AT: I had a series of computers as a kid, and enjoyed working with them -- not just playing videogames, but writing programs -- though that wasn't the thing that made up my mind. My first year at Stanford I was taking a computer science course for credit, and the TA was so energized by what he was doing, so inspired to be in computer science, that I decided I wanted to feel like that. It pushed me to take more computer science courses even though I didn't know much about artificial intelligence before I got to college. There's not only real science in AI; there's this opportunity for being particularly creative, for romanticizing the future, that you might not get in, say, high-energy physics.

RH: It's interesting that you should say that, because I think that for our generation, which was the first to have a chance to grow up with computers almost from the start, computer science is a field with the same allure that atomic research had for your grandfather and his contemporaries.

AT: Absolutely. That's why I'm in artificial intelligence. When I say 'romance,' I mean that the world is wide open. We're just starting to get an idea of what computers can do. So many opportunities to discover the kinds of problems computers can tackle come up every day. It's absolutely fascinating to be in this field.

I don't think, though, that it's the only field that's like that. Genetics is also a really amazing field in which, despite the amount of money that we've thrown into it, we've just scratched the surface of what's going on. Another field is neurology, where we're studying the brain not at the behavioral level, like psychologists do, and not trying to remodel it, which by some accounts is what artificial intelligence research is trying to do, but actually getting inside there and figuring out how the components work.

RH: Elsewhere, you've described Exegesis as an attempt to fill a gap in previous 'creation' stories. What is that gap?

AT: In a lot of these stories, the character that isn't human is being used as a foil for what humanness is. But, through poetic license and the feeling that the story wouldn't be 'real' otherwise, all of these stories -- Frankenstein, Pygmalion, HAL, "Flowers for Algernon," even Jesus Christ -- really don't allow the thing that's inhuman to be inhuman.

Take Frankenstein's monster. He's a monster in the sense that he's large and grotesque. But he really is human. What we're seeing is people's inability to see past superficial details. Pygmalion continues to treat Galatea like a statue once she's come to life, not as a person, which is why she leaves him. And that's why Eliza Doolittle leaves Henry Higgins in Shaw's modern Pygmalion. These are all interesting stories, and one way of getting at what it is to be human, but by really allowing the character to be inhuman, you can define the boundary of humanity by watching a human and a non-human try to interact.

RH: One of the things I like about this story is Alice's incredulity that Edgar doesn't behave like a human, because as I read it, I'm thinking, "Well, what do you expect a computer program to behave like?"

AT: There's a lot of things he just doesn't get. In the book, the NSA is trying to socialize Edgar, and he just doesn't have the equipment to be socialized. It's not just about psychology; you have to have built-in desires to fit into the world, and he just doesn't have that. He doesn't understand the difference between truth and fiction. That's not a misunderstanding, or a missing component, but a result of his lack of certain senses of perception.

We often assume that an artificial intelligence stuck in a box would be able to think about the world the way we do. But I feel this table -- and all philosophical questions aside for a moment, I'm pretty sure about this table. Edgar has nothing but words. Everything is just stories to him. He has no absolute basis from which to judge things.

RH: Perhaps it's unconscious on his part, but it feels like he deals with his capture and interrogation with an increasingly dark, if very deadpan, humor.

AT: There's a very strong pressure for Edgar to learn to approach the world the way the world approaches him -- not to 'mimic' it, necessarily. I imagine that the NSA treats him very drily with their language, and the way that he responds to them becomes dry in turn, although Edgar himself would never recognize or describe it as humorous.

RH: Did you know from the beginning that you were going to write in e-mail format?

AT: The first two pages that I wrote were "Hello Alice" and "Goodbye Alice." I had feelings about the beginning and the end of the story, although I couldn't tell you then why I had those feelings, and I tried to figure out a story that would make you have those feelings. It turned out that the way to do that was by writing it as e-mail.

RH: The technique held my attention by demonstrating how Edgar comes to learn to use language.

AT: I used e-mail to focus on Edgar's language because it says so much about how he develops. He really isn't anything more than what he says and does. The format became a convenient way for me to show within the confines of fiction some of the ideas I have about artificial intelligence -- what a computer would learn, what it wouldn't learn.

People who aren't in artificial intelligence, for example, don't seem to appreciate how different it is to read about something and to see an image of it. They think that once you're intelligent, you're intelligent; as a scientist, I can tell you that it doesn't work like that. I went to great lengths to demonstrate that Edgar would have no idea how to look at a picture, and because he doesn't get to see anything -- or to use any of the sensory perceptions we take for granted -- I used e-mail to make the reader 'see' reality the way Edgar sees it. You're in his world looking out to what little of Alice you're allowed to see.

RH: Why is the novel set in the year 2000?

AT: I wanted to invoke the millennial fears society has of artificial intelligence, and I chose 2000 over 2001 first because that's when most people will be celebrating the millennium even though it's not the real changeover date, and second because 2001 is already firmly associated with Arthur C. Clarke's book.

RH: 2000 is also just far enough in the future that we don't have to stretch our imaginations too much to envision a whole new world.

AT: Right, and the format kept me close to the present as well. That ASCII text format just wouldn't be plausible in twenty years. But if I'd set the book in 1997, it would be outdated as soon as it came out.

RH: About how long did it take you to write this book?

AT: It took a year to write the rough draft. I had imposed a one-year deadline on myself to get the rough draft done, as part of an arrangement with a friend to write our first novels together. But once I finished, there was a period where I went back to computer science research. I showed the novel to a few people but I wasn't really trying to get it published, and then through a series of events that happened rather quickly, it ended up being accepted for publication, and I spent a reasonable amount of time during the next four or five months revising it.

RH: Do you intend to continue writing fiction?

AT: I've always wanted to be a fiction writer, long before I wanted to be a scientist. I never thought it was plausible, which is why I didn't pursue it until quite recently. I'm considering continuing in academia, but it's not at all clear that I'll be a professor this time next year. I most certainly will write another book at some point. And there are other things that I might do. I said half-jokingly to a friend of mine that it would be a good idea to spend five to ten years becoming proficient in a different field and then write a novel about what I found most interesting about that field.

That's not to say that I'm abandoning artificial intelligence, but if it turned out that my best output from artificial intelligence research is this book, that wouldn't be so bad.


All materials copyright © 1997 Ron Hogan