David Gondek: Scientist-in-Residence

by Jeremy Ohmes

David Gondek was a key member of the IBM team that created Watson, the artificial intelligence (AI) system that beat the best human players on Jeopardy! in 2011. This spring Gondek will join the SAIC community as the school's first scientist-in-residence, teaching a liberal arts course called Algorithms, Information, and AI. He will also give a lecture about his research on February 19 at 4:30 p.m. in the SAIC Ballroom, 112 South Michigan Avenue, as part of SAIC's Conversations on Art and Science lecture series.

What is your background?

I'm a computer scientist. I've worked on a number of things: cryptanalysis, or code breaking, and game theory, designing algorithms for economic games. Then I settled on a branch called machine learning, or data mining: looking for patterns in data to visualize it, answer questions, or give it structure.

Can you talk about the Watson project? What were some of the challenges—and how did you tackle them?

We had no idea whether a computer could answer any Jeopardy! questions with any kind of accuracy. I was involved in the feasibility analysis that asked if this was even remotely possible. I said no, because the domain is so wide. It's not the same 12 categories in every Jeopardy! game; the categories are clever and often contain puns. There have been some successes in limited domains for question answering: if you call your bank and the automated system asks what you want to do, you can ask for your checking balance. But you can't say, "What good movies do you recommend?" You can't go out of domain. So that was one of the daunting things about the open nature of Jeopardy!.

We took a very scientific approach: we would train Watson on thousands of old questions, giving it the question and the correct answer. It would run analytics on textual evidence, geographic reasoning, date knowledge bases, and so on. It's almost like having a panel of people where each person has his or her own specialty. What machine learning does is learn how to combine all of those specialists into a final confidence score: "I'm going to give an answer with 80 percent confidence."
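
To make that idea concrete, here is a minimal sketch in Python of how specialist scores might be combined into a single confidence. This is not IBM's actual pipeline; the feature values, labels, and the choice of scikit-learn's logistic regression are illustrative assumptions.

```python
# Minimal sketch (not Watson's actual pipeline): combine scores from
# several "specialist" analytics into one confidence via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one candidate answer, each
# column a score from one specialist (textual evidence, geographic
# reasoning, date knowledge, ...). Label 1 means the answer was correct.
X_train = np.array([
    [0.9, 0.7, 0.8],   # strong support from every specialist
    [0.2, 0.1, 0.3],   # weak support
    [0.8, 0.6, 0.9],
    [0.1, 0.4, 0.2],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# For a new candidate answer, the learned model merges the specialists'
# scores into a single confidence, e.g. "80 percent confident."
candidate = np.array([[0.85, 0.65, 0.75]])
confidence = model.predict_proba(candidate)[0, 1]
print(f"Confidence: {confidence:.0%}")
```

The design point is the one Gondek describes: each analytic only has to be good at its own specialty, and the learning step figures out how much to trust each one.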

Where is AI technology headed?

For me, the future is about what you want this technology to do for you. You can train it on other question sets and it can adapt to other domains. So what I'm interested in now is how you take the language knowledge that Watson had and combine it with a broader understanding of the world.

How can this technology be helpful to artists?

One thing I'm really interested in exploring with artists is what the machine is good at that people might not be as good at. You don't want to replace people; you want to take the machine's strengths and combine them with people's strengths. I don't know what the answer is yet, but what would an AI artist's assistant look like?

How are you going to design the upcoming course?

I want to start off by discussing the information that's all around us. We have information, and we have interactive algorithms for manipulating that information. That could be in the context of working with an art program, or playing Angry Birds, or talking to Siri. The class will cover these concepts so students can really understand what it means for something to be digitized and what the machine is doing to understand what you're saying or searching for. To bring it all together, I'd like to have projects where we build simple AI: something you can interact with that can come back with an intelligent result.

How will you critique these students and their AI projects?

One thing I want to convey is that, as a class, we come up with very explicit evaluation metrics. For instance, say I want the AI to tell my mood and show pictures of the mood I'm in. You come up with a metric that shows whether it's successful or not. It's not solely a mathematical exercise, though. There are all of these human factors as well: Is it responding with interesting answers? How fluid is the interaction with it? You can come up with a great scientific solution for something, but if it's hard to use, it doesn't matter. And the engineering or science behind your system might get some things wrong, but how can the interface ameliorate that fault? If you're talking to the computer and it doesn't have a good answer, can it look uncertain? Could it speak more slowly or phrase its answer as a question? And how will the system understand that it did something wrong?
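
As one possible shape for such an explicit metric, here is a short Python sketch scoring the hypothetical mood-detection project against human labels. The mood labels and the choice of plain accuracy are assumptions for illustration, not anything prescribed in the course.

```python
# Sketch of an explicit evaluation metric for a hypothetical
# mood-detection project: compare the system's guesses to human labels.
def accuracy(predicted_moods, true_moods):
    """Fraction of examples where the system's guess matched the label."""
    matches = sum(p == t for p, t in zip(predicted_moods, true_moods))
    return matches / len(true_moods)

# Hypothetical test run: the system guessed four moods; a person labeled
# the same four moments. Three of four match, so accuracy is 75 percent.
predicted = ["happy", "sad", "happy", "calm"]
actual    = ["happy", "sad", "angry", "calm"]
print(f"Accuracy: {accuracy(predicted, actual):.0%}")
```

A number like this captures only the mathematical side; the human factors Gondek lists, such as how fluid or interesting the interaction feels, would need to be judged separately.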

What are you excited and nervous about with this class?

I feel an enormous responsibility being the first scientist-in-residence at SAIC. I would just love it if I could build a buzz here around AI and computer science. There's a common misperception that computer scientists just sit in front of a computer, programming and drinking Mountain Dew in the dark for 12 hours a day. My experience was nothing like that. At IBM I spent the majority of my time talking with other people about ideas and applications. Building AI is really a collaborative research effort: talking with other scientists and hashing out ideas. I hope to bring some of that ethos to this class at SAIC.