Ashok Goel needed help. In his regular courses at Georgia Tech, the computer science professor had at most a few dozen students. But his online class had 400 students — students based all over the world; students who viewed his class videos at different times; students with questions. Lots and lots of questions. Maybe 10,000 questions over the course of a semester, Goel says. It was more than he and his small staff of teaching assistants could handle.

“We were going nuts trying to answer all these questions,” he says.


And there was another problem. He was worried that online students were losing interest over the course of the term. It was a well-founded concern: According to the data of educational researcher Katy Jordan, fewer than 15 percent of students complete a Massive Open Online Course (MOOC) in which they’ve enrolled.

It so happens that Goel is an expert in artificial intelligence. In fact, the course he was teaching, Computer Science 7637, is titled Knowledge-Based Artificial Intelligence. It occurred to him that perhaps what he needed was an artificially intelligent teaching assistant—one that could handle the routine queries, while he and his human TAs focused on the more thoughtful, creative questions. Personal attention is so important in teaching; what if they could give personal attention at scale?

Enter Jill Watson.

Jill Watson is the AI that Goel conceived. She lives in Piazza, the online Q&A platform used by Georgia Tech. It’s a utilitarian message board laid out like Microsoft Outlook: questions and topics run down the lefthand column, and each one opens to a threaded conversation on the right. Jill assists students in both Goel’s physical class, which has about 50 students, and the more heavily attended online version.

The questions she takes are routine but necessary, such as queries about proper file formats, data usage, and the schedule of office hours — the types of questions that have firm, objective solutions. The human TAs handle the more complex problems. At least for now: Goel is hoping to use Jill as the seed of a startup, and if she’s capable of more, he’s keeping the information under wraps because of “intellectual property issues,” he says.

Jill’s existence was revealed in April, at the end of her first semester on the job. But students are largely still unaware that she’s pitching in. For this current fall semester, she’s been operating under a pseudonym — as are most of the other TAs, so they can’t be Googled by curious students who want to figure out who the robot is.

“I haven’t been able to tell,” says Duri Long, a student in Goel’s physical class, to the nods of her friends. “I think if you can’t tell, it’s pretty effective, and I think it’s a good thing, because people can get help more rapidly.”

Help is the goal in this age of artificial intelligence. Apple’s Siri gets new capabilities with every OS; Amazon’s Alexa is primed to run your home. Tesla, Google, Microsoft, Facebook: all have made major investments in AI systems that take over tasks once handled by humans. IBM’s Watson, which won a “Jeopardy!” tournament, has been tapped for medical and consumer applications and recently had a hit song.

Jill might be considered a grandchild of Watson. Her foundation was built with Bluemix, an IBM platform for developing apps using Watson and other IBM software. (Goel had an established relationship with the company.) He then uploaded four semesters’ worth of data — 40,000 questions and answers, along with other Piazza chatter — to begin training his AI TA. Her name, incidentally, came from a student project called “Ask Jill,” out of the mistaken belief that IBM founder Thomas Watson’s wife’s name was Jill. (Mrs. Watson’s name was actually Jeannette.)

Jill wasn’t an instant success. Based on the initial input, her early test versions gave not only incorrect answers but “strange answers,” Goel recalled at a TEDx talk in October. In one case a student asked about a program’s running time; Jill responded by telling the student about design.

That wouldn’t do. “We didn’t want to cause confusion in the class, with Jill Watson giving some answers correctly and some answers incorrectly,” Goel said. The team created a mirror version of the live Piazza forum for Jill so that it could observe her responses and flag her errors, to help her learn. Tweaking the AI was “almost like raising a child.”

Eventually, the bugs were ironed out. Then came a breakthrough, what Goel calls a “secret sauce” (part of the intellectual property he’s coy about). It included not only Jill’s memory of previous questions and answers, but also the context of her interactions with students. Eventually, Jill’s answers were 97 percent accurate. Goel decided she was ready to meet the public — his students.
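Goel keeps Jill’s actual mechanism under wraps, but the general shape of a memory-plus-threshold question answerer — respond only when a new question closely matches one already answered, and defer to humans otherwise — can be sketched in a few lines. Everything here (the stored questions, the word-overlap similarity measure, the threshold) is an illustrative assumption, not Goel’s method:

```python
# A minimal sketch of a retrieval-based TA: answer a new question only when
# it strongly resembles a previously answered one; otherwise defer to a human.

def tokenize(text):
    """Split a question into a set of lowercase words."""
    return set(text.lower().split())

def similarity(q1, q2):
    """Jaccard overlap between the word sets of two questions (0.0 to 1.0)."""
    a, b = tokenize(q1), tokenize(q2)
    return len(a & b) / len(a | b) if a | b else 0.0

# Memory of previously answered questions (hypothetical examples).
MEMORY = {
    "what file format should the assignment use": "Submit a PDF.",
    "when are office hours held": "Office hours are Tuesdays at 3 pm.",
}

def answer(question, threshold=0.5):
    """Return a stored answer if a remembered question is similar enough."""
    best_q = max(MEMORY, key=lambda q: similarity(question, q))
    if similarity(question, best_q) >= threshold:
        return MEMORY[best_q]
    return None  # below threshold: leave it for a human TA

print(answer("what file format should i use for the assignment"))  # Submit a PDF.
print(answer("can you explain case-based reasoning")) # None -> human TA
```

A system like this fails safe: when it isn’t confident, it stays silent rather than risk the “strange answers” of Jill’s early test versions.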

Jill was introduced in January for the spring 2016 online class. For most of the semester, the students were unaware that the “Jill Watson” responding to their queries was an AI. She even answered questions with a touch of personality. For example, one student asked if challenge problems would include both text and visual data. “There are no verbal representations of challenge problems,” Jill responded correctly. “They’ll only be run as visual problems. But you’re welcome to write your own verbal representations to try them out!” (Yes, Jill used an exclamation point.)

At the end of the semester, Goel revealed Jill’s identity. The students, far from being upset, were just as pleased as the instructors. One called her “incredibly cool.” Another wanted to ask her out for dinner.

Perhaps only Jill was unmoved. Her response to the student’s request for a date was a blank space: literally, no comment.

For all of Jill’s programming, Goel says there was a distinctly human element that made her better: his own experience. “Because this is a course I’ve been teaching for more than a decade, I already knew it intimately,” he says. “I had a deep familiarity with it. The deep familiarity and the presence of data — that helped a lot.”

Goel’s Yoda-like demeanor, and his gift for teaching, are on display in his physical class. CS 7637 is held in a small auditorium in the Klaus Advanced Computing Building, a semi-circular glass-and-brick structure that looks like a chunk of flying saucer that was dropped into the middle of the Georgia Tech campus. But if the building is futuristic, the auditorium where Goel teaches is just the opposite: a couple hundred seats, arrayed between walls hung with beige and gray baffles, facing a long series of whiteboards. It’s a room denuded of technological dazzle, a perfect place for Goel’s unruffled instruction.

On a Monday in November, Goel wants his students to ponder the qualities of a teacup. Behind him, on two large screens, are some of the properties of the item, connected by arrows: object is → cup, object has → bottom, object is made of → porcelain.

He observes that other objects have some of the same qualities. A brick, for example, has a flat bottom. A briefcase is liftable and has a handle. So how does a machine sift through these qualities and logically determine what a teacup is? Humans, with their powers of memory and perception, can visualize a teacup instantly, Goel says. But a robot doesn’t come instantly “bootstrapped,” as he puts it, with all the knowledge it will need.

“A robot must prove to itself that this object has the characteristics of an object that can be used as a cup,” he tells the class. Moreover, aided by input, it must also use creativity and improvisation to determine what the object is; after all, just because humans may deduce logically doesn’t mean they use formal logic in their minds.
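The frame-based reasoning Goel is describing — an object “proving” it fits a concept by satisfying that concept’s required properties — can be sketched briefly. The property names and objects below are illustrative stand-ins, not Goel’s actual lecture materials:

```python
# A toy frame matcher: a concept is a set of required properties, and an
# object fits the concept only if it satisfies every one of them.

CUP_CONCEPT = {
    "has_flat_bottom": True,   # so it can rest on a table
    "is_liftable": True,       # so it can be raised to the mouth
    "holds_liquid": True,      # so it can contain tea
}

def matches_concept(obj_properties, concept):
    """Return True if the object satisfies every requirement of the concept."""
    return all(obj_properties.get(prop) == required
               for prop, required in concept.items())

teacup = {"has_flat_bottom": True, "is_liftable": True,
          "holds_liquid": True, "is_made_of": "porcelain"}
brick = {"has_flat_bottom": True, "is_liftable": True, "holds_liquid": False}
briefcase = {"has_flat_bottom": True, "is_liftable": True,
             "has_handle": True, "holds_liquid": False}

for name, obj in [("teacup", teacup), ("brick", brick), ("briefcase", briefcase)]:
    print(name, matches_concept(obj, CUP_CONCEPT))  # only teacup prints True
```

The brick and the briefcase share some of the teacup’s qualities — a flat bottom, liftability — but fail on the property that matters, which is exactly the sifting problem Goel poses to his class.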

Goel is a classroom veteran, having joined Georgia Tech’s faculty in 1989. Teaching runs in his family: a native of Kurukshetra, an Indian city known as an ancient learning center, he’s the son of a physics professor and the grandson of a primary school teacher. Despite his research responsibilities, Goel welcomes the opportunity to teach.

“I enjoy both research and education. Fortunately, for me the two are intertwined,” he says. “In one direction, some of my research is driven by issues of learning, and in the other, I apply results from my research to teaching. Thus, my classroom is also a research laboratory for me.”

He relishes the human connection with students — a bond that’s the holy grail of teaching. Or as Christopher Michaud, a computer science teacher who took Goel’s online course, puts it, “Teaching is a human activity, and fundamentally it’s about forming bonds with your students. A machine can’t do that. A machine can’t love the students.”

Goel envisions Jill Watson as the basis of a startup. He’s not alone in seeing AI as both a promising and a lucrative tool in education.

Education is big business, after all. In 2015, more than 35 million students signed up for a college-level MOOC, according to Class-Central.com. That’s more than double the number from 2014.

IBM Watson has entered into partnerships with Sesame Workshop, Apple and the education company Pearson to spread the AI gospel to primary and secondary schools, and other companies are jumping in. Amy Ogan, a computer and learning sciences professor at Carnegie Mellon, says that Amazon, intrigued that children were using Alexa as a tutor, is ramping up its efforts. (Amazon did not comment.)

From Pearson’s perspective, “The goal is to enable better teaching and help reach every student where they are,” says Tim Bozik, the education and media company’s president of global product. Pearson has been developing AI approaches with IBM Watson for the past two years; it expects to make its first releases, for higher ed, in 2017.

In a demo of the Pearson/Watson technology, a student reading an online version of a psychology textbook can click on a floating Watson symbol at any time. At the end of each section in the book, Watson opens a dialogue with the student to check her comprehension. If the student expresses uncertainty or resistance, Watson may hint at or prompt the answers. Then there’s a quiz. Once again, if the student is confused, Watson opens a dialogue and reiterates the points of the text. The Watson AI isn’t foolproof (it can’t compel the student to stay in the dialogue), but it can surface insights a teacher might otherwise miss.

Pearson is introducing the technology in a handful of American schools. In years to come, it and other firms expect to spread the use of AI tutoring to underserved communities in the US and other countries. This is a topic close to Goel’s heart: In the US alone, at least 30 million people are functionally illiterate; worldwide, that figure is close to 800 million.

For that reason, AI theorists aren’t concerned that agents will take human teachers’ jobs. What the agents are intended to do is assist them and improve learning in general. Those who worry that AI might replace teachers, Goel says, should look at the other side of the coin. “It’s not a question of taking human jobs away. It’s a question of reaching those segments of the population that don’t have human teachers,” he says. “And those are very large segments.”

But first, AI instructors have a lot to learn. A colleague of Carnegie Mellon’s Ogan tested an AI model in urban, suburban, and rural settings; it was most effective in the suburban context, but less so in urban and rural areas. And different students require different strategies. A human teacher who deliberately makes errors may help her students solve problems; an AI version, without the same kind of connection, may fail. In one study, Ogan and her colleagues created a teachable agent named Stacy, designed to solve linear equations when interacting with children. The hope was that students would respond to Stacy’s mistakes and realize their own. Instead, students struggled even more. Some lost interest entirely.

“This is one of those places in which the AI is going to have to get smarter in terms of learning from the students when it’s doing something that doesn’t make sense,” Ogan says. Otherwise, AI is no better than those frustrating phone trees you get when you call the cable company. Ogan says there are ways to detect what the students respond to, using cameras and software to detect facial expressions, but it’s very much a work in progress.

Perhaps more of a concern is what will happen to the data. At the primary and secondary school level, there’s already plenty of contention over Common Core and local control. In an era when information is the coin of the realm, who’s to say the system won’t be abused?

Goel admits some concerns. After all, the identity of Jill Watson wasn’t revealed until the end of the spring semester. Until then, his students were essentially part of a big experiment: If you know you’re dealing with an AI, does it fundamentally alter interaction?

“It’s an uncharted question,” he says. He’s seen a lot of interest from social scientists, he adds, though he hasn’t yet established any partnerships. After all, the original purpose was just to ease TA workloads.

Meanwhile, an improved Jill continues her work under a pseudonym. Goel and the TAs still don’t want students knowing when they’re getting the AI, so all but two TAs are working under pseudonyms as well. She continues to do an excellent job, the human TAs say.

But she’s not ready to teach — or even take on all the responsibilities of a human TA. “To capture the full scope of what a human TA does, we’re not months away or years away. We’re decades, maybe centuries away, at least in my estimation,” Goel says. “None of us (AI experts) thinks we’re going to build a virtual teacher for 100 years or more.”

Those questions, unlike the ones in his class, will have to wait.

Creative Art Direction by Redindhi Studio

Illustrations by Giacomo Gambineri