In early 2019, the Google Creative Lab partnered with Bill T. Jones, a pioneering choreographer, two-time Tony Award winner, MacArthur Fellow, National Medal of Arts honoree, and artistic director and co-founder of the Bill T. Jones/Arnie Zane Company, the resident company of New York Live Arts. We teamed up to explore the creative possibilities of speech recognition and PoseNet, Google's machine-learning model that estimates human poses in real time in the browser.

We sat down with Bill to hear his reflections on working at the intersection of art, technology, identity, and the body. You can try out the experiments and watch a short film about the collaboration.

Why did you collaborate with Google on AI experiments?

The idea of machine learning intrigues me. The theme of our company’s Live Ideas Fest this year is artificial intelligence. AI is supposed to take us into the next century and important things are supposed to be happening with this technology, so I wanted to see if we could use it to stir real human emotion. Maybe it’s ego, but I want to be the one to know how to use PoseNet to make somebody cry. How do you get the technology to be weighted with meaning and import?

How have you experimented with technology over the course of your career?

Back in the ‘80s, Arnie Zane [Jones’s partner and company co-founder] and I decided we didn’t want to work with technology anymore because the pure art of sweat and bodies on stage should be enough. Technology just steals your thunder. Then a friend said, “Technology can suggest the beyond. Technology can project what is at stake when you die. When you see these figures, they’re no longer human, they’re something else.” So we started working with more state-of-the-art technologies. Later, I did a project called “Ghostcatching” with 3D motion capture. At that time, the team was saying, “We want to capture your movement so that in 50 years we can reconstitute your performance.” That’s how people were thinking years ago, and it seems to still be a preoccupation now. They said they wanted to “decouple me from my personality.” Maybe I’m romantic, but I don’t think that’s possible. So my focus with this project was not on how to replace the performer, but how to complement them.

What was it like experimenting with AI?

I’ve never collaborated with a machine before. It’s a whole other learning curve. We are taught in the art world that you don’t get many chances. This experience contrasted with that notion. It was refreshing to co-create with the Google team, whose approach was playful and iterative.

Were there moments you felt this technology was in the service of dance?

In the service of dance? I say this with great respect: it’s almost antithetical to everything I thought dance was. The webcam’s field of vision determines a lot about how we move. Dance for us often happens in an empty room that implies infinite space. But working with a webcam, there is a very prescribed space. Limitations are not bad in art making, but they were a new challenge. It was a shift creating something for the screen and not the stage.

What was it like shifting from creating for the stage to the screen?

I felt like I was being asked: Come out of the place that you as an artist come from, the avant-garde. Come and work with a medium that’s available to millions of people. That’s wonderful, but it’s also a responsibility. The meaningful things people make with this are going to be very weird in a way, aren’t they? Very kind of exciting. I’m appreciative of being part of the development of this.

Where do you see AI going? Will you work with it more in the future?

I understand context is the next frontier in machine learning. This seems paramount for art making. I hope one day soon they make a machine I can dance with. I’d like to dance with a machine, just to see what that’s like.