Zero-Shot Visual Imitation

University of California, Berkeley * equal contribution

The current dominant paradigm of imitation learning relies on strong supervision of expert actions to learn both what and how to imitate.



We propose an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its own experience into a goal-conditioned skill policy using a novel forward consistency loss formulation. In our framework, the role of the human expert is only to communicate goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after observing just a visual demonstration. Our method is "zero-shot" in the sense that the agent never has access to expert actions either during training or for task demonstration at inference.



We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation using a Baxter robot and navigation in previously unseen office environments using a TurtleBot.

Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance.

Source Code and Environment

We have released the TensorFlow-based implementation on the GitHub page. Try our code!

Goal-Conditioned Skill Policies (GSP)

We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. The key insight is that, for most tasks, reaching the goal is more important than how it is reached.
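A minimal toy sketch of the forward-consistency idea (illustrative only, not the released TensorFlow model; the linear dynamics, weight matrix `W`, and function names are assumptions for the example): rather than penalizing the policy's predicted action for differing from the action taken during exploration, the loss penalizes the difference between the *states* those actions lead to under a learned forward model. Two different actions that reach the same state incur no loss.

```python
import numpy as np

def forward_model(state, action, W):
    """Hypothetical linear forward dynamics: predicts the next state
    from the current state and an action (toy stand-in for a learned model)."""
    return W @ np.concatenate([state, action])

def forward_consistency_loss(state, pred_action, true_next_state, W):
    """Penalize reaching a different next state, not taking a different action."""
    pred_next = forward_model(state, pred_action, W)
    return float(np.sum((pred_next - true_next_state) ** 2))

# Example: two distinct actions that happen to reach the same next state.
state = np.array([1.0, 0.0])
W = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 0.0]])
a_policy  = np.array([0.5, 0.5])   # action predicted by the skill policy
a_explore = np.array([1.0, 0.0])   # action taken during exploration
next_state = forward_model(state, a_explore, W)

# Forward-consistency loss is zero even though the actions differ,
# whereas a plain action-matching loss would penalize the policy here.
loss = forward_consistency_loss(state, a_policy, next_state, W)
```

This illustrates why the formulation is robust to multimodality in how a goal can be reached: the policy is free to pick any action as long as it brings the agent to the same state.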

Rope Manipulation





The rope manipulation training data was reused from our ICRA 2017 paper, "Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation", available here

Visual Navigation

Left half: view as seen by the TurtleBot

Right half: goal image (or landmark image) shown to the TurtleBot

Note that all of these results are in previously unseen environments.

Goal finding via single image [unseen environment]

Visual navigation via landmark waypoints [unseen environment]

Failure Examples: Visual navigation via landmarks

Paper and BibTeX [Paper] [arXiv] [Slides] Citation



Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros and Trevor Darrell. Zero-Shot Visual Imitation

In ICLR 2018. [BibTeX]

@inproceedings{pathakICLR18zeroshot,
  Author = {Pathak, Deepak and Mahmoudieh, Parsa and Luo, Guanghao and Agrawal, Pulkit and Chen, Dian and Shentu, Yide and Shelhamer, Evan and Malik, Jitendra and Efros, Alexei A. and Darrell, Trevor},
  Title = {Zero-Shot Visual Imitation},
  Booktitle = {ICLR},
  Year = {2018}
}