Viewpoint invariant manipulation for visually indicated goal reaching with a physical robotic arm. We learn a single policy that can reach diverse goals from sensory input captured from drastically different camera viewpoints. First row shows the visually indicated goals.

How can we feasibly provide the robot with enough experience to learn self-adaptive behavior from purely visual observations, in a way that simulates a lifelong learning paradigm?

How can we design a model that integrates robust perception with self-adaptive control so that it transfers quickly to unseen environments?

Visually indicated goal reaching task with a physical robotic arm and diverse camera viewpoints.

We use domain randomization to learn generalizable policies in simulation.
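As a minimal sketch of domain randomization for viewpoint invariance, each simulated training episode can sample a fresh camera pose (and other scene parameters) from broad ranges, so the policy never overfits to a single viewpoint. The parameter names and ranges below are illustrative assumptions, not the exact values used in this work.

```python
import random

def sample_camera_pose(rng=random):
    """Sample a randomized camera viewpoint for one training episode.

    Ranges are illustrative assumptions; a real setup would tune them
    to cover the viewpoints expected at test time.
    """
    return {
        "azimuth_deg": rng.uniform(-60.0, 60.0),    # yaw around the workspace
        "elevation_deg": rng.uniform(15.0, 75.0),   # camera height angle
        "distance_m": rng.uniform(0.8, 1.6),        # distance to the arm base
        "light_intensity": rng.uniform(0.5, 1.5),   # lighting is randomized too
    }

def randomized_episode_configs(n_episodes):
    """Generate one randomized scene configuration per training episode."""
    return [sample_camera_pose() for _ in range(n_episodes)]
```

Because every episode sees a different viewpoint, the learned policy must rely on viewpoint-invariant features of the scene rather than a fixed camera geometry.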

Viewpoint invariant manipulation for visually indicated goal reaching with a simulated seven-DoF robotic arm. We learn a single policy that can reach diverse goals from sensory input captured from drastically different camera viewpoints.

Real-world robot and moving camera setup. First row shows the scene arrangements and the second row shows the visual sensory input to the robot.