Deep learning for manipulation
Motivation and scope
Deep and end-to-end reinforcement learning usually requires large amounts of training experience, which is expensive to collect for an embodied agent (robot). For this reason, it is suggested that the policy network be decomposed into several parts, some of which can be trained offline, in order to make learning an end-to-end policy feasible. The example use case in this project is a robot arm that manipulates objects with different properties by sliding them on a work surface.
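The decomposition idea can be sketched as two separate modules: a sensor encoder that can be pre-trained offline, and a small policy head that maps the learned embedding to sliding-motion parameters. This is a minimal illustrative sketch in PyTorch; all dimensions, layer sizes, and the random input batch are assumptions, not part of the project specification.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the real sensor and action spaces are project-specific.
SENSOR_DIM = 64   # raw sensor reading (e.g., force/tactile features)
EMBED_DIM = 8     # compact embedding of the sensed object properties
POLICY_DIM = 4    # parameters of a sliding-motion policy

class SensorEncoder(nn.Module):
    """Maps raw sensor data to an embedding; this part can be trained
    offline (e.g., on logged or simulated sensor data)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SENSOR_DIM, 32), nn.ReLU(),
            nn.Linear(32, EMBED_DIM))

    def forward(self, x):
        return self.net(x)

class PolicyHead(nn.Module):
    """Maps the sensor embedding to sliding-motion parameters; this small
    head is what remains to be trained on the physical robot."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM, 16), nn.ReLU(),
            nn.Linear(16, POLICY_DIM))

    def forward(self, z):
        return self.net(z)

encoder, policy = SensorEncoder(), PolicyHead()
sensor_batch = torch.randn(5, SENSOR_DIM)      # stand-in for real readings
action_params = policy(encoder(sensor_batch))  # end-to-end forward pass
print(action_params.shape)  # torch.Size([5, 4])
```

Because the two modules are separate, the encoder's weights can be frozen after offline training, so only the much smaller policy head needs robot experience.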
The task list includes (a) preparing training data and learning a policy space for sliding motions, (b) preparing training data and learning an embedding space for sensor data, and (c) collecting training data and learning a mapping from sensor data to policy. Tasks (a) and (b) are partially done in simulation, while task (c) is done on a robotic platform.
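As one way to realize task (b), the sensor embedding space could be learned offline as the encoder half of an autoencoder trained on simulated sensor logs. The sketch below is an assumption about how this might look, not the project's prescribed method; the dimensions and the random data stand in for real simulation output.

```python
import torch
import torch.nn as nn

# Placeholder sizes and data; real sensor logs would come from the simulator.
SENSOR_DIM, EMBED_DIM = 64, 8
data = torch.randn(256, SENSOR_DIM)  # stand-in for simulated sensor readings

encoder = nn.Sequential(
    nn.Linear(SENSOR_DIM, 32), nn.ReLU(), nn.Linear(32, EMBED_DIM))
decoder = nn.Sequential(
    nn.Linear(EMBED_DIM, 32), nn.ReLU(), nn.Linear(32, SENSOR_DIM))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Standard reconstruction training: the bottleneck forces the encoder to
# compress sensor data into the EMBED_DIM-dimensional embedding space.
for epoch in range(50):
    recon = decoder(encoder(data))
    loss = nn.functional.mse_loss(recon, data)
    opt.zero_grad()
    loss.backward()
    opt.step()

embedding = encoder(data).detach()
print(embedding.shape)  # torch.Size([256, 8])
```

The trained encoder would then be reused in task (c), where sensor embeddings collected on the robot are mapped to points in the policy space learned in task (a).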
Prerequisites
Experience using a deep learning framework, a basic understanding of deep learning, programming a robot simulator, and programming a robot in ROS.