Semantic Understanding and Learning for Range-based Indoor Robots (1)
Motivation and scope
Mobile robots have the potential to revolutionise our daily lives and help with many everyday chores. The more a robot understands about the complex world around it, the better it can navigate its environment, use its resources efficiently, complete its tasks, and interact with its users. An ongoing research problem is to help robots better understand the world by interpreting incoming sensor data into semantically meaningful information and using it to build models of the world.
This project will be performed in partnership with Electrolux. In 2017 Electrolux launched its robot vacuum cleaner, the PURE i9. The PURE i9 uses a 3D vision sensor that combines an infra-red (IR) camera with two IR line lasers to build a 3D map of the environment (see below). Electrolux has collected a large body of sensor data and maps from robots in its training environments, which will be made available for algorithm development.
The goal of this project is to develop pose-aware feature representations. The nature of range-based data means that it is highly sensitive to the position from which it was acquired – an object may look very different depending on whether the robot was close to it or far away. This project will investigate whether integrating information about the robot's position at the time each reading was captured into the environment map can improve the ability to recognise places.
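To make the viewpoint sensitivity concrete, the following sketch (an illustrative toy, not Electrolux's system; all function names and geometry are invented here) simulates a 2D line-laser scan of the same wall segment from a near and a far pose, and shows one way a map might store the capture pose alongside each scan:

```python
import math

def scan_ranges(pose, segments, n_beams=8, fov=math.pi/2, max_range=5.0):
    """Cast n_beams rays within fov from pose (x, y, heading) against wall segments."""
    x, y, th = pose
    ranges = []
    for i in range(n_beams):
        a = th - fov / 2 + fov * i / (n_beams - 1)
        dx, dy = math.cos(a), math.sin(a)
        best = max_range
        for (x1, y1), (x2, y2) in segments:
            # Ray-segment intersection via Cramer's rule:
            # solve (x,y) + t*(dx,dy) = (x1,y1) + s*(ex,ey)
            ex, ey = x2 - x1, y2 - y1
            denom = dx * ey - dy * ex
            if abs(denom) < 1e-12:
                continue  # ray parallel to segment
            t = ((x1 - x) * ey - (y1 - y) * ex) / denom
            s = ((x1 - x) * dy - (y1 - y) * dx) / denom
            if t > 0 and 0 <= s <= 1:
                best = min(best, t)
        ranges.append(best)
    return ranges

# A 1 m wide wall segment at y = 2
wall = [((-0.5, 2.0), (0.5, 2.0))]
near = scan_ranges((0.0, 0.0, math.pi / 2), wall)   # 2 m from the wall
far = scan_ranges((0.0, -3.0, math.pi / 2), wall)   # 5 m away: wall barely registers

# A pose-aware map entry keeps the capture pose with each scan,
# so later place recognition can account for the viewpoint.
pose_aware_map = [
    {"pose": (0.0, 0.0, math.pi / 2), "scan": near},
    {"pose": (0.0, -3.0, math.pi / 2), "scan": far},
]
```

From the near pose some beams return short ranges; from the far pose the same wall sits at the sensor's maximum range and effectively disappears, which is exactly the kind of viewpoint dependence a pose-aware representation would have to model.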
This project offers the opportunity to work in partnership with engineers at Electrolux, one of the world’s largest appliance makers and a company that has identified robotic vacuum cleaners as a priority for its home care and small domestic appliances business, with a focus on a best-in-class consumer offering. You will gain experience with robot mapping and navigation by working with and extending an existing SLAM (Simultaneous Localisation and Mapping) system. You will also learn about 3D perception and the implementation of robot perception systems by working with the 3D vision sensor data.