Training an AI for mapless robot navigation
Researchers from Digital Creativity Labs and the Department of Computer Science have developed an AI for mobile robot navigation using a videogame development and simulation tool.
The project is part of the Assuring Autonomy International Programme (AAIP), a £12 million initiative funded by Lloyd's Register Foundation and the University of York to address global challenges in assuring the safety of robotics and autonomous systems.
Until very recently, AI systems used in mobile robots, such as unmanned aerial vehicles (drones), have been task-specific, learning through fusing data from multiple sources. This study uses data from on-board sensors to guide the robot to the site of the problem.
Using the Unity ML-Agents Toolkit, researchers developed an AI that learns in simulation. This provides a cost-effective and straightforward way to transfer the learned navigator to the real world.
The AI is trained by deep reinforcement learning, learning by trial and error. Curriculum learning (starting with a simple task and gradually increasing its complexity as learning progresses) teaches it first to move in a straight line and then to navigate around increasingly complex arrangements of obstacles.
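The idea behind such a curriculum can be sketched as a simple training loop that promotes the agent to a harder lesson once it masters the current one. The lesson definitions, promotion threshold, and toy success model below are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical difficulty levels: number of obstacles in the training grid.
# Lesson 0 is the simplest task (fly in a straight line, no obstacles).
CURRICULUM = [0, 2, 4, 8, 16]
PROMOTION_THRESHOLD = 0.9   # advance once 90% of recent episodes succeed
WINDOW = 100                # episodes averaged per promotion check

def run_episode(num_obstacles: int) -> bool:
    """Stand-in for one simulated navigation episode.
    Returns True if the agent reaches the goal without a collision.
    Toy model only: success gets harder as obstacles are added."""
    return random.random() < 1.0 - 0.04 * num_obstacles

def train(total_episodes: int = 5000) -> int:
    """Run episodes, promoting to harder lessons as performance improves.
    Returns the final lesson index reached."""
    lesson, results = 0, []
    for _ in range(total_episodes):
        results.append(run_episode(CURRICULUM[lesson]))
        results = results[-WINDOW:]
        # Promote when the recent success rate clears the threshold.
        if (len(results) == WINDOW
                and sum(results) / WINDOW >= PROMOTION_THRESHOLD
                and lesson < len(CURRICULUM) - 1):
            lesson += 1
            results = []   # restart the measurement window for the new lesson
    return lesson

final_lesson = train()
```

In practice the lesson schedule would be driven by the training framework's own curriculum machinery rather than a hand-rolled loop, but the promote-on-success structure is the same.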
The AI needs to know only its immediate vicinity, and so it navigates without a map, using local information. The system incorporates a memory (an LSTM neural network) so that it can remember where it has been and does not get stuck.
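The data flow of such a mapless, memory-equipped policy can be sketched as a single decision step: local sensor readings go in, a recurrent hidden state carries the memory, and an action comes out. The dimensions, random weights, and simplified recurrence below are illustrative assumptions standing in for the trained LSTM network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 8 local range readings plus a goal bearing,
# a 32-unit recurrent state, and 4 movement actions.
N_OBS, N_HID, N_ACT = 9, 32, 4

# Randomly initialised weights stand in for a trained recurrent policy.
W_in = rng.standard_normal((N_HID, N_OBS)) * 0.1
W_rec = rng.standard_normal((N_HID, N_HID)) * 0.1
W_out = rng.standard_normal((N_ACT, N_HID)) * 0.1

def policy_step(local_obs, hidden):
    """One decision step. The agent sees only its immediate vicinity
    (local_obs); the recurrent hidden state is its memory of where it
    has already been, which helps it avoid circling back into dead ends."""
    hidden = np.tanh(W_in @ local_obs + W_rec @ hidden)   # update memory
    action = int(np.argmax(W_out @ hidden))               # choose a move
    return action, hidden

hidden = np.zeros(N_HID)             # empty memory at the start of a run
obs = rng.random(N_OBS)              # stand-in local sensor reading
action, hidden = policy_step(obs, hidden)
```

An LSTM adds gating (input, forget, and output gates) on top of this plain recurrence, but the key point is the same: no global map is ever built, only a compact learned memory of the recent past.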
The trained AI was tested in simulated real-world scenarios and can navigate grids and obstacle layouts of varying size and complexity. Researchers sought assurance across three critical areas to meet a defined safety requirement: the system shall never plan a move that leads to a collision with an obstacle. Assurance of the training process, the learned model, and the overall performance of the drone was rigorously assessed.
The navigation model could be deployed in a number of domains, including detecting chemical and gas leaks, monitoring forest fires, disaster monitoring, and search and rescue.
It could also guide autonomous ground vehicles, such as parcel or food delivery robots, through complex and dynamic environments. The same AI navigator could steer autonomous underwater vehicles used for maintenance, and could even be used in video games to navigate characters.
For each of these new domains, the algorithm would remain the same; the only change needed is to select suitable sensors and data to provide the local navigation information required as inputs.
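That separation between a fixed navigation algorithm and swappable, domain-specific inputs can be sketched as an interface. The sensor classes and hard-coded readings below are purely illustrative, and the decision rule is a trivial stand-in for the learned policy:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Domain-specific sensing; the navigator itself never changes."""
    @abstractmethod
    def local_readings(self) -> list:
        """Return normalised readings describing the immediate vicinity."""

class GasConcentrationSensor(Sensor):
    """Hypothetical input for a leak-detection drone."""
    def local_readings(self) -> list:
        return [0.1, 0.3, 0.9, 0.2]   # e.g. concentrations in four directions

class SonarSensor(Sensor):
    """Hypothetical input for an underwater maintenance vehicle."""
    def local_readings(self) -> list:
        return [0.8, 0.8, 0.1, 0.5]   # e.g. echo strengths in four directions

def navigate_step(sensor: Sensor) -> int:
    """Same decision logic regardless of domain: here, a toy rule
    that heads toward the strongest local reading."""
    readings = sensor.local_readings()
    return readings.index(max(readings))
```

Retargeting the navigator to a new domain then means implementing one new `Sensor`, while `navigate_step` (in reality, the trained model) stays untouched.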
Read the full paper: Hodge, V.J., Hawkins, R. & Alexander, R. Deep reinforcement learning for drone navigation using sensor data. Neural Computing and Applications (2020).