Maritime Navigation and Contact Avoidance through Reinforcement Learning

Restricted (Penn State Only)
Author:
Davis, Steven Lee
Area of Honors:
Computer Engineering
Degree:
Bachelor of Science
Document Type:
Thesis
Thesis Supervisors:
  • John Phillip Sustersic, Jr., Thesis Supervisor
  • Vijaykrishnan Narayanan, Honors Advisor
Keywords:
  • reinforcement learning
  • machine learning
  • maritime
  • unmanned underwater vehicle
  • autonomous underwater vehicle
  • navigation
  • Q-learning
  • autonomous
Abstract:
This thesis explores the potential for applying reinforcement learning to provide autonomous navigation and contact avoidance for an unmanned underwater vehicle. Reinforcement learning for the navigation of land vehicles is a major area of interest, but few works explore these techniques in a maritime setting, where vehicle control and sensing differ substantially. Additionally, previous works in the maritime setting have focused mainly on control systems or have relied on potentially unrealistic sensor information. Using only relational measurements, this thesis explores deep Q-learning, experience replay, and reward shaping to achieve autonomous navigation and contact avoidance. It demonstrates the potential of these reinforcement learning algorithms by successfully training a simulated underwater vehicle to navigate to its objective without detection by enemy contacts.
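
For illustration only, the core update behind the techniques named in the abstract can be sketched as below. This is a minimal, hypothetical Python/PyTorch sketch of deep Q-learning with experience replay and a shaped reward; the state encoding, network shape, action set, and reward terms are assumptions chosen for the example and are not taken from the thesis.

    # Minimal deep Q-learning sketch with experience replay and reward shaping.
    # All sizes and reward terms below are illustrative assumptions.
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM = 4   # e.g., relative bearing/range to objective and nearest contact (assumed)
    N_ACTIONS = 3   # e.g., turn left, hold course, turn right (assumed)
    GAMMA = 0.99    # discount factor

    # Q-network: maps a state vector to one Q-value per action.
    q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    replay_buffer = deque(maxlen=10_000)  # experience replay memory

    def shaped_reward(dist_to_goal, prev_dist, detected):
        """Reward shaping: small bonus for closing on the objective,
        large penalty if detected by a contact (illustrative terms)."""
        r = prev_dist - dist_to_goal      # progress toward the objective
        if detected:
            r -= 10.0                     # detection penalty
        return r

    def store(state, action, reward, next_state, done):
        """Append one transition to the replay buffer."""
        replay_buffer.append((state, action, reward, next_state, done))

    def train_step(batch_size=32):
        """One deep Q-learning update on a random minibatch from the buffer."""
        if len(replay_buffer) < batch_size:
            return
        batch = random.sample(replay_buffer, batch_size)
        s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
        s, s2 = s.float(), s2.float()
        r, done = r.float(), done.float()

        # Q(s, a) for the actions actually taken.
        q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        # Bootstrapped target: r + gamma * max_a' Q(s', a'), zeroed at terminal states.
        with torch.no_grad():
            target = r + GAMMA * (1.0 - done) * q_net(s2).max(dim=1).values

        loss = nn.functional.mse_loss(q_sa, target)  # temporal-difference error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()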