Reinforcement Learning Control with Genetic Algorithm and Physical UGV Applications
Artificial intelligence (AI) has become central to robotics, where control is often built on iterative learning algorithms. In general, learning algorithms and proofs of concept are demonstrated in simulation rather than on exact mathematical models of the physical system. Because a deep Q-network (DQN) estimates its parameters from training data, long training runs are needed to approximate an accurate action-value function, and generating such a function directly on a robot would take a substantial amount of time. For this reason, it is important to optimize the neural network structure to achieve the best results. In this thesis, reinforcement learning combined with a genetic algorithm is applied to a simulated unmanned ground vehicle (UGV) learning model, which is then transferred to a physical UGV using the Robot Operating System (ROS) to evaluate the performance of the learned model.
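The idea of using a genetic algorithm to optimize the network structure can be sketched as below. This is a minimal illustration, not the thesis implementation: the architecture encoding (hidden-layer widths), the fitness surrogate, and all numeric parameters are assumptions, and in the real setting the fitness would be the episode return of a DQN trained with each candidate architecture.

```python
import random

random.seed(0)

# Hypothetical search space: candidate hidden-layer widths for the DQN.
WIDTHS = [16, 32, 64, 128]
MAX_LAYERS = 3

def random_arch():
    """Random chromosome: a list of hidden-layer widths."""
    n = random.randint(1, MAX_LAYERS)
    return [random.choice(WIDTHS) for _ in range(n)]

def fitness(arch):
    # Placeholder fitness. In the thesis setting this would be the
    # average episode return of a DQN trained with this architecture;
    # here a toy surrogate simply rewards moderate network capacity.
    return -abs(sum(arch) - 96)  # peak near 96 total hidden units (assumed)

def crossover(a, b):
    """One-point crossover of two layer-width lists."""
    cut = random.randint(0, min(len(a), len(b)))
    return (a[:cut] + b[cut:])[:MAX_LAYERS] or random_arch()

def mutate(arch, rate=0.2):
    """Resample each gene (layer width) with a small probability."""
    return [random.choice(WIDTHS) if random.random() < rate else w
            for w in arch]

def evolve(pop_size=12, generations=20):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]          # keep the fittest third
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("best architecture:", best, "fitness:", fitness(best))
```

Elitism guarantees the best fitness never decreases across generations, which matters when each evaluation is as expensive as a full DQN training run.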