Reinforcement Learning Control with Genetic Algorithm and Physical UGV Applications

dc.contributor.advisorJamshidi, Mo
dc.contributor.authorPerez, Edgar M.
dc.contributor.committeeMemberJamshidi, Mo
dc.contributor.committeeMemberGrigoryan, Artyom
dc.contributor.committeeMemberBenavidez, Patrick
dc.descriptionThis item is available only to currently enrolled UTSA students, faculty, or staff.
dc.description.abstractArtificial intelligence (AI) has become a central topic in robotics, as it is built on iterative algorithms. In general, simulations of physical models without closed-form mathematical descriptions are used to demonstrate the outcomes of learning algorithms or to serve as proofs of concept. Since a deep Q-network (DQN) is built from parameter estimates derived from the training data, it must iterate for a long time to learn an accurate classification function; consequently, a robot would need a substantial amount of time to learn such a function. For this reason, it is important to optimize the neural network structure to achieve the best results. In this thesis, reinforcement learning combined with a genetic algorithm is applied to a simulated unmanned ground vehicle (UGV) learning model, which is then transferred to a physical UGV using the Robot Operating System (ROS) to test the performance of the trained model.
dc.description.departmentElectrical and Computer Engineering
dc.format.extent48 pages
dc.subjectDeep Q-Network
dc.subjectGenetic Algorithm
dc.subjectReinforcement Learning
dc.subjectRobot Operating System
dc.subjectUnmanned Ground Vehicle
dc.subject.classificationComputer science
dc.subject.classificationArtificial intelligence
dc.titleReinforcement Learning Control with Genetic Algorithm and Physical UGV Applications
dcterms.accessRightspq_closed
thesis.degree.departmentElectrical and Computer Engineering
thesis.degree.grantorUniversity of Texas at San Antonio
thesis.degree.nameMaster of Science


Original bundle: 1 file, 2.32 MB, Adobe Portable Document Format