Optimization of Multi-Objective Resource Scheduling in Cloud Manufacturing Environment via Integrating Reinforcement Learning and Deep Neural Network
Cloud manufacturing (CMfg), inspired by cloud computing, has emerged as a service-oriented manufacturing paradigm offering on-demand services over the internet. In this context, resource scheduling has become a critical aspect, calling for innovative approaches to resource allocation and decision-making. In this dissertation, the multi-objective scheduling model for cloud manufacturing is mathematically formulated as a mixed integer linear programming (MILP) problem, and a preemptive fuzzy goal programming (FGP) approach is employed to solve it, incorporating linguistic terms as an "importance factor." Afterward, a novel integrated algorithm combining reinforcement learning (RL) and deep neural networks is developed and implemented on a cloud manufacturing system with real-world manufacturing data. Three RL-based algorithms, namely State-Action-Reward-State-Action (SARSA), Q-learning, and Deep Q-Network (DQN), are proposed and implemented to effectively solve the resource scheduling problem in the context of cloud manufacturing. To evaluate the implemented algorithms, SARSA, Q-learning, and DQN are compared on several specific metrics: average reward per episode, completion time, total cost, reliability, and the effect of the "importance factor" on the optimal objective functions. Overall, the results indicate that DQN is the most effective in maximizing the average reward per episode, yields more stable results, and provides a promising approach for scheduling tasks in CMfg. The results of this study can be used to improve the performance of CMfg systems and make them more responsive to the needs of customers.