In order to adapt to a rapidly changing and fiercely competitive market environment, manufacturing companies have shifted to a multi-variety, small-batch discrete manufacturing model. As a result, the production process of the workshop has become complex and dynamic, and the probability of unexpected events has greatly increased. At present, the workshop scheduling systems used by discrete manufacturing enterprises are far removed from the actual production situation and are difficult to apply in complex, dynamic settings; it is usually necessary to adjust the scheduling scheme manually. However, the quality of manual adjustment hinges on the knowledge of the dispatcher and often requires a large amount of time and manpower, so the stability of the scheduling scheme and the production efficiency of the workshop are difficult to guarantee. There is therefore an urgent need to address the production workshop's reliance on manual adjustment. At the same time, the development of intelligent manufacturing and digital factories has led to a large amount of data accumulating in the production workshop. In this environment, deep learning and deep reinforcement learning can be used to build self-learning, adaptive workshop scheduling systems on the basis of these data, guiding the handling of the various dynamic problems that arise in the actual production process, freeing workshop scheduling from its dependence on manual adjustment, and achieving dynamic perception and intelligent control of the dispatching workshop. Therefore, aiming at the over-reliance on manual labor in the practical application of workshop scheduling systems, this paper studies the historical and real-time data of the workshop, summarizes manual adjustment experience, and integrates deep learning and deep reinforcement learning into APS technology to give the production system the ability to learn, improving the adaptability and real-time performance of manufacturing scheduling so that the scheduling scheme can better adapt
to the complex and dynamic actual production process. The main research work is as follows:

(1) Through research on hidden (recessive) disturbance scenarios in production scheduling, the manual adjustment operations and their effects in different production scenarios are summarized, and a selection model for manual adjustment methods under hidden disturbances is designed. First, a genetic algorithm is used to simulate the cumulative error time between the ideal scheduling scheme and the actual scheduling scheme, which serves as the trigger point for rescheduling. Second, the production information from the genetic simulation at that moment is recorded as the input of the deep learning model, and the optimal manual adjustment method found by the genetic simulation as the output. Finally, an LSTM network learns the internal relationship between production information and manual adjustment methods during the production process, realizing the selection of manual adjustment methods under hidden disturbances; the feasibility of the algorithm is verified through concrete examples. Applying deep learning in hidden disturbance scenarios makes the scheduling scheme highly applicable and versatile.

(2) Through the study of dominant (explicit) disturbance scenarios in the workshop, deep learning, with its perception ability, is combined with reinforcement learning, with its decision-making ability, and the resulting deep reinforcement learning method is applied to complex, dynamic workshop scheduling. The workshop scheduling problem is treated as a sequential decision problem, with real-time production environment information as the state space, scheduling rules as the action space, and a combination of processing time and maximum completion time (makespan) defining the reward and penalty function. A multi-agent DDPG method is used to train the model, improving update efficiency and automatically matching a better real-time scheduling strategy to the real-time production environment information. Incorporating deep reinforcement learning
into the APS system provides a general framework.

(3) Using standard workshop scheduling test data, the proposed job-shop deep reinforcement learning algorithm is compared against a genetic algorithm, a traditional reinforcement learning algorithm, and single scheduling rules, and its real-time performance and effectiveness are verified in static and dynamic scenarios, respectively. The proposed method reaches the optimal solution with an accuracy of 91.05% in the static environment and 81.28% in the dynamic environment. Compared with a single scheduling rule, the proposed algorithm improves solution quality while ensuring real-time performance; compared with heuristic search algorithms, although its solution quality is slightly lower, its responsiveness and dynamic adaptability are greatly improved. Finally, based on the research content of this paper, a dynamic workshop scheduling visualization platform is built to facilitate use by operators.
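To make the selection model in (1) concrete, the sketch below shows a minimal LSTM forward pass in pure Python that maps a sequence of production-state feature vectors to one of several manual adjustment methods. The method labels, feature dimensions, and random weights are illustrative assumptions only, not the thesis's implementation; a real system would train the weights on the genetic-simulation input/output pairs described above.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (pure Python, illustration only; weights untrained)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]
        # One weight matrix and bias per gate (input, forget, output, candidate),
        # each acting on the concatenation [x; h].
        self.W = {g: mat(n_hidden, n_in + n_hidden) for g in "ifoc"}
        self.b = {g: [0.0] * n_hidden for g in "ifoc"}
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = x + h  # concatenated input and previous hidden state
        def lin(g):
            return [sum(w * v for w, v in zip(row, z)) + bi
                    for row, bi in zip(self.W[g], self.b[g])]
        i = [sigmoid(v) for v in lin("i")]    # input gate
        f = [sigmoid(v) for v in lin("f")]    # forget gate
        o = [sigmoid(v) for v in lin("o")]    # output gate
        g = [math.tanh(v) for v in lin("c")]  # candidate cell state
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

def select_adjustment(cell, sequence, methods):
    """Feed the production-state sequence through the cell, then pick the
    adjustment method whose final hidden activation is largest."""
    h = [0.0] * cell.n_hidden
    c = [0.0] * cell.n_hidden
    for x in sequence:
        h, c = cell.step(x, h, c)
    best = max(range(len(methods)), key=lambda k: h[k])
    return methods[best]

# Hypothetical adjustment methods (labels are illustrative only).
methods = ["right-shift", "machine-substitution", "resequence", "full-reschedule"]
cell = LSTMCell(n_in=3, n_hidden=len(methods))
sequence = [[0.2, 0.5, 0.1], [0.4, 0.3, 0.9]]  # two production-state snapshots
print(select_adjustment(cell, sequence, methods))
```

In practice the final hidden state would feed a trained softmax layer over the method labels; the direct argmax here merely keeps the sketch self-contained.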
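The sequential-decision formulation in (2), with state drawn from the live shop floor, actions chosen from a set of dispatching rules, and reward tied to makespan growth, can be sketched with a toy job-shop simulator. The instance, the three rules, and the reward shaping below are illustrative assumptions; in the thesis a multi-agent DDPG actor would replace the fixed policy and learn which rule to emit for each observed state.

```python
# Hypothetical 3-machine job-shop instance: each job is an ordered list of
# (machine, processing_time) operations.
JOBS = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

# Action space: classic dispatching rules over candidate tuples
# (job, machine, proc_time, ready_time).
RULES = {
    "SPT":  lambda ops: min(ops, key=lambda o: o[2]),  # shortest processing time
    "LPT":  lambda ops: max(ops, key=lambda o: o[2]),  # longest processing time
    "FIFO": lambda ops: min(ops, key=lambda o: o[3]),  # earliest-ready job first
}

def simulate(policy, jobs=JOBS, n_machines=3):
    """At every scheduling step the policy observes the state and returns a
    dispatching rule; the reward penalises any growth of the makespan.
    Returns (makespan, accumulated reward)."""
    machine_free = [0.0] * n_machines
    job_ready = [0.0] * len(jobs)
    next_op = [0] * len(jobs)
    makespan, total_reward = 0.0, 0.0
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        cands = [(j, *jobs[j][next_op[j]], job_ready[j])
                 for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        state = (tuple(machine_free), tuple(next_op))  # observed production state
        rule = policy(state)                           # action = a scheduling rule
        j, m, p, r = RULES[rule](cands)
        start = max(machine_free[m], r)                # respect machine and precedence
        machine_free[m] = job_ready[j] = start + p
        next_op[j] += 1
        new_makespan = max(makespan, start + p)
        total_reward -= new_makespan - makespan        # reward: makespan growth penalised
        makespan = new_makespan
    return makespan, total_reward

makespan, reward = simulate(lambda state: "SPT")  # fixed policy stands in for a learned actor
print(f"SPT makespan: {makespan}, episode reward: {reward}")
```

Because the reward at each step is the negative increment of the makespan, the episode return equals the negative final makespan, so a policy maximising return minimises the maximum completion time, which is exactly the coupling the formulation in (2) relies on.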