
The Research On Big Data Technology Of Fault Equipment Based On Cloud Computing

Posted on: 2020-07-06
Degree: Master
Type: Thesis
Country: China
Candidate: D B Hong
Full Text: PDF
GTID: 2428330572971240
Subject: Electronic and communication engineering
Abstract/Summary:
With the rapid development of science and technology, intelligent equipment is evolving rapidly. Analysis of equipment operating status now involves many sensor nodes, high sampling frequencies, and long analysis periods, so the volume of data generated by equipment keeps growing, even reaching the terabyte level. Traditional analysis techniques can no longer handle data at this scale, so big data analysis technology must be adopted. Big data often hides knowledge rules and associations that cannot be judged or identified effectively from a limited volume of data; only big data analysis technology can mine and interpret it further [12]. Cloud computing provides large quantities of low-cost computing and storage resources, is easy to deploy, and scales well, so it can meet the software and hardware needs of big data analysis. It is therefore of great practical significance to integrate cloud computing and big data technologies effectively and give full play to their respective advantages in equipment fault analysis [3].

This paper focuses on the existing problems of equipment fault analysis and, taking big data analysis as the starting point, conducts in-depth research on big data processing technology. Using the failure analysis of wind turbine equipment at a wind farm in China as an example, a big data platform for equipment failure analysis is designed and implemented to provide data and technical support for developers. Following a layered approach to system design, the overall architecture of the computing platform is studied and designed; it is divided into a data source, a data access layer, a data storage layer, a resource management layer, a business layer, and an application layer. The thesis explores best practices for deploying a Hadoop-Spark cluster and its related components, and designs and implements a supporting framework for big data analysis, providing platform support for big data analysis of faulty equipment. Through the organic combination of a Flume + Kafka + Spark Streaming framework, real-time analysis of the data stream is realized. The deployment of TensorFlowOnSpark is studied and implemented to provide a distributed environment for model training, and the computing platform uses the advantages of cluster resources to improve training efficiency effectively. Finally, the cluster is deployed according to the designed big data analysis architecture, and module tests verify the feasibility and stability of the big data technology studied in this project.
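At its core, the Flume + Kafka + Spark Streaming path amounts to grouping incoming sensor readings into windows and running an aggregation check on each window. The abstract does not give the actual analysis logic, so the following is only an illustrative pure-Python sketch of such windowed micro-batch fault detection; the window size, threshold, and function name are hypothetical, not from the thesis.

```python
from collections import deque
from statistics import mean

def detect_faults(readings, window_size=4, threshold=10.0):
    """Flag timestamps whose trailing window of sensor readings has a
    mean above the threshold — a toy stand-in for the windowed
    aggregation a Spark Streaming job would run over a Kafka topic.
    `readings` is an iterable of (timestamp, value) pairs."""
    window = deque(maxlen=window_size)  # sliding window over the stream
    alarms = []
    for ts, value in readings:
        window.append(value)
        if len(window) == window_size and mean(window) > threshold:
            alarms.append(ts)  # potential fault detected in this window
    return alarms

# Example: a burst of high vibration readings triggers alarms.
stream = [(1, 2.0), (2, 3.0), (3, 12.0), (4, 15.0), (5, 14.0), (6, 16.0)]
print(detect_faults(stream))  # → [5, 6]
```

In the platform described above, the same per-window logic would run inside Spark Streaming micro-batches, with Kafka supplying the partitioned stream and Flume collecting the raw sensor data upstream.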
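The efficiency gain from TensorFlowOnSpark comes from data parallelism: each executor computes gradients on its own data shard, and the results are averaged before a parameter update. The thesis does not give its model or training code, so the sketch below illustrates only the averaging principle on a toy least-squares problem; the model, data, and learning rate are invented for illustration.

```python
def shard_gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over one
    data shard — the per-worker computation in data-parallel training."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.1):
    """One synchronous update: map (per-shard gradients, done on the
    executors in a real cluster) then reduce (average on the driver)."""
    grads = [shard_gradient(w, s) for s in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Two shards drawn from the line y = 2x; repeated steps drive w toward 2.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(50):
    w = train_step(w, shards)
print(round(w, 2))  # → 2.0
```

TensorFlowOnSpark automates exactly this map/reduce split across Spark executors while TensorFlow handles the model-level gradient computation, which is why cluster resources translate directly into training throughput.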
Keywords/Search Tags:Equipment Failure, Big Data, Hadoop-Spark Cluster, Distributed Training