
Research On Visual Positioning Method Of Bogie Bolster Spring Robot Grasping

Posted on: 2023-03-18
Degree: Master
Type: Thesis
Country: China
Candidate: D F Li
Full Text: PDF
GTID: 2558307073481774
Subject: Mechanical and electrical engineering
Abstract/Summary:
With the rapid development of China's railway freight transportation system, the operational safety of trains has drawn increasing attention. As a key component of the vibration-damping device of railway freight trains, the bolster spring of the bogie plays an important role in transmitting loads and mitigating vibration and impact. During the overhaul of the bogie damping device, the bolster springs must be disassembled and reassembled. At present this work is done manually, which entails high labor intensity, low operating efficiency, and considerable safety hazards. The rapid development of industrial robotics and visual servoing technology offers a new technical route and research entry point for automating the disassembly and assembly of bogie bolster springs. The key to successful disassembly and assembly by a visual servoing grasping robot is detecting and positioning the bolster spring, an object with complex geometric features, against a cluttered background. Based on this background, this thesis investigates visual positioning methods for robotic grasping of bogie bolster springs, covering the construction of a visual detection and positioning model for the bolster spring, robotic grasping of the bolster spring based on visual servoing, a local-feature positioning method for the bolster spring combining laser and vision, and lightweighting of the visual detection and positioning model.

Firstly, since deep-learning-based object detection has clear advantages over traditional object detection in speed, accuracy, and robustness under cluttered backgrounds, the algorithmic principles and network structures of the representative two-stage detector Fast R-CNN and the representative one-stage detectors SSD and YOLOv3 are described. Based on a custom dataset of bogie bolster springs, a deep-learning-based visual detection model for the bolster spring is constructed.

Secondly, to further explore the feasibility and effectiveness of applying the deep-learning-based detection and positioning model to visual servoing grasping of the bolster spring robot, an image-based visual servoing system model is established that uses the corner points of the bounding box from bolster spring detection and positioning as image features, together with the D-H parameters of the ABB IRB4600 robot and the camera parameters. The positioning and grasping performance of this method is validated on a purpose-built physical robotic grasping platform for the bolster spring.

Thirdly, to address the problem that the bolster spring easily collides or even gets stuck when it is disassembled and extracted from the narrow side-frame space, the dimensional features of the first and second layers of the outer springs of the bearing springs and damping springs of the K6 bogie are analyzed in depth, and mathematical models relating the height ratio of the two kinds of bolster spring to the gap orientation of the spring end face are established. On this basis, a visual indirect positioning method for the gap orientation of the bolster spring in a narrow space is proposed. The effectiveness of the positioning method is validated on a test platform, laying a good foundation for pose control when the bolster spring becomes stuck.

Finally, a lightweighting scheme for the bolster spring visual detection and positioning model is proposed based on lightweight networks and model pruning. The improved lightweight models are evaluated and analyzed in terms of model size, parameter count, FLOPs, mAP, and per-frame inference time. The results show that, compared with the original YOLOv3 model, the improved lightweight M-YOLO-Small-Tiny model achieves the best overall performance: its single-frame inference time on CPU is only 104.36 ms, and its detection and positioning speed is nearly 7.6 times faster.
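The image-based visual servoing scheme summarized above drives the robot using the error between the current and desired image features (here, the corner points of the detected bounding box). As a minimal illustrative sketch, and not the thesis's actual controller, the classic IBVS control law v = -λ L⁺ (s - s*) for point features can be written with NumPy; the feature coordinates, depths, and gain λ below are assumed toy values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image
    point (x, y) at depth Z, relating the point's image velocity to
    the 6-DoF camera twist [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classic IBVS law v = -lam * L^+ (s - s*).
    s, s_star: (N, 2) arrays of current / desired normalized points;
    Z: (N,) depths, assumed known or estimated."""
    L = np.vstack([interaction_matrix(x, y, z)
                   for (x, y), z in zip(s, Z)])
    e = (s - s_star).reshape(-1)
    return -lam * np.linalg.pinv(L) @ e

# Toy example: four bounding-box corners, current view offset from desired.
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
s = s_star + 0.05          # current features shifted by 0.05 (hypothetical)
Z = np.full(4, 1.0)        # assumed constant depth of 1 m
v = ibvs_velocity(s, s_star, Z)   # commanded 6-DoF camera twist
```

Iterating this law moves the camera so that the observed corner points converge to their desired image positions, which is the principle behind servoing the gripper onto the detected bolster spring.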
Keywords: Bolster springs of bogie, object detecting and positioning, visual servoing, combination of laser and vision, lightweight model