With the development of intelligent manufacturing, human-robot collaborative assembly technology has attracted increasingly wide attention. Faced with complex tasks such as assembly, neither robots nor operators can easily complete the work alone: robots offer superior accuracy, speed, and strength, while humans contribute the experience, reasoning, and decision-making capabilities that robots lack. Human-robot collaboration combines the advantages of both to jointly complete complex assembly tasks, and therefore has important research significance and application value. In this thesis, we study human-robot collaborative assembly based on gesture interaction and mixed reality, and carry out the following main research work.

First, we analyze the dynamic human-robot gesture interaction requirements of assembly scenes and investigate dynamic gesture recognition based on 3D skeleton data. To address the accuracy degradation that occurs when layers are added to the DDNet network, an improved Res-DDNet network is proposed. After analyzing the jitter in continuous dynamic gesture recognition results, a stable recognition mechanism combining multi-size sliding-window voting with a moving-average method is proposed. A finite state machine model of gesture interaction is established, and the transitions between the different states of the interaction process are designed. Finally, a dynamic gesture dataset is built, and experiments verify that the improved model achieves better accuracy and that the proposed mechanism recognizes continuous dynamic gestures stably.

Second, motivated by the need to grasp the target workpiece according to the assembly process during collaboration, RGB-D point-cloud-based assembly workpiece grasping is studied. The need to segment workpiece point clouds from the scene is analyzed, and the PointNet++ scene segmentation model is used to segment the target workpiece point cloud by combining point-cloud
geometric features and image texture features. To address the fact that different parts of an assembly workpiece differ in graspability, a graspable-part extraction method based on the PointNet part segmentation network is proposed. Finally, principal component analysis is employed to compute oriented bounding boxes of the graspable parts and to estimate the grasping point and angle, and experiments verify that the proposed assembly workpiece grasping recognition method meets the grasping requirements.

Subsequently, the application of mixed reality in collaborative assembly is investigated to meet the needs of hazardous-area prompting and mixed-reality assembly guidance. The requirement for dynamically prompting the robotic arm's danger zone is analyzed, and a method of computing it by synchronizing the virtual and real robotic arms is proposed. A virtual robotic arm is modeled, its kinematic model is established, and the communication process for motion information between the virtual and real arms is designed; experiments verify that the virtual and real robotic arms maintain motion synchronization. A method is then proposed to compute the dynamic danger zone from the virtual robotic arm after virtual-real synchronization and to display the collaborative danger zone via a mixed-reality overlay. The collaborative assembly guidance requirements are analyzed, and graphic annotations, interactive UI annotations, and virtual model annotations are designed to provide assembly guidance through mixed reality.

Combining the above key technologies, three main modules, namely dynamic gesture interaction, robotic arm workpiece recognition and grasping, and mixed-reality assembly information and danger zone visualization, are developed and integrated to build a human-robot collaborative assembly system, and the effectiveness of each module is verified separately. Finally, the proposed system is shown to accomplish collaborative assembly by completing
different assembly tasks in a human-robot collaborative manner, using the assembly of a regulator as an example.
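The stable recognition mechanism summarized above (multi-size sliding-window voting combined with a moving average) could be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation: the class name, window sizes, and the per-frame score format are all assumptions, and the thesis's actual voting and smoothing details may differ.

```python
from collections import Counter, deque

class StableGestureRecognizer:
    """Illustrative sketch: smooth per-frame classifier scores with a
    moving average, then require sliding windows of several sizes to
    agree by majority vote before the recognized gesture is changed."""

    def __init__(self, window_sizes=(5, 9, 15), smooth_len=5):
        self.window_sizes = window_sizes
        self.history = deque(maxlen=max(window_sizes))  # recent frame labels
        self.scores = deque(maxlen=smooth_len)          # recent score dicts
        self.current = None                             # stable output label

    def update(self, scores):
        """scores: dict mapping gesture class -> per-frame probability."""
        self.scores.append(scores)
        # moving average of the class scores over the last few frames
        avg = {}
        for s in self.scores:
            for cls, p in s.items():
                avg[cls] = avg.get(cls, 0.0) + p / len(self.scores)
        self.history.append(max(avg, key=avg.get))
        # multi-size window voting: every window's majority must agree
        votes = [Counter(list(self.history)[-w:]).most_common(1)[0][0]
                 for w in self.window_sizes]
        if len(set(votes)) == 1 and len(self.history) >= min(self.window_sizes):
            self.current = votes[0]
        return self.current
```

With this scheme a single misclassified frame cannot flip the output, since it neither dominates the smoothed scores nor the majority vote of any window.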
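The gesture-interaction finite state machine could be organized as a transition table keyed by (state, event) pairs. The state and event names below are hypothetical placeholders; the abstract does not specify the thesis's actual states.

```python
# Hypothetical states and events; the thesis's actual FSM may differ.
TRANSITIONS = {
    ("Idle", "gesture_detected"): "Recognizing",
    ("Recognizing", "gesture_stable"): "Executing",
    ("Recognizing", "gesture_lost"): "Idle",
    ("Executing", "task_done"): "Idle",
}

class InteractionFSM:
    """Table-driven FSM: undefined (state, event) pairs keep the
    current state, so spurious events cannot derail the interaction."""

    def __init__(self, start="Idle"):
        self.state = start

    def fire(self, event):
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A table-driven design keeps the transition logic declarative, which makes it easy to audit that every interaction state has a path back to the idle state.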
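The PCA-based grasp estimation could be illustrated, in simplified planar form, as below: project the graspable part's points onto the work plane, take the centroid as the grasp point, and derive the grasp angle from the principal axis of the 2D covariance (closed-form for the 2x2 case). This is a sketch under assumed conventions; the thesis computes full 3D oriented bounding boxes, and the function name and point format here are invented for illustration.

```python
import math

def grasp_from_points(points):
    """Estimate a planar grasp from 2D points of a graspable part:
    centroid as grasp point, principal-axis angle (radians) for the
    grasp orientation (the gripper would close perpendicular to it)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # entries of the 2x2 covariance matrix
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # closed-form principal-axis angle of a 2x2 covariance matrix
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), angle
```

For an elongated part, the principal axis follows the part's long edge, so the returned angle directly yields a stable gripper orientation.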