
The Research And Implementation Of Multimodal Interactive System For Space Robot

Posted on: 2017-09-08    Degree: Master    Type: Thesis
Country: China    Candidate: Z Ding    Full Text: PDF
GTID: 2348330503990922    Subject: Mechanical engineering
Abstract/Summary:
In order to relieve astronauts of part of their workload and reduce the risk of extravehicular activity, a space robot can assist or replace astronauts in completing a number of space missions. Among the key technologies of space robots, human-robot interaction is particularly important. However, current human-robot interaction technology does not yet support natural and friendly interaction between human and robot, and often imposes a heavy learning burden on the operator. This paper studies how to improve the operator's ability to control and interact with a space robot, and establishes a multimodal human-robot interaction system based on gesture and speech input. Through multimodal information fusion and an information feedback mechanism, the operator can interact with the space robot in a natural and friendly way.

This paper carries out the following research on the gesture and speech interaction modalities. First, a multimodal human-robot interaction interface is established. For gesture interaction, a data glove and a somatosensory input device are used to read hand gesture data, and a gesture database is built based on task analysis. A template matching method and a BP neural network model are used, respectively, to recognize the corresponding static gestures. For speech interaction, the grammar rules for speech recognition and a dictionary of common speech commands are defined on the SAPI platform, realizing speech recognition and synthesis.

Second, the information fusion strategy for the gesture and speech modalities is studied. According to the relationships between modalities, the fusion methods are classified as complementary, redundant, and independent. In the multimodal fusion process, the input information of each modality is mapped to interaction primitives, and semantic integration across modalities is achieved by slot filling: the interaction process is regarded as a slot-filling process, and the operator's intention is determined by judging whether the task slots are completely filled. For the execution of semantic commands, a task-priority execution mechanism is proposed, based on parallel processing of the speech and gesture input signals; this improves the input bandwidth and the efficiency of interaction.

Finally, to realize a task-oriented multimodal human-robot dialogue system and allow the operator to interact with the space robot naturally, a virtual simulation model of the space robot and the multimodal human-robot interaction interface are constructed. Evaluation indices of the system are put forward to verify the effectiveness of multimodal interaction compared with single-modality interaction.
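As an illustration of the template-matching step mentioned above, the following is a minimal sketch in Python. It assumes each static gesture is represented by a feature vector of finger joint angles read from the data glove; the template set, gesture names, and threshold are hypothetical and not taken from the thesis's gesture database.

```python
import numpy as np

# Hypothetical gesture templates: one joint-angle vector per gesture.
GESTURE_TEMPLATES = {
    "grasp": np.array([80.0, 85.0, 78.0, 82.0, 60.0]),
    "point": np.array([10.0, 85.0, 88.0, 86.0, 70.0]),
    "open":  np.array([5.0, 8.0, 6.0, 7.0, 10.0]),
}

def classify_gesture(joint_angles, max_distance=25.0):
    """Return the name of the closest template (Euclidean distance),
    or None if every template is farther than max_distance."""
    sample = np.asarray(joint_angles, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = np.linalg.norm(sample - template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

# Example: a glove reading close to the "grasp" template.
print(classify_gesture([78.0, 83.0, 80.0, 81.0, 62.0]))  # -> "grasp"
```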
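The slot-filling fusion described above can be sketched in a few lines. The task schema, slot names, and inputs below are assumptions for illustration only; in the thesis each modality's input is first mapped to interaction primitives before filling the task slots.

```python
class TaskSlot:
    """A semantic command is ready to execute once every required slot is filled."""
    def __init__(self, required):
        self.required = set(required)
        self.filled = {}

    def fill(self, slot, value, source):
        # Record which modality (speech or gesture) supplied the value.
        if slot in self.required:
            self.filled[slot] = (value, source)

    def is_complete(self):
        return self.required <= self.filled.keys()

# Hypothetical "move object" task: the action comes from speech,
# the target comes from a pointing gesture.
task = TaskSlot(required=["action", "target"])

task.fill("action", "move", source="speech")
print(task.is_complete())   # False: the operator's intention is still unclear
task.fill("target", "panel_3", source="gesture")
print(task.is_complete())   # True: the semantic command can be dispatched
```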
Keywords/Search Tags:Space robot, Multimodal interaction, Gesture recognition, Speech recognition, Multimodal fusion