Sign language, which conveys information through hand shape, position, motion, orientation, facial expression, and so on, not only helps deaf people communicate with others but also offers a new modality for human-computer interaction (HCI) applications. Research on sign language and gesture recognition is therefore important for improving computers' comprehension of human language and for developing multimodal human-computer interaction.

The surface electromyography (SEMG) sensor and the accelerometer (ACC) are two promising technologies for gesture sensing. SEMG signals contain rich information about hand gestures, such as hand shape, knuckle flexion and extension, and muscular activity, and are capable of distinguishing subtle finger movements. However, as a weak electrophysiological signal, SEMG is sensitive to individual differences and electrode displacement. The accelerometer is well suited to capturing noticeable, larger-scale gestures involving different forearm movement trajectories, but it is poor at distinguishing static gestures.

To combine the advantages of the two sensors for hand gesture recognition, this paper first proposes an approach that effectively fuses SEMG and ACC information for the recognition of large-vocabulary gestures. A statistical language model and a syntactic model are then used to overcome the influence of individual differences. The main contributions of our work are as follows:

(1) Subwords, rather than gesture words, are used as the basic unit for sign language recognition, and a decision tree serves as the classification framework to effectively fuse the sign language subword features extracted from the ACC and SEMG signals. Experimental results demonstrate that the proposed method achieves high recognition rates at low computational cost.

(2) Based on an N-gram model, an error detection and correction strategy for Chinese sign language subword recognition is proposed.
This approach builds a statistical model from a sign language database and uses mutual information and transition probability between neighboring subwords for error detection. The advantage of this method is that mutual information and transition probability are probabilistic quantities that are unaffected by the SEMG and ACC signals themselves. Experimental results show that the proposed error detection and correction method significantly improves recognition accuracy at both the subword level and the sentence level.

(3) By introducing a syntactic model into sign language recognition, a method for checking the grammatical plausibility of sentence-level recognition results is proposed. In this method, the subwords are translated into a word sequence tagged with parts of speech, and syntactic rules are then applied for phrase detection to locate the parts of the sentence that violate the rules. Our experimental results also demonstrate the efficacy of syntactic rules for sentence-level error detection.
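The tree-structured fusion in contribution (1) can be sketched as follows. This is a minimal illustration, not the paper's trained classifier: the feature names (ACC signal energy, SEMG mean absolute value), the thresholds, and the class labels are all hypothetical, and in the actual framework each tree node would contain a trained classifier rather than a fixed threshold.

```python
# Minimal sketch of tree-structured ACC/SEMG fusion for subword
# classification. Features, thresholds, and labels are hypothetical.

def classify_subword(acc_energy, semg_mav):
    """Route a gesture through a two-level decision tree.

    acc_energy: accelerometer signal energy (captures forearm movement)
    semg_mav:   SEMG mean absolute value (captures muscle activity)
    """
    if acc_energy > 0.5:
        # Noticeable forearm trajectory: ACC dominates the first split,
        # and SEMG refines the hand shape within the dynamic branch.
        return "dynamic_open_hand" if semg_mav > 0.3 else "dynamic_relaxed"
    else:
        # Static gesture: ACC is uninformative, so rely on SEMG alone.
        return "static_fist" if semg_mav > 0.3 else "static_rest"
```

The design reflects the complementarity described above: ACC resolves the coarse dynamic/static distinction, while SEMG disambiguates hand shapes that the accelerometer cannot separate.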
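The N-gram error detection strategy in contribution (2) can be illustrated with bigram statistics. The sketch below estimates the transition probability P(w2|w1) and the pointwise mutual information of neighboring subwords from a toy corpus, and flags pairs that fall below a threshold; the corpus and the threshold values are illustrative, not the paper's database or tuned parameters.

```python
import math
from collections import Counter

# Toy subword corpus standing in for the sign language database.
CORPUS = [["I", "love", "China"], ["I", "love", "home"], ["we", "love", "China"]]

UNI = Counter(w for seq in CORPUS for w in seq)
BI = Counter(p for seq in CORPUS for p in zip(seq, seq[1:]))
N_UNI, N_BI = sum(UNI.values()), sum(BI.values())

def transition_prob(w1, w2):
    """P(w2 | w1), estimated as bigram count over unigram count."""
    return BI[(w1, w2)] / UNI[w1] if UNI[w1] else 0.0

def pmi(w1, w2):
    """Pointwise mutual information between neighboring subwords."""
    if BI[(w1, w2)] == 0:
        return float("-inf")
    p12 = BI[(w1, w2)] / N_BI
    return math.log(p12 / ((UNI[w1] / N_UNI) * (UNI[w2] / N_UNI)))

def detect_errors(seq, tp_min=0.05, pmi_min=0.0):
    """Flag neighboring pairs whose statistics fall below the thresholds."""
    return [(w1, w2) for w1, w2 in zip(seq, seq[1:])
            if transition_prob(w1, w2) < tp_min or pmi(w1, w2) < pmi_min]
```

For example, a sequence consistent with the corpus produces no flags, while an implausible ordering such as ["I", "China", "love"] is flagged at both bigram boundaries, localizing the likely misrecognized subword.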
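The syntactic check in contribution (3) can be sketched as rule-based phrase detection over a POS-tagged word sequence. The lexicon and the allowed tag transitions below are hypothetical placeholders for the paper's actual syntactic rules; the point is only the mechanism of locating the part of a sentence that violates the rules.

```python
# Hypothetical POS lexicon and allowed tag-pair rules; the paper's
# actual rule set for Chinese sign language is not reproduced here.
POS = {"I": "PRON", "am": "VERB", "teacher": "NOUN",
       "go": "VERB", "school": "NOUN"}

ALLOWED = {("PRON", "VERB"), ("VERB", "NOUN"),
           ("PRON", "NOUN"), ("NOUN", "VERB")}

def check_sentence(words):
    """Return the word pairs whose POS transition violates the rules."""
    tags = [POS.get(w, "UNK") for w in words]
    return [(words[i], words[i + 1])
            for i, pair in enumerate(zip(tags, tags[1:]))
            if pair not in ALLOWED]
```

A recognized sentence that satisfies the rules returns an empty list, whereas a violation such as a verb preceding its subject is returned as the offending word pair, marking where sentence-level correction should be applied.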