
The Visual English Mandarin Computer-Assisted Pronunciation Training System

Posted on: 2014-08-12
Degree: Master
Type: Thesis
Country: China
Candidate: H N Zheng
Full Text: PDF
GTID: 2268330401477120
Subject: Information and Communication Engineering
Abstract/Summary:
This thesis focuses on speech visualization, aiming to reveal actual articulatory movement. First, a real speaker's articulatory movements are recorded every 5 ms with the AG500 Electro-Magnetic Articulography (EMA) system. A virtual 3D talking-head model is then animated with the collected articulatory data, so that both internal and external motions can be displayed.

To obtain confusable English–Mandarin pronunciation text pairs and reveal bilingual articulatory movements, a cross-linguistic comparison of Mandarin and English is carried out. The first task of this comparison is to eliminate the variability arising from speaker-specific vocal-tract structure and other individual biomechanical properties. Speaker normalization is therefore performed with a Procrustes-based procedure, after which hierarchical clustering analysis (HCA) and multi-dimensional scaling (MDS) are applied for a quantitative comparison, yielding bilingual vowel and consonant minimal pairs respectively.

The pronunciation texts thus consist of a Mandarin corpus, an English corpus, and the bilingual confusable pronunciation text pairs. To obtain the articulatory movements of an arbitrary syllable, a modified CM co-articulation model is proposed that generates a syllable's articulatory movements from the articulatory data of its phoneme string. Experimental results showed that the synthetic articulatory trajectories produced by the modified model approach the real articulatory trajectories more accurately. These synthetic trajectories were then used to animate a virtual 3D musculoskeletal model, and standard pronunciation audio was added, so that a visual English–Mandarin computer-assisted pronunciation training (CAPT) system was finally obtained. In addition, a perception test was conducted to evaluate the performance of this system.
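The Procrustes-based speaker normalization described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function name, the 2D landmark layout, and all coordinate values are hypothetical, standing in for EMA sensor positions from two speakers.

```python
import numpy as np

def procrustes_align(ref, target):
    """Remove translation, scale, and rotation differences between two
    landmark sets (rows = points, columns = coordinates)."""
    # Center both configurations and normalize their overall size.
    ref_c = ref - ref.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    ref_c = ref_c / np.linalg.norm(ref_c)
    tgt_c = tgt_c / np.linalg.norm(tgt_c)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(tgt_c.T @ ref_c)
    aligned = tgt_c @ (u @ vt)
    # Residual disparity: what remains after similarity differences
    # (the speaker-specific part) are factored out.
    disparity = np.sum((ref_c - aligned) ** 2)
    return aligned, disparity

# Hypothetical EMA sensor layout for one speaker (x, y positions).
speaker_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
# A second "speaker": the same shape, but scaled, rotated, and translated.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
speaker_b = 2.0 * speaker_a @ rot.T + np.array([3.0, -1.0])

_, disparity = procrustes_align(speaker_a, speaker_b)
print(f"{disparity:.6f}")  # near zero: only similarity differences existed
```

After this normalization, distances between the aligned configurations reflect genuine articulatory differences rather than vocal-tract size or head orientation, which is what makes the subsequent HCA and MDS comparison meaningful.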
The perception test results showed that the system can animate both internal and external articulatory motions effectively. Two further experiments were then conducted to test its instructional value for hearing-impaired children and for second-language learners. The results showed that the system helps both groups train their pronunciation and correct error-prone pronunciations, and all subjects improved their pronunciation after short-term training. Finally, a perception experiment was designed to test the role of tongue reading in speech perception and recognition. The findings demonstrated that tongue reading can take over from lip reading in supplementing the audio signal when that signal is insufficient, and that tongue reading, like lip reading, provides recognition capability.
Keywords/Search Tags: Electro-Magnetic Articulography (EMA), visual speech synthesis, modified CM co-articulation model, Procrustes algorithm, computer-assisted pronunciation training (CAPT), 3D talking head