According to the results of the second national sample survey of people with disabilities, the number of people with hearing impairment in China is as high as 27.8 million, and the inability to speak that often accompanies deafness has become a major obstacle to normal social activity for hearing-impaired groups. Drawing on the sensory compensation mechanism, this project moves from the traditional person-to-person interaction mode toward an ideal human-computer interaction mode across the visual, auditory, and tactile channels, with the aim of reducing the cost of speech rehabilitation training for the hearing impaired.

The main research process of this project was as follows. First, we analyzed the feasibility of using facial recognition technology, speech recognition technology, and miniature vibration motors to digitize interaction in speech rehabilitation training for the hearing impaired. Second, through qualitative methods such as semi-structured interviews, combined with quantitative analysis, we established that improving the accessibility of haptic perception was the main opportunity for experience enhancement. Third, using an experimental method, we collected with an oscilloscope the motion data generated by the larynx of normal-hearing speakers during articulation, then applied the FFT function in Python's NumPy scientific computing library to perform a discrete Fourier analysis of the data, clarifying the direction and frequency pattern of laryngeal motion during articulation; the tactile-channel human-computer interaction mode was then implemented by verifying the effectiveness of the miniature vibration motor for conveying tactile information. Fourth, with reference to the Mandarin pronunciation atlas, we used the Figma platform to digitally map the learning information of the visual channel, such as mouth-shape learning and tongue-position learning, realizing the visual-channel human-computer interaction mode. Finally, we used the Maze platform and Nielsen's ten usability heuristics to verify and optimize the product in terms of effectiveness, interaction efficiency, and satisfaction.

The main research results of this project are as follows. (1) In optimizing human-computer interaction for speech rehabilitation training for the hearing impaired, improving the accessibility of tactile information is a key factor in improving overall training efficiency; using tactile sensation to convey the direction and vibration frequency of laryngeal movement during articulation is the key to digitizing the tactile channel. (2) In terms of effectiveness, the training methods rank mouth-shape learning > tongue-position learning > tactile learning; in terms of accessibility, they likewise rank mouth-shape learning > tongue-position learning > tactile learning. In terms of influence on overall training efficiency, however, the order is tactile learning > mouth-shape learning > tongue-position learning. (3) We designed a wearable pronunciation-aid training device with a companion app that, through compensatory strategies such as phonological separation and tactile discrimination, achieves a closed-loop learning process of phonological awareness, pronunciation, phonological practice, and phonological discrimination for the hearing impaired, thereby reducing the learning cost of speech rehabilitation training.
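The discrete Fourier analysis step described above can be sketched roughly as follows. This is a minimal illustration rather than the project's actual code: the sampling rate, the synthetic 220 Hz tone (standing in for the oscilloscope recording of laryngeal motion), and all variable names are assumptions.

```python
import numpy as np

# Assumed sampling rate of the recording, in Hz.
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)  # one second of samples

# Placeholder for measured laryngeal vibration data: a pure 220 Hz tone.
signal = np.sin(2 * np.pi * 220 * t)

# Discrete Fourier transform of the real-valued signal,
# and the frequency (Hz) corresponding to each spectral bin.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

# The dominant vibration frequency is the bin with the largest magnitude.
dominant = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {dominant:.1f} Hz")  # → 220.0 Hz
```

On real measurements, the peak of `np.abs(spectrum)` would identify the characteristic laryngeal vibration frequency for a given articulation, which could then be mapped onto the vibration motor's output.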