Speech production is a complex task for the brain to control, since it involves many neural processes such as speech planning, motor control, and auditory and somatosensory feedback. These functions are thought to operate both in cascade and in parallel, and control signals are transmitted from one brain area to others through “one-to-many” relations. In our previous model structure, however, these relations were described as “one-to-one” rather than “one-to-many,” which is an oversimplification. To address this, in this study we developed an improved neurocomputational model framework for speech production based on our previous work. The improved framework uses a physiological articulatory model to generate the articulatory movements and speech signals during the speech production process, instead of the geometrical model used in the earlier neural control model. The physiological articulatory model better reflects the articulatory features of the speech articulation process. The proposed model is capable of handling the dynamic properties of speech articulation for consonant-vowel (CV) syllables. We designed a neural representation for CV syllables and generated training data for the experiment. In the simulation, the neuronal groups (i.e., motor, auditory, and somatosensory) were acquired through learning and stored in self-organizing maps (SOMs), and the relations between the SOMs were investigated. The results show that the time-varying properties were represented properly. In the control signal flow, the model exhibited “one-to-many” projections between the SOMs, where one neuron in an SOM projected onto 1.64 neurons in another SOM on average. Moreover, the distribution of the motor control map is broadly similar to measurements obtained with an electrocorticographic (ECoG) array over the left hemisphere of the brain, which supports the reliability of our model structure and training method.
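The abstract does not give implementation details, so the following is only a minimal sketch, not the paper's code, of one way the “one-to-many” fan-out between two SOMs could be estimated: two small SOMs are trained on paired motor and auditory feature vectors, and for each motor-map neuron we count how many distinct auditory-map neurons its samples project onto. The map size (8x8), the feature dimensions, and the random toy data are illustrative assumptions; the resulting fan-out value depends on these choices and is not expected to reproduce the reported 1.64.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0):
    """Train a minimal SOM on `data` (n_samples x n_features); returns unit weights."""
    # Grid coordinates of each unit, used by the Gaussian neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.normal(size=(grid[0] * grid[1], data.shape[1]))
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * np.exp(-step / n_steps)          # decaying learning rate
            sigma = sigma0 * np.exp(-step / n_steps)    # shrinking neighborhood radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))          # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def bmu_index(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Hypothetical paired training data: each sample stands for one CV-syllable frame,
# with a motor-command vector and the corresponding auditory-feature vector.
n_samples, motor_dim, audio_dim = 500, 10, 12
motor = rng.normal(size=(n_samples, motor_dim))
audio = np.tanh(motor @ rng.normal(size=(motor_dim, audio_dim)))  # toy motor-to-auditory mapping

motor_som = train_som(motor)
audio_som = train_som(audio)

# For each motor-map neuron, collect the distinct auditory-map neurons its samples
# project onto; the mean set size is the "one-to-many" fan-out estimate.
projections = {}
for m, a in zip(motor, audio):
    projections.setdefault(bmu_index(motor_som, m), set()).add(bmu_index(audio_som, a))

fanout = np.mean([len(targets) for targets in projections.values()])
print(f"average one-to-many fan-out: {fanout:.2f}")
```

This sketch only illustrates the counting idea behind the fan-out statistic; the actual model in the paper additionally involves the somatosensory map and the physiological articulatory model, which are omitted here.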