Associative memory (AM) has long been a common and important task in artificial intelligence research. The task aims to learn the relationships between two sets of patterns through their semantic meanings. Most AM models are now implemented with neural networks, among which Hopfield networks and deep learning models are the most widely used. Self-organizing map (SOM) networks also perform well on this task, and many improvements to SOM have been proposed. However, these existing improvements cannot handle the task perfectly, especially AM between two sets of patterns in different modalities. In this paper, a new MultiSOM network and its learning rule are proposed. Unlike existing models, MultiSOM does not train the association between different modalities directly; instead, it maps the topologies of the two sets of inputs onto the topology of the semantic data, which serves as a bridge. This way of learning AM agrees better with assumptions from psychology. To test the ability of the MultiSOM network on AM, experiments on associations between images of uppercase and lowercase letters are carried out first. The results suggest that the new network learns such associations well. Several parameters of the network are also studied, and some empirical rules for parameter setting are obtained. Second, experiments on AM between images and speech are conducted. In a controlled comparison with an ordinary SOM network, MultiSOM is better able to learn associations between patterns of different modalities, whose representations differ substantially from each other.
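The bridge idea described above can be sketched in code. The following is a minimal illustration under assumptions that are not taken from the paper: toy Gaussian clusters stand in for the two modalities, one-hot vectors stand in for the semantic data, and simple Hebbian co-occurrence counts stand in for the paper's actual learning rule. All names (`SOM`, `recall`, `M_as`, `M_sb`) are hypothetical.

```python
import numpy as np

class SOM:
    """Minimal 2-D self-organizing map (illustrative sketch only, not the paper's model)."""
    def __init__(self, grid, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((grid[0] * grid[1], dim))
        gy, gx = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
        self.coords = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)

    def bmu(self, x):
        # index of the best-matching unit for input x
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=30, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1.0 - t / epochs)             # decaying learning rate
            sigma = sigma0 * (1.0 - t / epochs) + 0.3  # shrinking neighborhood
            for x in data:
                b = self.bmu(x)
                d2 = np.sum((self.coords - self.coords[b]) ** 2, axis=1)
                h = np.exp(-d2 / (2.0 * sigma ** 2))       # neighborhood weights
                self.w += lr * h[:, None] * (x - self.w)   # pull units toward x

# Toy paired data: three classes observed in two "modalities" of different dimension.
rng = np.random.default_rng(1)
n_cls, n_per = 3, 30
mean_a = 3.0 * np.eye(n_cls, 4)   # class means, modality A (e.g. letter images)
mean_b = 3.0 * np.eye(n_cls, 5)   # class means, modality B (e.g. speech features)
labels = np.repeat(np.arange(n_cls), n_per)
X_a = mean_a[labels] + 0.1 * rng.standard_normal((n_cls * n_per, 4))
X_b = mean_b[labels] + 0.1 * rng.standard_normal((n_cls * n_per, 5))
S = np.eye(n_cls)[labels]         # one-hot "semantic" vectors acting as the bridge

som_a, som_b, som_s = SOM((5, 5), 4), SOM((5, 5), 5), SOM((3, 3), 3)
som_a.train(X_a); som_b.train(X_b); som_s.train(S)

# Hebbian-style co-occurrence counts linking each modality map to the semantic map.
M_as = np.zeros((25, 9))
M_sb = np.zeros((9, 25))
for xa, xb, s in zip(X_a, X_b, S):
    M_as[som_a.bmu(xa), som_s.bmu(s)] += 1
    M_sb[som_s.bmu(s), som_b.bmu(xb)] += 1

def recall(x_a):
    """Modality A input -> semantic unit (bridge) -> modality B prototype."""
    s_unit = int(np.argmax(M_as[som_a.bmu(x_a)]))
    b_unit = int(np.argmax(M_sb[s_unit]))
    return som_b.w[b_unit]

# Each class mean in A should recall a prototype nearest the same class's mean in B.
correct = sum(
    int(np.argmin(np.linalg.norm(mean_b - recall(mean_a[c]), axis=1)) == c)
    for c in range(n_cls)
)
```

The point of the sketch is that the two modality maps never exchange weights directly: all cross-modal recall is routed through the small semantic map, mirroring the "bridge" role the abstract assigns to semantic data.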