Avatar-based communication is an active research topic that attracts researchers from fields such as computer science, artificial intelligence, and psychology, and it has a wide range of applications. At present, most avatar animation systems are either driven by vision and speech or controlled by haptic devices. Both approaches can produce emotive avatars, but both require extra equipment: wearing haptic devices is inconvenient for users, and the high cost of the equipment makes such systems impractical. By contrast, controlling avatars directly through input language is more natural and non-intrusive.

The aim of this paper is to develop a novel avatar facial animation system in which the avatar's facial expression is controlled by input Chinese and English text. The system is based on emotion characteristics dictionaries: we construct a Chinese emotion characteristics dictionary by computing semantic similarities, and we query the synonym and near-synonym thesauri of WordNet to generate an English emotion characteristics dictionary. Given these dictionaries, we first break the input text into simple sentences and then dynamically change the avatar's emotion for each simple sentence. We use the vector space model to describe the input text: to obtain the text feature vector, we segment a simple sentence into keywords or phrases and remove the stop words. We then propose a novel method for sentiment analysis based on a semantic lexicon and Naive Bayes. After obtaining the text polarity, i.e. positive or negative, through this hybrid method, we look up emotion characteristics words in the dictionary.
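The per-sentence pipeline described above (segment into keywords, drop stop words, classify polarity with Naive Bayes, then look up emotion cues in the dictionary) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the stop list, the training samples, and the `EMOTION_DICT` mapping are invented placeholders, and real Chinese text would need a proper word segmenter rather than whitespace splitting.

```python
from collections import Counter, defaultdict
import math

# Assumed toy stop list; the paper's actual stop-word resources are not specified here.
STOP_WORDS = {"the", "is", "a", "an", "and", "of", "to", "this"}

def features(sentence):
    # Segment a simple sentence into keywords and remove stop words
    # (the terms of the vector space model representation).
    return [w for w in sentence.lower().split() if w not in STOP_WORDS]

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing for polarity."""

    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.class_counts[label] += 1
            for w in features(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.class_counts:
            # log prior + smoothed log likelihoods of the sentence's keywords
            lp = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in features(text):
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical emotion characteristics dictionary mapping cue words to expressions.
EMOTION_DICT = {"happy": "smile", "great": "smile", "sad": "frown", "terrible": "frown"}

def sentence_emotion(nb, sentence):
    # Combine the Naive Bayes polarity with dictionary lookups; the two kinds
    # of information together drive the avatar's facial expression.
    polarity = nb.classify(sentence)
    cues = [EMOTION_DICT[w] for w in features(sentence) if w in EMOTION_DICT]
    return polarity, cues
```

In this sketch the classifier supplies coarse polarity per simple sentence, while the dictionary lookup supplies the specific expression cues; a driver loop would call `sentence_emotion` once per simple sentence to update the avatar dynamically.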
Finally, we combine the two kinds of information above to obtain the sentiment characteristics of the input text, which we then use to control the avatar's expression. Experiments are carried out on a balanced hybrid corpus to test the accuracy and robustness of the proposed sentiment classification algorithm, and they show the effectiveness and efficiency of our approach. Further experiments on emotion synthesis demonstrate that our system produces real-time text-driven emotive avatars.