In recent years, deep neural networks have greatly advanced artificial intelligence in fields such as computer vision, speech recognition, and natural language processing, and generative adversarial networks (GANs) have become a popular research direction. The core idea of a GAN comes from the Nash equilibrium in game theory: the model consists of a generator and a discriminator that are trained against each other, and this adversarial mechanism has achieved impressive results in image generation. Inspired by GANs, we apply the adversarial mechanism to speech and language tasks. This thesis focuses on adversarial networks and conducts research on speech separation and natural language inference, including:

1. Improving speech separation with an adversarial network and reinforcement learning. In contrast to conventional deep neural networks for single-channel speech separation, we propose a separation framework based on an adversarial network and reinforcement learning. The adversarial network, inspired by the GAN, evaluates the discrepancy between the separated result and the ground truth so that the separated output follows the same data distribution as the clean target. Meanwhile, to bias the model toward desirable metrics and to reduce the mismatch between the training loss (such as mean squared error) and the test metric (such as SDR), we introduce a reinforcement-learning-based objective that optimizes the performance metric directly. By combining the adversarial network with reinforcement learning, our model improves the performance of single-channel speech separation (a rough code sketch of this training scheme appears after this abstract).

2. Natural language inference based on adversarial regularization. Current natural language inference models rely heavily on word-level information. Although discriminative word information plays an important role in inference, a model should attend more to the meaning of continuous text and the way language is expressed, and should infer from an overall grasp of sentence meaning rather than make shallow decisions based on the opposition or similarity between individual words. In addition, conventional supervised training makes the model depend too heavily on the language priors of the training set and leaves it with little understanding of language logic. To explicitly emphasize the importance of learning the sequence encoding and to reduce the impact of language bias, this thesis proposes a natural language inference method based on adversarial regularization. The method first introduces an inference model based on word encodings, which takes the word encodings of the standard inference model as input and can succeed only by exploiting language bias. Then, through adversarial training between the two models, the standard inference model is prevented from relying too much on language bias. Experiments were carried out on two open standard datasets, SNLI and Breaking-NLI. On the SNLI dataset, the method achieves the best performance among existing sentence-embedding-based inference models, reaching 87.60% accuracy on the test set, and it also achieves state-of-the-art results on the Breaking-NLI dataset (a sketch of the adversarial regularization idea is also given below).
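As a rough illustration of the first contribution, the sketch below combines an adversarial discriminator loss with a metric-driven term in PyTorch. It is a minimal sketch under stated assumptions, not the thesis's actual implementation: the Separator and Discriminator networks, the spectrum dimension, the 0.1 weighting, and the use of a differentiable SI-SDR term in place of the thesis's SDR-based reinforcement-learning objective are all illustrative choices.

```python
import torch
import torch.nn as nn

class Separator(nn.Module):
    """Toy separator: maps a mixture spectrum frame to an estimated source frame."""
    def __init__(self, dim=257):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))

    def forward(self, mixture):
        return self.net(mixture)

class Discriminator(nn.Module):
    """Toy discriminator: scores whether a spectrum frame looks like a clean source."""
    def __init__(self, dim=257):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, spec):
        return self.net(spec)

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR, used here as a differentiable stand-in for the test metric."""
    alpha = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps)
    target = alpha * ref
    noise = est - target
    return 10 * torch.log10(target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

sep, disc = Separator(), Discriminator()
opt_g = torch.optim.Adam(sep.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

mixture = torch.rand(8, 257)   # toy batch of magnitude-spectrum frames
clean = torch.rand(8, 257)     # corresponding ground-truth source frames

# Discriminator step: distinguish clean spectra from separated ones.
est = sep(mixture).detach()
d_loss = bce(disc(clean), torch.ones(8, 1)) + bce(disc(est), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Separator step: adversarial term (fool the discriminator) plus a metric term
# that rewards the evaluation metric directly; 0.1 is an arbitrary weight.
est = sep(mixture)
g_loss = bce(disc(est), torch.ones(8, 1)) - 0.1 * si_sdr(est, clean).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key point the sketch conveys is that the separator is trained against two signals at once: a discriminator that judges whether its output matches the distribution of clean speech, and a term tied to the evaluation metric itself rather than to mean squared error alone.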
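The second contribution can be sketched similarly. The example below uses a gradient-reversal layer as one common way to realize adversarial training between a sentence-encoding NLI model and a word-only bias model that sees nothing but averaged word embeddings; the thesis may instead use alternating minimax updates, and all module names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class NLIModel(nn.Module):
    def __init__(self, vocab=1000, dim=100, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.main_head = nn.Linear(4 * dim, classes)   # [p; h; |p-h|; p*h] features
        self.bias_head = nn.Linear(2 * dim, classes)   # bag-of-words shortcut model

    def encode(self, ids):
        _, (h, _) = self.encoder(self.emb(ids))
        return h[-1]                                    # final hidden state per sentence

    def forward(self, prem_ids, hyp_ids, lam=1.0):
        p, h = self.encode(prem_ids), self.encode(hyp_ids)
        main_logits = self.main_head(torch.cat([p, h, (p - h).abs(), p * h], -1))
        # Bias branch: average word embeddings only; reversed gradients push the
        # shared embeddings to be uninformative for this word-level shortcut.
        bow = torch.cat([self.emb(prem_ids).mean(1), self.emb(hyp_ids).mean(1)], -1)
        bias_logits = self.bias_head(GradReverse.apply(bow, lam))
        return main_logits, bias_logits

model = NLIModel()
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

prem = torch.randint(0, 1000, (4, 12))   # toy batch of 4 premise/hypothesis pairs
hyp = torch.randint(0, 1000, (4, 10))
labels = torch.randint(0, 3, (4,))       # entailment / neutral / contradiction

main_logits, bias_logits = model(prem, hyp)
# The bias head learns to exploit word-level cues, while the reversed gradient
# regularizes the shared embeddings against those cues for the main model.
loss = ce(main_logits, labels) + ce(bias_logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
```

The design intent mirrors the description above: the word-only branch can succeed only through language bias, so making its task adversarial to the shared representation discourages the standard inference model from relying on that bias.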