As a key technology in human-computer dialogue systems, slot filling directly determines whether a machine can accurately understand users' intentions. Cross-domain slot filling is the mainstream direction of current slot filling research. Although it has been widely studied, most models based on traditional deep learning lack explicit modeling of the relationship between the source domain and the target domain. Meanwhile, more and more pre-trained language models are being applied to slot filling, but existing methods only exploit the powerful encoding capability of pre-trained language models without deeply mining the knowledge inherent in them.

To address the lack of explicit modeling of the relationship between the source and target domains in traditional deep learning models, this thesis designs a slot filling method based on contrastive learning. To enrich the meaning of slot types in zero-shot and few-shot scenarios, the method introduces the syntactic information of utterances in addition to their semantic information, and uses contrastive learning to establish the relationship between slot values and slot types in both semantics and syntax. Furthermore, to transfer knowledge efficiently from the source domain to the target domain, this thesis designs a strategy that explicitly establishes the correspondence between slot types in the source and target domains.

To address the problem of not mining the knowledge inherent in pre-trained language models, this thesis proposes a slot filling method based on prompt learning. This method treats slot filling as a text generation task and designs a prompt template that incorporates domain descriptions, slot descriptions, examples, and example contexts. To improve the robustness of the model, a post-processing method is designed to alleviate vocabulary errors. To alleviate boundary errors, this thesis proposes an auxiliary task and a template extension method to improve the model's sensitivity in boundary prediction.

Experimental results on multiple benchmark datasets show that both the contrastive learning-based and the prompt-based slot filling methods effectively improve the performance of cross-domain slot filling in zero-shot and few-shot scenarios. Compared with models of the same type, the two methods improve the average F1 score by 1.93% and 7.47%, respectively.
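To make the contrastive learning idea concrete, the following is a minimal sketch, not the thesis implementation, of how slot-value span representations could be pulled toward the embeddings of their gold slot-type descriptions with an InfoNCE-style objective; the tensor shapes, the function name, and the temperature value are illustrative assumptions.

```python
# Illustrative sketch only: ties slot-value representations to slot-type
# description embeddings via a contrastive (InfoNCE-style) loss.
import torch
import torch.nn.functional as F

def slot_contrastive_loss(value_reprs, type_reprs, type_ids, temperature=0.1):
    """value_reprs: (N, d) encoder outputs for N slot-value spans
       type_reprs:  (T, d) embeddings of the T slot-type descriptions
       type_ids:    (N,)   index of the gold slot type for each value"""
    value_reprs = F.normalize(value_reprs, dim=-1)
    type_reprs = F.normalize(type_reprs, dim=-1)
    # Cosine similarity between every slot value and every slot-type description.
    logits = value_reprs @ type_reprs.t() / temperature   # (N, T)
    # The gold slot type is the positive; all other types serve as negatives.
    return F.cross_entropy(logits, type_ids)
```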
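For the prompt learning method, the sketch below is a hypothetical illustration of a template that combines a domain description, a slot description, an example slot value, and the example's context before asking the generation model for an answer; the exact wording and field names are assumptions, not the template used in the thesis.

```python
# Hypothetical prompt template combining domain description, slot description,
# an example value, and the example's context, as described in the abstract.
def build_prompt(utterance, domain_desc, slot_name, slot_desc,
                 example_value, example_context):
    return (
        f"Domain: {domain_desc}\n"
        f"Slot: {slot_name}, which means {slot_desc}.\n"
        f"For example, in \"{example_context}\" the {slot_name} is \"{example_value}\".\n"
        f"Sentence: {utterance}\n"
        f"The {slot_name} in the sentence is:"
    )

# Example usage with made-up restaurant-domain values.
print(build_prompt(
    utterance="book a table at rosa's cafe for two",
    domain_desc="restaurant reservation",
    slot_name="restaurant name",
    slot_desc="the name of the restaurant to book",
    example_value="green bistro",
    example_context="reserve a seat at green bistro tonight",
))
```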