In recent years, although language models have achieved great success, they still lack the ability to model general knowledge, which hinders their large-scale adoption in practical application scenarios. It has been shown that knowledge can provide richer and more comprehensive language modeling capability, and that model robustness can be enhanced through multi-dimensional common-sense and domain knowledge. As an indispensable basic capability of a model, syntactic parsing is undoubtedly important. A number of studies have attempted to enhance a model's syntactic knowledge to improve performance. However, most of them need to introduce a separate syntax module or can only be applied to specific tasks, which increases the computational complexity and application difficulty of the model and limits its wider use in natural language processing tasks. To address the problems of high computational complexity and insufficient generality, this paper proposes two syntactic knowledge enhancement methods and verifies their effectiveness in real-world scenarios.

(1) Model knowledge enhancement based on syntax-sensitive data. This paper enhances the representation of input text by utilizing structured information such as dependency syntactic components and entity recognition attributes. Based on this multi-source knowledge, it designs multiple syntactic transformation rules and syntax-sensitive data generation schemes, and, by analyzing the characteristics of each rule through its semantic transformation relation, specifies the scenarios in which the model can make use of the resulting sensitive data. Beyond achieving data augmentation, this greatly improves the syntactic awareness of the model.

(2) Model knowledge enhancement based on syntactic contrastive learning. Contrastive learning applies the contrastive principle to improve model robustness in an unsupervised manner. This paper proposes introducing syntactic information into the objective function of contrastive learning. By using multi-dimensional syntactic structure knowledge to construct syntax-sensitive positive and negative samples, a syntax-guided contrastive learning algorithm is proposed, which exploits the contrastive principle over syntax and significantly improves the performance of the model on related general tasks.

(3) Model syntactic knowledge enhancement for search scenarios. Search can be roughly understood as a global order relation formed by a ranking algorithm over many query-document pairs. This paper proposes a method for enhancing the focus on core terms in search scenarios, further exploiting syntactic knowledge to strengthen the ranking model's attention to core terms, effectively alleviating the model's insensitivity to core terms in search. Experimental results demonstrate that this method significantly improves model effectiveness, and the paper further analyzes the constraining relationship between literal overlap and the ability to focus on core terms.
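The syntactic transformation rules of method (1) can be illustrated with a minimal sketch. The rule below builds a syntax-sensitive variant of a sentence by swapping its subject and object; the dependency labels (`nsubj`, `obj`) are supplied by hand here and the function name is illustrative, whereas a real pipeline would obtain the labels from a dependency parser and apply the thesis's full rule set.

```python
def swap_subject_object(tokens, deps):
    """Return a variant sentence with the nsubj and obj tokens exchanged.

    tokens: list of words; deps: parallel list of dependency labels.
    The variant keeps the surface words but inverts the semantic roles,
    yielding a syntax-sensitive (meaning-changing) augmented sample."""
    try:
        i, j = deps.index("nsubj"), deps.index("obj")
    except ValueError:
        return None  # rule does not apply unless both roles are present
    out = tokens[:]
    out[i], out[j] = out[j], out[i]
    return out

# Hand-labeled toy example standing in for parser output.
tokens = ["the", "dog", "chased", "the", "cat"]
deps   = ["det", "nsubj", "root", "det", "obj"]
print(" ".join(swap_subject_object(tokens, deps)))  # the cat chased the dog
```

Because the transformed sentence shares almost all surface tokens with the original while its meaning flips, such pairs probe exactly the syntactic sensitivity the method aims to train.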
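The syntax-guided contrastive objective of method (2) can be sketched as an InfoNCE-style loss in which the positive is a syntax-preserving variant of the anchor and the negatives are syntax-altering variants. The random embeddings and the function name below are illustrative assumptions, not the thesis's actual encoder or sample-construction rules.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def syntax_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the syntax-preserving positive toward the
    anchor and push the syntax-altering negatives away."""
    pos = np.exp(cosine(anchor, positive) / temperature)
    negs = sum(np.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + negs))

# Toy vectors standing in for sentence-encoder outputs.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)       # syntax-preserving variant
negatives = [rng.normal(size=8) for _ in range(4)]  # syntax-altering variants

loss = syntax_contrastive_loss(anchor, positive, negatives)
```

The loss decreases as the positive moves closer to the anchor and increases as any negative does, so minimizing it drives the encoder to separate syntactically equivalent inputs from syntactically perturbed ones.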