With the development of science and technology, more and more companies publish job openings on recruitment websites. However, the recruitment information on these websites is vast and repetitive, making it difficult for job seekers to quickly and accurately find positions that match them, and there is often a gap between job seekers' own perceptions of a position's requirements and the competencies the position actually demands. A knowledge graph is a technology that describes entities and relationships and presents the knowledge points, core structure, and overall organization of information in a visual way, so it can organize massive amounts of recruitment information more efficiently. Under the impact of the fourth industrial revolution, characterized by big data, the digital and intelligent upgrading of industry has reshaped the employment structure and gradually created a shortage of talent for data positions. Therefore, this paper combines the competency model with the knowledge graph and, taking data jobs as an example, constructs a job knowledge graph based on recruitment information to integrate massive recruitment data.

This paper reviews competency model theory, builds a competency model for data jobs based on the iceberg model, and proposes four competency dimensions for data jobs: knowledge, skills, traits, and attitudes and values. The knowledge graph is constructed top-down: at the schema layer, an ontology library is built on the job competency model, and entities and relationships are defined for semantic modeling; at the data layer, knowledge extraction and fusion are performed on the data, and finally triples composed of specific entities and relationships are formed.

The data in this paper come from recruitment information crawled from recruitment websites, including structured data such as company and salary and unstructured data such as job requirements. Knowledge is mainly extracted from the unstructured data: the LTP word segmentation tool segments the job requirement text, and the BIOES tagging scheme is used to construct a corpus. The BERT-BiLSTM-CRF deep learning model is used for entity recognition: the BERT layer extracts rich textual features to obtain a word-granularity vector matrix, the BiLSTM layer captures the contextual semantics of each word, and the CRF layer decodes the optimal tag sequence, yielding four types of entities: knowledge, skills, traits, and attitudes and values. Relationships between entities are then extracted through dependency syntax analysis, and semantically similar words are fused by word vector similarity. Finally, the extracted knowledge units are stored and visualized in the Neo4j database.

This paper verifies the feasibility of the job knowledge graph construction process in practice and, based on the constructed knowledge graph, explores applications such as job retrieval and job recommendation. The job competency model and ontology built here offer a new approach to abstracting and conceptualizing the competency requirements of specific jobs across the industry. Establishing a job knowledge graph for data positions to address the time-consuming and inefficient retrieval of useful recruitment information in the Internet era of information explosion has both theoretical and practical significance.
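As an illustration of the corpus construction step, the following is a minimal sketch of BIOES tagging over a pre-segmented job requirement sentence; the segmentation is assumed to come from a tool such as LTP, and the labels and example spans are illustrative rather than the paper's actual annotation set.

```python
# A minimal sketch of BIOES tagging for a pre-segmented job-requirement
# sentence. The entity spans and labels (SKILL, KNOWLEDGE, ...) are
# illustrative placeholders, not the paper's actual label set.

def bioes_tag(tokens, entity_spans):
    """Assign BIOES tags given (start, end, label) spans; end is exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, label in entity_spans:
        if end - start == 1:
            tags[start] = f"S-{label}"          # single-token entity
        else:
            tags[start] = f"B-{label}"          # beginning of entity
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"          # inside of entity
            tags[end - 1] = f"E-{label}"        # end of entity
    return tags

# Example: a segmented requirement with "Python" as a skill
# and "机器 学习" (machine learning) as knowledge.
tokens = ["熟悉", "Python", "和", "机器", "学习"]
spans = [(1, 2, "SKILL"), (3, 5, "KNOWLEDGE")]
for tok, tag in zip(tokens, bioes_tag(tokens, spans)):
    print(tok, tag)
```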
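The entity recognition model described above can be sketched as follows, assuming the `transformers` and `pytorch-crf` packages; the pretrained model name, hidden size, and tag count are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a BERT-BiLSTM-CRF tagger in PyTorch.
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, lstm_hidden=256, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)      # word-granularity feature vectors
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)  # contextual semantics
        self.fc = nn.Linear(2 * lstm_hidden, num_tags)        # emission scores per tag
        self.crf = CRF(num_tags, batch_first=True)            # optimal tag sequence

    def forward(self, input_ids, attention_mask, tags=None):
        bert_out = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(bert_out)
        emissions = self.fc(lstm_out)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold BIOES sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: decode the best-scoring tag path.
        return self.crf.decode(emissions, mask=mask)
```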
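The fusion of semantically similar words can likewise be sketched as a cosine similarity comparison over word vectors; the toy vectors and the threshold below are placeholders standing in for a pre-trained embedding model.

```python
# A minimal sketch of fusing semantically similar words by word vector
# similarity; vectors and the 0.8 threshold are illustrative only.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def fuse_similar(words, vectors, threshold=0.8):
    """Map each word to a canonical form when its vector is close enough."""
    canonical = {}
    for w in words:
        match = next((c for c in canonical
                      if cosine(vectors[w], vectors[c]) >= threshold), None)
        canonical[w] = canonical[match] if match else w
    return canonical

# Toy 3-dimensional vectors standing in for real embeddings.
vectors = {
    "机器学习": np.array([0.9, 0.1, 0.0]),
    "machine learning": np.array([0.88, 0.12, 0.02]),
    "沟通能力": np.array([0.0, 0.2, 0.95]),
}
print(fuse_similar(list(vectors), vectors))
```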
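Storage of the resulting triples can be sketched with the official `neo4j` Python driver; the connection details, node labels, and relationship types are assumptions for illustration.

```python
# Minimal sketch of writing (head, relation, tail) triples into Neo4j.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

triples = [
    ("数据分析师", "REQUIRES_SKILL", "Python"),
    ("数据分析师", "REQUIRES_KNOWLEDGE", "统计学"),
]

def write_triple(tx, head, rel, tail):
    # MERGE keeps nodes unique, so repeated extractions fuse into one node.
    # Relationship types cannot be Cypher parameters, hence the string insert.
    tx.run(
        "MERGE (j:Job {name: $head}) "
        "MERGE (c:Competency {name: $tail}) "
        "MERGE (j)-[:%s]->(c)" % rel,
        head=head, tail=tail,
    )

with driver.session() as session:
    for head, rel, tail in triples:
        session.execute_write(write_triple, head, rel, tail)
driver.close()
```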