The main purpose of Knowledge Tracing (KT) is to model the learning process of students in order to predict their current knowledge state and to recommend personalized learning materials. However, the sparsity of real-world educational data, the length of learning-interaction sequences, and the complex forgetting mechanisms of the human brain pose great challenges to modeling learning trajectories. To address these three problems, this paper proposes two new knowledge tracing models based on graph embedding and attention mechanisms, aiming to improve the prediction of learners' future performance by fully extracting the dependency information in learning-interaction data.

· A Graph-based Attentive Knowledge-Search Model for Knowledge Tracing (GASKT). The model represents problems and skills as two different types of nodes and captures higher-order information between nodes through embedding propagation learned with R-GCN. An LSTM models the student's knowledge state, and an attention mechanism compensates for the LSTM's weakness in capturing long-term dependencies, so that the student's future learning ability can be assessed more accurately. The model adds two forgetting factors to the basic attention formula, the time distance and the past trial counts, thereby fully accounting for the forgetting mechanism.

· A Graph-based and Transformer-based Model for Knowledge Tracing (GTRKT). Although the LSTM, as an RNN variant, alleviates the vanishing-gradient and long-term-dependency problems of RNNs to some extent, it still struggles to model the student's knowledge state because a student's learning interactions form a very long sequence; we therefore use a Transformer to model the student's learning history. Unlike other Transformer-based knowledge tracing models, we concatenate the question embeddings and skill embeddings obtained from R-GCN as the input to the Transformer encoder and combine them with the learners' forgetting behavior, allowing the model to learn the relationships between different exercises efficiently.

Both proposed models have been extensively evaluated on several publicly available datasets, and the results show that GASKT and GTRKT outperform current mainstream KT models and offer higher interpretability.
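The forgetting-aware attention used in GASKT can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the decay coefficients `theta` and `phi` and the precise way the two forgetting factors enter the score are assumptions made here for concreteness.

```python
import numpy as np

def forgetting_attention(q, K, V, time_dist, trial_counts, theta=0.1, phi=0.05):
    """Scaled dot-product attention over past interactions with two
    hypothetical forgetting factors: scores decay with the time distance
    to each interaction and are reinforced by its past trial counts."""
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)                     # base attention scores
    scores = scores - theta * time_dist             # penalize temporally distant interactions
    scores = scores + phi * np.log1p(trial_counts)  # more practice -> stronger memory trace
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over past interactions
    return weights @ V                              # forgetting-aware knowledge-state readout
```

With `theta = 0`, this reduces to ordinary scaled dot-product attention; the two extra terms let temporally distant, rarely practiced interactions contribute less to the predicted knowledge state.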
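The construction of the GTRKT encoder input can be sketched as below, assuming lookup tables of R-GCN-trained question and skill embeddings indexed by ID; the function name and dimensions are hypothetical.

```python
import numpy as np

def build_encoder_input(question_emb, skill_emb, q_ids, s_ids):
    # question_emb: (num_questions, d_q) R-GCN question embeddings
    # skill_emb:    (num_skills, d_s)    R-GCN skill embeddings
    # q_ids, s_ids: a student's interaction sequence of length T
    q = question_emb[q_ids]                 # (T, d_q)
    s = skill_emb[s_ids]                    # (T, d_s)
    # Each interaction becomes one token of width d_q + d_s
    # that is fed to the Transformer encoder.
    return np.concatenate([q, s], axis=-1)  # (T, d_q + d_s)
```

Concatenation keeps the question-specific and skill-level signals in separate coordinates of each token, so the encoder's self-attention can relate exercises through either view.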