Eye tracking is a technology that measures a subject's gaze direction and fixation by identifying and recording eye movement trajectories. With the development of information technology, eye tracking has been widely applied in various fields, such as early diagnosis of mental illness, human-computer interaction, clinical medicine, driver fatigue detection, multimedia technology, and scientific research. Recently, advances in and the growing prevalence of eye tracking technology have also led to lighter and more accessible eye movement acquisition devices. As a common acquisition device that records the characteristics of human eye movement trajectories, the eye tracker provides the hardware support for eye tracking technology. Common eye trackers can be divided into wearable and desktop types, which suit different application scenarios. With this hardware support, eye tracking research now focuses on high accuracy and stability, and researchers have proposed various eye tracking methods for different hardware devices and application scenarios. However, the performance of existing eye tracking algorithms is affected by various factors that can lead to poor accuracy and stability, such as changes in external illumination, pupil reflection, and pupil occlusion. Additionally, complex 3D eye tracking models require a large amount of computation, making real-time eye tracking and gaze point prediction challenging. To address these issues, this paper proposes an accurate, fast, and robust eye tracking algorithm based on wearable eye tracking devices. The research content and technological innovations of this paper are as follows:

(1) This paper proposes a wearable eye tracking device as the hardware basis of the eye tracking system. To address the low robustness of subsequent algorithms caused by hardware conditions and environmental factors that may corrupt the collected pupil images, this paper presents a pupil image preprocessing algorithm based on image enhancement and a pre-trained iris segmentation model. To combat the low contrast of images captured by high-speed cameras, this paper compares various image enhancement algorithms and identifies global histogram equalization as the optimal one, mitigating the influence of illumination on image quality. Additionally, to address the impact of non-pupil edge points on pupil ellipse fitting, this paper employs a pre-trained model based on the YOLO deep network, with the network structure optimized to improve training and prediction efficiency; it effectively filters out edge points belonging to eyelids, eyelashes, and similar structures, enhancing the performance of subsequent algorithms.

(2) To address the problem of environmental illumination and pupil occlusion degrading the accuracy and robustness of pupil recognition, this paper proposes an adaptive pupil extraction algorithm based on an improved RANSAC. The proposed algorithm improves robustness and speed while maintaining recognition accuracy, extending the traditional RANSAC algorithm with edge point set selection and completion and with adaptive initial point selection. First, edge points are determined by computing image pixel gradient changes, and line detection and edge completion are performed on the edge point set to enhance robustness in the presence of pupil occlusion. Additionally, the initial points for fitting are selected adaptively based on the image centroid, reducing the number of iterations and improving efficiency. The proposed algorithm is evaluated against traditional RANSAC and the DeepVOG deep learning algorithm on the public CASIA and IITD databases, achieving a high fitting accuracy of 97%. It also offers a faster processing time (12.01 ms) than DeepVOG while maintaining the same accuracy. Moreover, it demonstrates better robustness in scenarios involving pupil occlusion, with a recognition rate of 83.8% on images with occluded pupils.

(3) To cater to different eye tracking scenarios, we propose two eye tracking models and evaluate their accuracy and precision at varying depths. The first model, a gaze prediction model based on multiple regression, approximates the mapping between the pupil center and the gaze point by solving a polynomial transformation function; it is suitable for predicting the gaze point on a plane at a specific depth in space. The second model, a 3D gaze prediction model based on the 3D visual axis vector, first transforms the pupil image using an affine transformation; the double-sphere model of the human eye is then built in the 3D coordinate system to calculate the 3D optical axis vector, and finally the intersection of the left and right eyes' visual axis vectors in space is calculated as the 3D fixation point. To address the unknown depth information encountered when constructing the 3D eye-fixation model, this paper proposes a two-dimensional fixation depth correction and prediction model based on the 3D visual axis vector.

In summary, this paper analyzes and models the pupil images extracted by the wearable eye tracker, completes the prediction pipeline from pupil image to fixation, optimizes for the common problems of eye tracking, and addresses the impact of external lighting conditions, pupil occlusion, blinking, and other factors on algorithm robustness. The system can accurately estimate the gaze point in 3D space and predict the subject's current gaze position in real time.
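The global histogram equalization chosen in (1) is a standard technique; the NumPy sketch below is a generic illustration of that technique, not the thesis's implementation.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Standard equalization mapping: stretch the CDF over the full 0-255 range.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]

# Usage on a synthetic low-contrast image (gray levels confined to 100-140),
# mimicking the dim output of a high-speed camera
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
equalized = equalize_hist(low_contrast)
```

After equalization the occupied gray levels are spread across the full 0-255 range, which is what counters the low contrast described above.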
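The improved-RANSAC pupil extraction of (2) can be illustrated with a simplified sketch. The conic fit via an SVD null space and the centroid-weighted sampling below are stand-in assumptions for the thesis's edge completion and adaptive initial point selection, whose exact formulations are not given here.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 (SVD null space)."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(M)[2][-1]

def conic_center(c):
    """Center of the conic, where its gradient vanishes."""
    A, B, C, D, E, _ = c
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])

def normalized_residual(c, pts):
    """Scale-invariant algebraic distance of points to the conic."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.abs(M @ c) / (np.linalg.norm(c[:5]) + 1e-12)

def ransac_ellipse(pts, iters=200, tol=0.02, seed=0):
    rng = np.random.default_rng(seed)
    # Assumed stand-in for centroid-based initial point selection: sampling is
    # biased toward edge points near the centroid of the edge point set.
    w = 1.0 / (1.0 + np.linalg.norm(pts - pts.mean(axis=0), axis=1))
    w /= w.sum()
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), size=5, replace=False, p=w)]
        c = fit_conic(sample)
        inliers = int((normalized_residual(c, pts) < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = c, inliers
    return best, best_inliers

# Usage: noisy points on an ellipse centered at (3, 2), plus uniform outliers
# playing the role of eyelid/eyelash edge points
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
ellipse = np.column_stack([3 + 2 * np.cos(theta), 2 + np.sin(theta)])
ellipse += rng.normal(scale=0.002, size=ellipse.shape)
outliers = rng.uniform([0, 0], [6, 4], size=(15, 2))
points = np.vstack([ellipse, outliers])
conic, n_inliers = ransac_ellipse(points)
center = conic_center(conic)
```

The consensus step rejects the outliers, so the recovered center stays close to the true pupil center even with 20% contaminated edge points.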
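The multiple-regression gaze mapping of the first model in (3) amounts to fitting a polynomial from pupil-center coordinates to gaze coordinates over a calibration grid. A minimal sketch follows, assuming a second-order polynomial and a synthetic 9-point calibration; both are illustrative choices, not the thesis's exact setup.

```python
import numpy as np

def design_matrix(pupil_xy):
    """Second-order polynomial terms of the pupil center (px, py)."""
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px * px, py * py])

def calibrate(pupil_xy, gaze_xy):
    """Least-squares fit of one coefficient vector per gaze coordinate."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(pupil_xy), gaze_xy, rcond=None)
    return coeffs  # shape (6, 2): columns map to gaze x and gaze y

def predict(coeffs, pupil_xy):
    return design_matrix(pupil_xy) @ coeffs

# Usage: synthetic pupil centers and gaze targets generated from a known
# polynomial, so calibration should recover the mapping essentially exactly
rng = np.random.default_rng(2)
pupil = rng.uniform(-1, 1, size=(9, 2))
true_coeffs = rng.normal(size=(6, 2))
gaze = design_matrix(pupil) @ true_coeffs
fitted = calibrate(pupil, gaze)
max_err = float(np.abs(predict(fitted, pupil) - gaze).max())
```

Because the mapping is fitted on one calibration plane, predictions are only valid at that depth, which is exactly the limitation the 3D visual-axis model in (3) is introduced to overcome.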