
Research On The Key Technology Of Human Eye And Visual Information Detection

Posted on: 2016-01-04    Degree: Doctor    Type: Dissertation
Country: China    Candidate: M X Yu    Full Text: PDF
GTID: 1108330503953419    Subject: Control Science and Engineering
Abstract/Summary:
The eye is the most important sensory organ of the human body. Within the face, the eyes are more salient and more stable features than the mouth or nose, and research on the eye and its movements is essential for understanding human visual information. This research addresses human eye and visual information detection through five tasks: eye detection, iris center localization, gaze tracking, eye fixation identification, and human-robot interaction (HRI) based on gaze gestures. The main contributions are as follows:

1. Existing eye detection methods do not fully handle eyes with glasses, different eye states (e.g., semi-closed, closed, and squinting eyes), rotation of the frontal face, or illumination changes, so a hybrid eye detection method is proposed. Because the variation of grey intensity in the eye regions is more pronounced than in other facial regions, it is used to construct a grey intensity variance filter (EVF). The EVF eliminates most false (non-eye) regions and retains candidate eye regions. A trained support vector machine (SVM) classifier then determines the precise eye location among the candidates. During SVM training, principal component analysis (PCA) is used for dimensionality reduction and feature extraction to enhance SVM performance, and the SVM parameters are optimized with a genetic algorithm (GA). The proposed method is evaluated on the IMM, BioID, and FERET face databases, achieving an average correct eye detection rate of 97%.

2. Existing iris center localization methods do not fully consider the cases in which the iris is turned toward the left or right corner of the eye region, so an iris center detection method based on active edge detection and edge point prediction is proposed. First, a hybrid projection function estimates the rough iris center, and local binarization with ten-level quantization locates the left and right eye corners. An iris edge selection model is then established from the geometric relationship between the rough iris center and the two eye corners, and the proposed active edge detection algorithm extracts iris edge points while effectively removing noise points caused by eyelashes and eyelids. When the iris rolls toward the left or right eye corner, too few edge points on one side lie near the upper and lower vertices of the true ellipse, and ellipse fitting fails; an iris edge point prediction algorithm is proposed to solve this problem. Evaluated on different iris positions within the eye region, the method achieves a global average accuracy of 94.3% in localizing the iris center.

3. A non-intrusive gaze tracking system using a single camera and a visible light source is proposed. The system consists of three parts: real-time eye detection, feature extraction, and gaze estimation. The real-time eye detection method combines appearance and features to locate eye regions: candidate eye regions are first detected with double-threshold binarization, glint shape filtering, and position filtering, and the accurate eye regions are then localized with independent component analysis (ICA). The proposed method is robust to different face poses, varying illumination, glasses, and large head movements. Experimental results on 2000 real-time frames show a correct eye detection rate of 98.63%.
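As a rough illustration only, and not the thesis implementation, the following Python sketch shows one plausible form of the double-threshold binarization and glint shape filtering step described above. The OpenCV calls are standard, but the function name, thresholds, and shape limits are assumed values.

```python
# Hypothetical sketch of candidate glint detection by double-threshold
# binarization plus blob shape filtering, assuming a grayscale frame from a
# visible-light camera. All thresholds and shape limits are assumed values.
import cv2


def find_glint_candidates(gray, low_thr=180, high_thr=255,
                          min_area=2, max_area=80, max_elongation=0.6):
    """Return centroids of small, roughly circular bright blobs (possible glints)."""
    # Double-threshold binarization: keep pixels whose intensity lies in [low, high].
    band = cv2.inRange(gray, low_thr, high_thr)

    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue                      # too small or too large to be a glint
        (cx, cy), (w, h), _ = cv2.minAreaRect(c)
        if min(w, h) == 0:
            continue
        elongation = 1.0 - min(w, h) / max(w, h)
        if elongation <= max_elongation:  # glints are roughly circular
            candidates.append((int(cx), int(cy)))
    return candidates


# Usage (assumed file name):
# gray = cv2.imread("face_frame.png", cv2.IMREAD_GRAYSCALE)
# print(find_glint_candidates(gray))
```

In the system described above, the position filtering and ICA verification steps would then prune these candidates further before feature extraction.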
The 2D gaze estimation method does not depend on hardware or on eyeball physiological parameters; therefore, 2D gaze estimation with interpolation is adopted to address the decrease in accuracy caused by head movement. In addition, a five-point calibration correction algorithm is proposed to compensate for the angular deviation between the optical axis and the visual axis. Because the obtained gaze points exhibit a certain amount of jitter on the screen, velocity-based and dispersion-based identification algorithms are combined to remove noise and improve the robustness of the classification. Gaze estimation is performed both under head motion and without head motion; the experimental results show average accuracies of 0.54° and 0.69°, respectively.

4. Because it is not easy to determine fixations from gaze points automatically, a spatial-temporal trajectory clustering algorithm for eye fixation identification is presented. The algorithm requires two parameters, Eps and MinTime. The MinTime threshold is set according to the nature of the task, while the optimum Eps threshold is derived automatically from the data set with the aid of the Gap Statistic (a simplified sketch follows the abstract). Compared with four other fixation identification algorithms, the proposed algorithm demonstrates equal or better classification performance.

5. HRI based on gaze is presented. Because current task selection strategies suffer from several disadvantages, such as the Midas touch problem, this research adopts gaze gestures as the task selection strategy for teleoperating a drone. The command design for drone control and the task design are proposed. The experimental results show that the designed tasks are completed well with the gaze gesture strategy, and that there is no significant difference in mental workload between gaze gestures and the keyboard. Hence, the gaze gesture strategy has great potential as an additional HRI modality for agent teleoperation.
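For point 4, the following simplified, hypothetical sketch groups gaze samples into fixations with an I-DT-style dispersion test using two parameters analogous to Eps and MinTime. It is not the thesis algorithm, which derives the Eps threshold automatically via the Gap Statistic; all names and parameter values here are assumed.

```python
# Hypothetical sketch: group consecutive gaze samples that stay within a
# spatial radius `eps` for at least `min_time` seconds into one fixation.
import numpy as np


def identify_fixations(times, xs, ys, eps=30.0, min_time=0.1):
    """times in seconds, xs/ys in pixels; returns (t_start, t_end, cx, cy) tuples."""
    pts = np.column_stack([xs, ys]).astype(float)
    fixations, start, n = [], 0, len(pts)
    while start < n:
        # Smallest window starting at `start` that spans at least min_time.
        end = start + 1
        while end < n and times[end] - times[start] < min_time:
            end += 1
        if end >= n:
            break
        window = pts[start:end + 1]
        if np.all(np.linalg.norm(window - window.mean(axis=0), axis=1) <= eps):
            # Compact enough to be a fixation: extend while it stays within eps.
            while end + 1 < n:
                cand = pts[start:end + 2]
                if np.all(np.linalg.norm(cand - cand.mean(axis=0), axis=1) <= eps):
                    end += 1
                else:
                    break
            cx, cy = pts[start:end + 1].mean(axis=0)
            fixations.append((times[start], times[end], float(cx), float(cy)))
            start = end + 1
        else:
            start += 1  # window too dispersed: this sample belongs to a saccade
    return fixations


# Usage with synthetic 60 Hz gaze samples that jump between two targets (assumed values):
# t = np.arange(0, 1, 1 / 60)
# x = np.where(t < 0.5, 400, 800) + np.random.randn(len(t)) * 3
# y = np.full_like(t, 300) + np.random.randn(len(t)) * 3
# print(identify_fixations(t, x, y))
```

The grouping is sensitive to the choice of the spatial threshold, which is why a data-driven selection such as the Gap Statistic used in the thesis is attractive.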
Keywords/Search Tags: human eye and visual information detection, eye detection, iris center localization, fixations identification, gaze gestures, human-robot interaction