
Head Detection Based On AdaBoost And SVM

Posted on: 2011-08-17
Degree: Master
Type: Thesis
Country: China
Candidate: S S Niu
Full Text: PDF
GTID: 2178330338478310
Subject: Communication and Information System
Abstract/Summary:
With the development of society and technology, video-based monitoring and processing has evolved from analog to digital, networked, and intelligent systems, and the monitoring procedure has progressed from manual to semi-automatic and fully automatic operation, with objects such as heads and faces now detected automatically. As an important branch of artificial intelligence and pattern classification, head detection has become a hot topic and attracts more and more researchers.

SVM (Support Vector Machine) and cascaded AdaBoost (Adaptive Boosting) are two popular target detection algorithms. SVM performs very well in classification and object detection, but it is time-consuming and cannot run in real time. AdaBoost mainly uses Haar-like and rectangular gradient features. Haar-like features describe an object through the difference of gray values between adjacent regions of the image. Such features are simple and effective for face detection but ineffective for head detection: the texture of a face image is relatively fixed, whereas a head region inevitably includes hair and some background, which leads to uncertain texture information. Rectangular gradient features, formed by combining randomly placed sub-windows, have an enormous number of possible expressions and waste a great deal of time and space, so this method also cannot meet real-time requirements. In this paper, the popular head detection algorithms are studied and analyzed, and in view of their shortcomings several methods and strategies are proposed to improve the classifier.

First, to simplify the features, an improved expression of the rectangular gradient feature is proposed that highlights the difference between edge regions and flat regions. In addition, self-grow features are added to the feature pool. A self-grow feature consists of several sub-windows that grow along the edge of an object instead of being combined at random locations, which reduces the size of the feature pool and the computational complexity.

Second, a fuzzy approach is proposed to solve the interval-division problem in the weak learning algorithm of AdaBoost. It makes the classifier more tolerant of small fluctuations in feature values and thus keeps its performance stable.

Finally, the AdaBoost algorithm inevitably produces some false alarms no matter what kind of features is used. In this paper, a cascaded SVM classifier is added after the AdaBoost classifier, using the global discrimination ability of SVM to remove the false alarms of AdaBoost. After an image has been scanned by the AdaBoost classifier, the remaining area (the correct detection regions plus the false alarms) is very small, so detecting these few sub-regions consumes little time.
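The abstract's description of Haar-like features as "the difference of the gray value of adjacent regions" can be illustrated with the standard integral-image computation; the sketch below is a minimal, generic example of that operation, not the thesis's exact feature set, and the rectangle coordinates and 24x24 window size are illustrative assumptions.

```python
# Sketch: a two-rectangle Haar-like feature computed with an integral image,
# i.e. the gray-level difference between two adjacent regions.
import numpy as np

def integral_image(gray):
    """Cumulative sums over rows and columns so any rectangle sum is O(1)."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if (x > 0 and y > 0) else 0
    return a - b - c + d

def two_rect_feature(gray, x, y, w, h):
    """Haar-like 'edge' feature: left rectangle minus the adjacent right rectangle."""
    ii = integral_image(gray.astype(np.int64))
    left = rect_sum(ii, x, y, w, h)
    right = rect_sum(ii, x + w, y, w, h)
    return left - right

# Example on a random 24x24 patch (a typical training-window size).
patch = np.random.randint(0, 256, (24, 24))
print(two_rect_feature(patch, 4, 4, 8, 16))
```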
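The "fuzzy" interval division in the weak learner is described only at a high level above; the following toy example is my own hedged construction of that idea (not the thesis's formulation): replacing a decision stump's hard threshold with a smooth membership function so that small fluctuations around the boundary change the vote only gradually. The threshold and band width used below are arbitrary.

```python
# Toy comparison of a hard decision stump vs. a soft ("fuzzy") stump.
import numpy as np

def hard_stump(feature_value, threshold, polarity=1.0):
    """Conventional AdaBoost weak learner: a hard +1 / -1 split at the threshold."""
    return polarity * (1.0 if feature_value >= threshold else -1.0)

def fuzzy_stump(feature_value, threshold, width, polarity=1.0):
    """Soft split: the vote moves smoothly from -1 to +1 over a band of
    roughly `width` around the threshold (tanh membership)."""
    z = (feature_value - threshold) / width
    return polarity * np.tanh(z)

# A small fluctuation near the threshold flips the hard stump completely
# but only nudges the fuzzy one.
for v in (9.9, 10.1):
    print(v, hard_stump(v, threshold=10.0),
          round(fuzzy_stump(v, threshold=10.0, width=2.0), 3))
```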
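The final contribution, an SVM verification stage cascaded after the AdaBoost detector, can be sketched as a two-stage pipeline. This is a minimal illustration assuming a pre-trained OpenCV cascade file ("head_cascade.xml") and a pre-trained scikit-learn SVM ("head_svm.pkl") exist; the file names, window size, and raw-pixel feature step are assumptions for illustration, not the thesis's exact configuration.

```python
# Sketch: AdaBoost cascade proposes candidate windows, SVM removes false alarms.
import cv2
import joblib
import numpy as np

cascade = cv2.CascadeClassifier("head_cascade.xml")  # stage 1: fast AdaBoost cascade
svm = joblib.load("head_svm.pkl")                     # stage 2: SVM verifier

def detect_heads(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # The cascade scans the whole image and returns candidate windows:
    # mostly correct detections plus a small number of false alarms.
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    confirmed = []
    for (x, y, w, h) in candidates:
        # Only the few surviving sub-windows reach the slower SVM stage,
        # so the extra verification cost stays small.
        patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).flatten()
        patch = patch.astype(np.float32) / 255.0
        if svm.predict(patch.reshape(1, -1))[0] == 1:
            confirmed.append((x, y, w, h))
    return confirmed
```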
Keywords/Search Tags:AdaBoost, SVM, feature extraction, gradient feature, self-grow gradient feature