With the rapid development of network and computer technology, video entertainment of all kinds is flourishing; unfortunately, videos full of blood, violence, or pornography are proliferating at the same time. Detecting objectionable videos is a challenging task that existing technology handles poorly: most current research targets objectionable images, while recognition of video and audio content is still at an early stage. How to find objectionable videos quickly and efficiently has therefore become an urgent problem.

Since objectionable videos have distinctive color distributions and statistical characteristics, we first classify the color style of a video using the accumulative histogram of the saturation component in HSI color space. We then build skin models for the various styles so that color information is extracted more accurately. Scenes and shots are segmented in the video stream using the histogram difference and a Graph Partition Model, after which key frames are extracted. Because an objectionable video is composed of related shots set in a particular environment (a typical scene), we build a video analysis platform based on the superposition principle and multiple color spaces and use it to classify typical scenes. Finally, objectionable videos are classified according to the skin color model and facial expression.

This paper focuses on the preliminary work of objectionable video detection and resolves several basic issues. The main research contents are as follows:
1. We propose a new algorithm, the superposition principle of gray histograms, and construct a corresponding approximate superposition formula. The formula describes well the gray histogram of many video scenes formed by a background and moving objects, provides an effective method for detecting shot and scene cuts, and can in the future also support detecting moving objects against a dynamic background; we also give a reasonable verification of the superposition principle. Experiments show that the superposition formula works well in scenes with a uniform background, and that the principle can describe various combinations of scenes formed by a background and multiple objects.

2. We build a video analysis platform based on the superposition principle and multiple color spaces. Multiple color spaces, introduced while studying a probabilistic graphical view of the MeanShift algorithm, describe an object more comprehensively. While studying scene and shot boundary detection, we found that multiple color spaces are well suited to scene and shot segmentation, and we developed an analysis platform on this basis. The platform can be used to observe changes in a scene: from the statistics of the accumulative histogram, the video style and typical scenes can be classified, and the histogram difference can be used to detect scene cuts. In the near future we will improve the platform so that it can also detect scene and shot cuts automatically.

3. The video analysis platform is used to classify video color styles and typical scenes preliminarily. Through training on a large number of videos, histogram-based analysis of the differential histogram comparison yields an accurate range for each category, and the platform sets thresholds to determine the style of a video. For video scenes, we use the accumulative histograms of the R, G, B, H, and Y components to detect typical scenes in multiple color spaces.
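As an illustration of the superposition idea in contribution 1, the following is a minimal synthetic sketch (not the thesis's exact formula; the image size, gray levels, and bin count are invented for the example). The gray histogram of a frame is reconstructed from the background histogram by subtracting the histogram of the region the object covers and adding the histogram of the object itself:

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Unnormalized gray-level histogram of an 8-bit image region."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist.astype(np.float64)

# Synthetic scene: a uniform background with one bright moving object.
background = np.full((32, 32), 60, dtype=np.uint8)   # gray level 60
frame = background.copy()
frame[8:16, 8:16] = 200                              # object occupies 64 pixels

# Superposition: frame histogram = background histogram
#   - histogram of the background pixels the object covers
#   + histogram of the object itself.
covered = gray_histogram(background[8:16, 8:16])
obj = gray_histogram(frame[8:16, 8:16])
approx = gray_histogram(background) - covered + obj
```

For this uniform-background case the superposition is exact; for real scenes with texture and lighting changes it is only approximate, which is why the principle needs experimental verification as described above.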
Since the accuracy and recall of typical scene classification are still unsatisfactory, we will continue to improve the existing methods or seek new techniques to raise the accuracy of scene classification and perfect the detection of objectionable videos.
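The saturation-based color-style step described earlier can be sketched as follows. This is a minimal illustration assuming RGB frames scaled to [0, 1]; the HSI saturation formula is the standard one, and the bin count is an invented example value, not a parameter taken from this work:

```python
import numpy as np

def saturation_channel(rgb):
    """S component of HSI for an RGB image with values in [0, 1].
    Standard HSI saturation: S = 1 - 3 * min(R, G, B) / (R + G + B)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1)
    mins = rgb.min(axis=-1)
    # Guard against division by zero on pure-black pixels.
    return np.where(total > 0, 1.0 - 3.0 * mins / np.maximum(total, 1e-12), 0.0)

def accumulative_histogram(channel, bins=32):
    """Normalized accumulative (cumulative) histogram of a [0, 1] channel;
    vectors like this can be compared or thresholded to classify color style."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cum = np.cumsum(hist).astype(np.float64)
    return cum / cum[-1]
```

A pure gray frame yields zero saturation everywhere, so its accumulative histogram saturates in the first bin, while a vividly colored frame shifts mass toward the last bins; the style classifier described above thresholds such differences.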