A large number of images containing violent and politically sensitive content circulate on the Internet today. These images continually erode the online environment and cause serious harm to many groups of people. With the rise of social platforms and short-video applications, the dissemination of such content has accelerated. The state has continued to crack down on the spread of violent and politically sensitive information and has continuously promulgated and improved the relevant laws and regulations. At present, the detection of these images still relies mainly on manual review, and the enormous labor and time costs constrain detection capacity. With the continued development of computer hardware and deep learning technology, it has gradually become feasible to introduce deep learning to partially replace manual inspection of sensitive images. In the field of violent and politically sensitive image detection, however, the related techniques are still immature, and model performance leaves considerable room for improvement.

Therefore, this paper proposes an image sensitive-information detection technique based on full-element features, namely the multi-information identification network MIDNet, which classifies violent and politically sensitive images. MIDNet makes full use of both the global information and the local target information of an image, i.e., its full-element information, thereby improving its ability to classify sensitive images. The network consists of four modules. The feature extraction module extracts texture features, edge features, and stable SIFT features from the image and fuses them. The sensitive area detection module extracts the category and location of sensitive objects in the image. The global feature classification module predicts the category of the whole image and outputs a prediction vector. The full-element feature processing module processes the full-element information produced by the other modules, converts it into a corresponding weight vector, and performs the final classification. Within the sensitive area detection module, this paper proposes an SPN layer to replace the previous Res layer; within the global feature classification module, a downsampling module is introduced; and the full-element feature processing module is newly designed.

Experiments were carried out on the self-built dataset SID, which contains about 22,000 images. The results show that the feature extraction module provides additional feature information to the model, improving the performance of the sensitive area detection module and the global feature classification module by 0.5% and 0.6%, respectively. In the parameter selection for the full-element feature processing module, 0.4 is chosen as the best value. Finally, in comparison experiments with other mainstream models, MIDNet achieves a classification accuracy of 86.6%, about 3.4% higher than DenseNet-121 and better than the other classification models.
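The four-module composition described above can be illustrated with the following PyTorch-style sketch. It is only a structural illustration assembled from the module roles stated in this section: the class names, layer sizes, and the weighted fusion rule are assumptions made for illustration, not the actual MIDNet implementation.

```python
import torch
import torch.nn as nn

class MIDNetSketch(nn.Module):
    """Illustrative layout of the four modules described above.
    All submodule internals, dimensions, and the fusion rule are placeholders."""

    def __init__(self, num_classes=2, fusion_weight=0.4):
        super().__init__()
        # Feature extraction module: texture / edge / SIFT-style features,
        # stood in for here by a small convolutional stem (placeholder).
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Sensitive area detection module: would output category and location
        # of sensitive objects; reduced here to a pooled feature vector.
        self.area_detection = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global feature classification module: predicts a class vector
        # from the whole image.
        self.global_classifier = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )
        # Full-element feature processing module: combines the information
        # from the other modules into a final prediction.
        self.full_element = nn.Linear(64 + 64 + num_classes, num_classes)
        # 0.4 mirrors the parameter reported above; the fusion rule itself is assumed.
        self.fusion_weight = fusion_weight

    def forward(self, x):
        feats = self.feature_extraction(x)
        area = self.area_detection(x)
        global_pred = self.global_classifier(x)
        fused = torch.cat([feats, area, global_pred], dim=1)
        # Weighted combination of the full-element branch and the global
        # prediction (an assumed stand-in for the weight-vector mechanism).
        return self.fusion_weight * self.full_element(fused) + \
               (1 - self.fusion_weight) * global_pred
```

The weight of 0.4 corresponds to the parameter selected in the experiments above; how the weight vector is actually formed and applied inside MIDNet is not specified in this section, so the combination shown here should be read only as a placeholder.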