
Semantic Based Similarity Analysis Of Human Video

Posted on: 2019-06-20
Degree: Master
Type: Thesis
Country: China
Candidate: M Y Mu
GTID: 2348330542491714
Subject: Computer application technology
Abstract/Summary:
Video similarity computation quantifies how similar the contents of two videos are and judges content similarity from the resulting score. It provides a basis for content-based video retrieval and for target tracking in computer vision. The similarity of two videos can be approximated by the similarity of their key frames. Existing key frame extraction algorithms suffer from several problems. First, the K value and the cluster centers of the K-means algorithm must be preset, and an unreasonable presetting degrades the experimental results. Second, traditional algorithms lose a large amount of information. Third, a video carries a great deal of semantic information, so extracting only low-level information and computing similarity on it introduces a larger error. This paper introduces an adaptive key frame extraction algorithm, a low-level semantic video similarity computation method that incorporates a motion energy map, and a high-level semantic video similarity computation method. The specific work is as follows:

(1) A self-adaptive video key frame extraction algorithm combining color clustering and content is proposed. First, the video is segmented into shots, and the color and texture information of the three channels is extracted and clustered. For the clustered video frames, the local maxima and minima are computed, and the indices of the frames that match, compared with the average value, are recorded. The information entropy of each frame is also computed, and the indices of the frames with the largest entropy are recorded; these two classes of frames together are taken as the key frames. This method effectively reduces the redundancy of the key frames extracted by existing algorithms and the content loss caused by traditional algorithms.

(2) A preprocessing method for the extracted key frames is proposed. An improved genetic algorithm, combined with an improved linear stretch, is used to segment each key frame into target and background. This reduces the redundancy of the image, largely preserves the content of the target, and shortens the time of subsequent feature extraction.

(3) A low-level semantic video similarity computation method based on color, texture, and motion features is proposed. Color features are extracted in the HSV color space, texture features are extracted with the MEM-LBP operator, and the depth information of the depth sequence is fully exploited to build a depth motion energy map, which is then encoded with an improved LBP algorithm.

(4) A deep-learning-based high-level semantic video similarity computation method is proposed. Using an improved VGGNet deep learning framework, the second-to-last layer of the network, i.e., the last feature extraction layer, is used to represent the video features. The improvement to VGGNet-16 reduces the fully connected layer from 4096 to 1024 dimensions, which cuts the number of parameters, reduces the storage requirement, and speeds up the computation.
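The entropy criterion in contribution (1) can be illustrated with a minimal sketch. The snippet below, assuming OpenCV and NumPy are available, only ranks frames by the information entropy of their intensity histograms; the shot segmentation, color/texture clustering, and local-extrema matching described above are omitted, and the function names and the top_k parameter are illustrative assumptions, not the thesis's exact procedure.

```python
import cv2
import numpy as np

def frame_entropy(frame):
    """Shannon entropy of the grayscale intensity histogram of one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_key_frames(video_path, top_k=10):
    """Pick the frames whose information entropy is largest (a stand-in for
    the combined clustering + entropy criterion described in the abstract)."""
    cap = cv2.VideoCapture(video_path)
    entropies = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        entropies.append((frame_entropy(frame), idx))
        idx += 1
    cap.release()
    entropies.sort(reverse=True)
    return sorted(i for _, i in entropies[:top_k])
```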
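The genetic-algorithm segmentation in contribution (2) is not reproduced here; the following sketch only shows a generic percentile-based contrast stretch as a stand-in for the "improved linear stretch" preprocessing, which is an assumption about its intent rather than the thesis's actual formula.

```python
import numpy as np

def linear_stretch(img, low_pct=2, high_pct=98):
    """Stretch pixel intensities so that the chosen percentiles map to the
    full 0-255 range, increasing contrast before segmentation.
    (Illustrative stand-in, not the thesis's improved linear stretch.)"""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    out = np.clip((img.astype(np.float32) - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return (255 * out).astype(np.uint8)
```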
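The depth motion energy map in contribution (3) can be understood as accumulating frame-to-frame changes of a depth sequence. The sketch below assumes the sequence is already loaded as a NumPy array of shape (T, H, W) and uses a plain absolute-difference accumulation followed by a standard uniform LBP from scikit-image; the thesis's improved LBP coding and the exact MEM-LBP operator are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def depth_motion_energy_map(depth_frames):
    """Accumulate absolute inter-frame differences of a depth sequence
    (T, H, W) into a single (H, W) motion energy map."""
    diffs = np.abs(np.diff(depth_frames.astype(np.float32), axis=0))
    return diffs.sum(axis=0)

def mem_lbp_histogram(depth_frames, points=8, radius=1):
    """Encode the motion energy map with a standard uniform LBP and return
    its normalized histogram as a combined motion/texture descriptor."""
    mem = depth_motion_energy_map(depth_frames)
    # Rescale to 8-bit range before computing LBP codes.
    mem = (255 * (mem - mem.min()) / (mem.max() - mem.min() + 1e-8)).astype(np.uint8)
    codes = local_binary_pattern(mem, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / (hist.sum() + 1e-8)
```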
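A hedged PyTorch sketch of the idea in contribution (4): replace the last 4096-dimensional fully connected block of VGG-16 with a 1024-dimensional one and compare two frames via the cosine similarity of the resulting features. The layer indices, weight initialization, and the use of cosine similarity are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGG16Feature(nn.Module):
    """VGG-16 whose last fully connected block outputs 1024-d features
    instead of 4096-d, used here purely as a feature extractor."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=None)  # pretrained weights could be loaded instead
        self.features = vgg.features
        self.avgpool = vgg.avgpool
        # Keep the first FC block (25088 -> 4096), replace the second with 4096 -> 1024.
        self.classifier = nn.Sequential(
            *list(vgg.classifier.children())[:3],   # Linear(25088, 4096), ReLU, Dropout
            nn.Linear(4096, 1024),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

def frame_similarity(model, frame_a, frame_b):
    """Cosine similarity of the 1024-d features of two preprocessed frames,
    each shaped (1, 3, 224, 224)."""
    with torch.no_grad():
        fa, fb = model(frame_a), model(frame_b)
    return nn.functional.cosine_similarity(fa, fb).item()
```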
Keywords/Search Tags:Similarity calculation, Key frame extraction, Semantic feature, Deep learning