
Research On 2D Passive Blind Image Forensics Using Characteristics Of Discrete Wavelet Transform Coefficients

Posted on: 2013-02-11    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Michael George Masangala ZIMBA    Full Text: PDF
GTID: 1228330395985278    Subject: Computer application technology
Abstract/Summary:
In this digital era, digital visual media represent one of the principal means of communication, and digital videos and images have become the main information carriers. Currently, the mainstream media, courtroom exhibits, fashion magazines, scientific journals, political campaign tools and the Internet are all experiencing increasing usage, hosting and transmission of digital visual data in the form of digital images. However, the reliability of digital images has lately been questioned, mainly because visual imagery is experiencing contradictory technological developments. On the one hand, the development of user-centric and attractive technologies such as Web 2.0, which provide user-friendly and user-generated facilities such as blogs, wikis, and social networks, has resulted in increased usage, hosting and transmission of digital images; it has become more affordable for the average computer user to store and distribute digital images. On the other hand, the availability of powerful image processing tools such as Adobe Photoshop has enabled the average computer user to doctor digital images with increasing ease and sophistication.

Currently, there are numerous ways of maliciously forging a digital image. Among the most common are 2D region duplication, 2D image splicing and 3D computer graphic rendering. These common kinds of image attack are briefly described as follows.

Region Duplication: This is also known as region cloning or copy-move image forgery (CMIF). In a region duplication forgery, a part of an image is copied and then pasted at a different location within the same image. Such tampering is usually done either to hide some image details, in which case a background region is duplicated, or to add more details, in which case at least one object is cloned.
Only one image is involved in a region duplication forgery.

2D Image Splicing: This is also known as image compositing. In an image splicing forgery, a part of an image is copied and then pasted at a different location within the same image or in a different image altogether. At least two images are involved in an image splicing forgery.

3D Computer Graphic Rendering: In a 3D computer graphic rendering forgery, a photorealistic image is synthesized by augmenting a computer graphic with color and texture. The photorealistic image is given the aggregated appearance of photographs taken under multiple views and lightings.

It is clear that the need for passive and blind image forensics (PBIF) methods to automatically assess the authenticity of digital images cannot be overemphasized. PBIF methods assess the authenticity of digital images in the absence of embedded schemes such as watermarks or signatures. In this dissertation, we propose four new, effective and non-complex PBIF methods as automated solutions to the following kinds of image tampering.

(1) 2D region duplications in which the duplicated image regions have been affected by minor variations due to additive noise or lossy compression, where both the signal-to-noise ratio (SNR) and the compression quality are high.

The primary task of a CMIF detection algorithm is to determine whether a given image contains cloned regions without any prior knowledge of the shape and location of the copied regions. An obvious approach to accomplishing such a task is to exhaustively compare every possible pair of regions; however, such an approach is prohibitively complex. The proposed PBIF solution first performs the discrete wavelet transform (DWT) of the whole suspicious image and extracts only the low-frequency subband to approximate the image. The DWT is necessary to reduce the dimension of the image.
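The dimension-reduction step can be sketched in code. The following is a minimal illustration, not the dissertation's implementation: a one-level Haar DWT low-frequency (LL) subband, which halves each image dimension and so reduces the pixel count by a factor of 4.

```python
import numpy as np

def haar_ll_subband(image):
    """One-level Haar DWT low-frequency (LL) subband: each coefficient
    is the scaled average of a 2x2 pixel block, halving both dimensions."""
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    img = image[:h, :w].astype(float)
    # Sum the four pixels of every 2x2 block and divide by 2, matching
    # the orthonormal Haar filters (LL = (a + b + c + d) / 2).
    ll = (img[0::2, 0::2] + img[0::2, 1::2] +
          img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
    return ll

# Example: a 4x4 image reduces to a 2x2 subband.
image = np.arange(16, dtype=float).reshape(4, 4)
print(haar_ll_subband(image).shape)  # (2, 2)
```

A production implementation would normally use a wavelet library (e.g. PyWavelets' `dwt2`) rather than hand-rolled filtering; the point here is only how the LL subband approximates the image at a quarter of the size.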
A fixed-size window is then slid over the subband, pixel by pixel, extracting a feature vector at each pixel location. Principal component analysis by eigenvector decomposition (PCA-EVD) is subsequently performed on the extracted features before they are lexicographically sorted for efficient similarity comparison. Principal component analysis (PCA) is a well-known technique for multivariate analysis. The core idea of PCA is to reduce the dimensionality of a data set with a large number of interrelated variables while retaining the original variation of the data as much as possible. In addition, PCA-EVD not only reduces the dimension of the feature vectors but also removes minor variations in the DWT coefficients. A shift vector approach is used to filter out matching blocks which are not connected into possible duplicated regions. A shift vector is an ordered pair of the differences between the coordinates of a copied-and-relocated object or region and those of the original object or region in the image.

The proposed PBIF method is not only non-complex but also robust to weak attacks: additive noise as long as the SNR is above 24 dB, and JPEG compression as long as the quality is above 70. Furthermore, the major steps of the proposed PBIF algorithm are illustrated through a simplified example involving a toy image, which enhances the explanation of the algorithm.

(2) 2D region duplications in which the duplicated image regions have been affected by major variations due to additive noise or lossy compression, where both the SNR and the compression quality are low.

Our focus is still on detecting 2D region cloning, or CMIF; however, we intend to enhance the detection of CMIF even in images where the signal is heavily affected by additive noise or lossy compression. Duplicated regions affected by major variations due to additive noise or lossy compression can only be detected by PBIF methods which extract more robust features.
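The PCA-EVD matching pipeline described above can be sketched as a toy implementation. All parameters here (block size, number of retained components, match tolerance, vote threshold) are illustrative assumptions, not the dissertation's settings:

```python
import numpy as np
from collections import Counter

def detect_duplicates(subband, block=4, keep=2, min_count=3):
    """Toy copy-move detector: slide a block x block window, project each
    block onto its top principal components (PCA via eigenvector
    decomposition), lexicographically sort the reduced vectors, and
    accumulate shift vectors between adjacent matching blocks."""
    h, w = subband.shape
    feats, pos = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            feats.append(subband[i:i+block, j:j+block].ravel())
            pos.append((i, j))
    X = np.asarray(feats, dtype=float)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    proj = Xc @ vecs[:, ::-1][:, :keep]     # keep the top components
    order = np.lexsort(proj.T[::-1])        # lexicographic row sort
    shifts = Counter()
    for a, b in zip(order[:-1], order[1:]):
        if np.allclose(proj[a], proj[b], atol=1e-6):
            (ia, ja), (ib, jb) = pos[a], pos[b]
            shifts[(abs(ia - ib), abs(ja - jb))] += 1
    # A shift vector shared by many matched pairs signals a cloned region.
    return {s: c for s, c in shifts.items() if c >= min_count and s != (0, 0)}

img = np.zeros((12, 12))
img[:4, :4] = np.arange(16.0).reshape(4, 4)
img[:4, 6:10] = img[:4, :4]   # paste a copy 6 columns to the right
# The dominant shift vector (0, 6) exposes the duplication; flat background
# blocks also match each other, so a real method additionally suppresses
# low-variance blocks.
```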
In the proposed solution, a block characteristic (BC) based approach to feature extraction is taken. Initially, the DWT of the whole suspicious image is performed and only the low-frequency subband is extracted to approximate the image, after which BC-based features are extracted. The BC-based features extracted by this algorithm carry a stronger induced quantization effect; therefore, they are more robust to attacks than feature vectors whose components are individual pixels or individual DWT coefficients of the suspicious image. Radix sort is used to lexicographically order the feature vectors for efficient similarity comparison. Radix sort operates on integer keys, so its computational complexity is much smaller than that of a lexicographic sort on floating-point keys. The shift vector approach again filters out isolated matching blocks.

The BC-based method is not only faster than the PCA-based method but also more robust to additive noise and JPEG compression. It is capable of detecting, with efficiency as high as 95%, duplicated regions affected by either additive noise where the SNR is as low as 20 dB, or JPEG compression where the quality is as low as 40.

(3) 2D region cloning in which the duplicated image regions have been affected by affine transformation operations such as reflection, rotation, or scaling.

It is not uncommon for an image attacker to reflect a duplicated region, rotate it through an arbitrary angle or scale it, in addition to translating it. A PBIF method can only detect such duplications if both the extracted features and the verification or selection method are robust to those geometric operations. The designed new geometrically robust PBIF method extracts BC-based features that are invariant to affine transformation.
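The role of quantized integer keys can be illustrated with a toy BC feature. The quadrant-mean feature and the quantization step below are assumptions for illustration only, and NumPy's `lexsort` stands in for a radix-style stable sort on integer keys:

```python
import numpy as np

def bc_features(subband, block=8, step=8.0):
    """Toy block-characteristic (BC) features: the mean of each quadrant
    of a sliding block, quantized to an integer bin. Quantization both
    absorbs minor noise/JPEG variations (a small perturbation usually
    stays within one bin) and yields integer sort keys."""
    h, w = subband.shape
    half = block // 2
    feats, pos = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            b = subband[i:i+block, j:j+block]
            feats.append((int(b[:half, :half].mean() // step),
                          int(b[:half, half:].mean() // step),
                          int(b[half:, :half].mean() // step),
                          int(b[half:, half:].mean() // step)))
            pos.append((i, j))
    return np.array(feats), pos

feats, pos = bc_features(np.tile(np.arange(16.0), (16, 1)))
# Integer keys admit a radix sort; here np.lexsort orders the feature
# vectors lexicographically, with the first column as the primary key.
order = np.lexsort(feats.T[::-1])
```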
Radix sort is subsequently used to lexicographically order the feature vectors for efficient similarity comparison. At the verification stage, same affine transformation selection (SATS) is used to filter out isolated matching blocks. Unlike the shift vector approach, SATS is insensitive to geometric attacks. The proposed PBIF method effectively and efficiently detects regions affected by affine transformation.

(4) 2D digital image splicing, a kind of forgery in which two or more images are involved.

A digital image splicing forgery is a specific kind of image tampering. In an image splicing forgery, a part of an image is copied and then pasted at a different location within the same image or in a different image altogether. A spliced image may be produced with or without post-splicing processing, such as a feathering operation. In either case, the artifacts introduced by the splicing process may be almost imperceptible to the human eye.

The proposed PBIF method extracts only the low-frequency subband of the DWT of a suspicious image. Fixed overlapping square lattices are tiled over each chroma channel of the subband, computing a local maximum partial gradient (LMPG) at each pixel location; consequently, an LMPG image is formed. The local complexity of the LMPG image is subsequently computed at each pixel location. The LMPG not only reflects the degree of the DWT coefficient changes but also de-correlates the image by removing most of the image information except abrupt changes in pixel values. At the same time, local complexity measures the frequency of the coefficient changes. Appropriate thresholds on the two transition-region measures isolate traces of image splicing. The proposed PBIF method is both non-complex and effective in detecting 2D image splicing. The proposed algorithm has a better time complexity than existing algorithms which operate in the spatial domain because, initially, the algorithm reduces the dimension of the suspicious image by a factor of a power of 4.
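Since the exact definitions of LMPG and local complexity are specific to the dissertation, the sketch below only illustrates the general idea with assumed stand-ins: a per-pixel maximum of first differences for the gradient measure, and a count of distinct quantized values for the complexity measure.

```python
import numpy as np

def lmpg(channel):
    """Stand-in for a local maximum partial gradient: at each pixel, the
    larger magnitude of the horizontal and vertical first differences.
    A max of linear differences is far cheaper than a Sobel/LoG resultant
    gradient, which needs squares and square roots."""
    c = channel.astype(float)
    gx = np.abs(np.diff(c, axis=1, append=c[:, -1:]))
    gy = np.abs(np.diff(c, axis=0, append=c[-1:, :]))
    return np.maximum(gx, gy)

def local_complexity(img, win=3, levels=8):
    """Stand-in for local complexity: the number of distinct quantized
    values in a win x win neighbourhood, a cheap proxy for local entropy."""
    q = (img // (img.max() / levels + 1e-9)).astype(int)
    h, w = q.shape
    out = np.zeros((h - win + 1, w - win + 1), dtype=int)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            out[i, j] = len(np.unique(q[i:i+win, j:j+win]))
    return out

# A hard splice boundary produces an abrupt transition that both
# measures pick up, while flat regions stay quiet.
img = np.zeros((8, 8))
img[:, 4:] = 100.0            # splice seam between columns 3 and 4
g = lmpg(img)
print(g[0, 3])                # 100.0 -- the seam dominates the gradient
```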
Secondly, the LMPG reduces the computational complexity of the resultant gradient because it is a linear function; similarly, local complexity reduces the complexity of entropy. Hence both the LMPG and local complexity are much less computationally complex than the local resultant gradient (Sobel, LoG) and local entropy, respectively, which are commonly used in existing methods.

The approach to designing all the PBIF methods proposed in this dissertation shifts slightly from the common PBIF design paradigm of absolute detection-rate efficiency to one of detection-rate-versus-complexity trade-off efficiency. In the former paradigm, the main objective is to design a PBIF method whose detection rate improves on those of existing methods; little attention is given to the complexity of the designed method, and in the effort to find enabling features for such a design, high computational costs are usually incurred. In the latter paradigm, the main objective is to design a PBIF method which strikes a trade-off between a high detection rate and low computational complexity. This design paradigm shift is justifiably necessary because more and more miniature devices with limited computational power are incorporating features to capture, read and display digital images. Should the need arise, these miniature devices would require effective and non-complex automated PBIF solutions to assess the authenticity of images.
Keywords/Search Tags: Image forensics, Image authenticity, Passive-blind, Discrete wavelet transform, Affine transformation, Shift vector, Same affine transformation selection, Block characteristic