
The Study On Multi-Focus Textile Fiber Images Fusion Technology

Posted on: 2013-02-02
Degree: Master
Type: Thesis
Country: China
Candidate: K P Chen
Full Text: PDF
GTID: 2218330371956049
Subject: Computer application technology

Abstract/Summary:
An automatic image detection system for fibers and textiles assesses textile quality by identifying the various types of fibers on microscope slices and counting them. Ideally, a clear image is captured when the object lies in the focal plane; in practice, overlap between fibers and the uneven thickness of the fiber slice cause the microscope to exhibit multiple focal planes during focusing, which hampers identification of the longitudinal and cross-sectional shapes of the fibers. To solve this multi-focus problem, a better image fusion method is needed.

Image fusion integrates and processes multi-source images from one or more sensors, exploiting their complementary and redundant information to obtain a more satisfactory image of the same scene. The fused image should be more accurate and better suited to further computer processing.

Pixel-level and region-based multi-resolution image fusion techniques have attracted wide attention at home and abroad. There are two main pixel-level fusion rules: fusion based on single pixels, and fusion based on an activity measure computed over a local window. However, neither the pixel-based nor the window-based rule considers the characteristics of the targets in the image, whereas the aim of image fusion should be to make the objects of interest in the fused image, and their contours, clearer. These rules also suffer from poor real-time performance and are sensitive to registration error, noise, and blurring. Region-based image fusion splits the source images into segments to obtain one or more region segmentation maps, and then fuses the source images region by region under the guidance of the segmentation. The latest region-based methods partition the image into a background region, a target region, and the edge region between the two; different fusion rules are applied to the three regions to produce the final fused image.

To extract the clearest area of each fiber from a stack of fiber images of the same scene, fiber fusion technology is required: under identical imaging conditions, images of the same size captured at different focal planes are fused, so that the final fused image contains complete information about the fiber sections and the fineness and length of each fiber can be detected accurately.

Traditional multi-focus fusion algorithms compute their statistics only after all image data have been collected. This inevitably degrades the real-time performance of fiber length and fineness detection and extends the detection time. In contrast to the traditional approach of starting processing and analysis only after image acquisition has finished, this paper presents an algorithm that launches the multi-focus fusion while the image acquisition system is running; in other words, image data are analyzed and processed as they are collected, which greatly improves processing speed. A real-time, local-clarity-based image fusion method is proposed to improve the real-time performance of multi-focus fusion for textile fibers.
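For comparison with the proposed real-time method, the conventional window-based pixel-level rule described above can be sketched as follows. This is a minimal illustration in Python, assuming two pre-registered grayscale source images; the Laplacian-energy activity measure and the 7x7 window are illustrative choices, not parameters taken from the thesis.

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def local_activity(img, window=7):
        # Activity measure over a local window: Laplacian energy averaged
        # in a window x window neighborhood (an illustrative choice).
        lap = laplace(img.astype(np.float64))
        return uniform_filter(lap ** 2, size=window)

    def window_based_fusion(src_a, src_b, window=7):
        # Pixel-level multi-focus fusion: at each pixel, keep the value
        # from the source image whose local activity (sharpness) is higher.
        act_a = local_activity(src_a, window)
        act_b = local_activity(src_b, window)
        return np.where(act_a >= act_b, src_a, src_b)

Because this rule decides pixel by pixel from local statistics only, it illustrates why such methods can be sensitive to noise and misregistration and why they ignore the shape of the fiber targets themselves.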
In the proposed method, the clarity of a pixel is measured by the modulus value of its intensity. First, regions of interest where textile fibers exist are located on the first image. Second, the modulus value of pixel intensity is computed at the same location in every image of the same scene; the maximum modulus value for each pixel is determined by comparing all modulus values at that location, and this maximum is linked with the index of the image (focal layer) that contributes it. Finally, a threshold value is calculated for noise removal, and the image index associated with the maximum modulus is inspected and corrected using the layer numbers in its neighborhood. Experimental results demonstrate that the proposed method is satisfactory for real-time application.
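The steps above can be outlined in the following sketch (Python with NumPy and SciPy). It is a simplified reconstruction under stated assumptions: the Sobel gradient magnitude stands in for the modulus value of pixel intensity, the mean modulus serves as the noise-removal threshold, and a median filter over the layer-index map approximates the neighborhood correction of layer numbers; the per-layer update mirrors the idea of fusing while images are still being acquired. None of these specific choices are claimed to be the thesis's exact implementation.

    import numpy as np
    from scipy.ndimage import sobel, median_filter

    def modulus(img):
        # Gradient magnitude as the per-pixel clarity ("modulus") measure.
        img = img.astype(np.float64)
        return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

    class RealTimeFusion:
        # Fuse focal-plane layers as they arrive: for every pixel, keep the
        # value from the layer with the largest modulus seen so far.
        def __init__(self):
            self.best_modulus = None   # maximum modulus per pixel so far
            self.layer_index = None    # index of the layer that provided it
            self.fused = None          # running fused image

        def add_layer(self, img, index):
            m = modulus(img)
            if self.fused is None:
                self.best_modulus = m
                self.layer_index = np.full(img.shape, index)
                self.fused = img.copy()
                return
            better = m > self.best_modulus
            self.best_modulus = np.where(better, m, self.best_modulus)
            self.layer_index = np.where(better, index, self.layer_index)
            self.fused = np.where(better, img, self.fused)

        def finalize(self, window=5):
            # Threshold for noise removal (illustrative: the mean modulus).
            threshold = self.best_modulus.mean()
            # Correct unreliable layer indices using the layer numbers in
            # their neighborhood (illustrative: a median filter on the map).
            smoothed = median_filter(self.layer_index, size=window)
            self.layer_index = np.where(self.best_modulus >= threshold,
                                        self.layer_index, smoothed)
            return self.fused, self.layer_index

In use, one RealTimeFusion object would receive each focal-plane image through add_layer as the microscope captures it, and finalize would be called once acquisition ends to obtain the fused image and the corrected layer-index map.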
Keywords/Search Tags: modulus value, real-time application, region of interest, multifocal image fusion