
Clothing Image Retrieval Based On Deep Learning

Posted on: 2019-04-08  Degree: Master  Type: Thesis
Country: China  Candidate: X Wei  Full Text: PDF
GTID: 2428330572951993  Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
With the development of the Internet and the rise of e-commerce, the number of clothing images on the web has grown dramatically. Given the diversity of garment styles, it is difficult to describe a clothing image with a few simple keywords, and different users describe the same clothing image in different ways. Traditional text-based clothing image retrieval therefore cannot satisfy users' needs, and helping users find the clothes they want quickly and accurately has become a major challenge. As a result, content-based clothing image retrieval has become a hot research topic. In recent years, deep learning has achieved outstanding performance in image processing. This thesis studies clothing image retrieval based on deep learning; the main work is as follows:

A clothing image retrieval method based on human pose awareness is proposed. Because of clothing wrinkles and deformation, generic object detection is ill-suited to locating the local regions of a clothing image. To address this, human pose estimation is used to obtain joint locations, which help locate local regions such as the collar, the sleeves, and the buttons. The clothing image is fed to a convolutional neural network to extract a global feature map, the positions of the local regions are mapped onto this feature map, and the local features of the corresponding regions are extracted. The concatenation of the local and global features serves as the representation of the clothing image for retrieval (see the first sketch below). Experiments show that the proposed method effectively extracts local features of clothing images and achieves better retrieval accuracy.

A clothing image retrieval method based on visual-semantic joint embedding is proposed. This method targets attribute-feedback clothing image retrieval, where one attribute of the query image must be changed while the others stay fixed. A joint visual-semantic embedding model is trained on image-attribute pairs: a product image is projected into the joint embedding space through an image embedding matrix, while its associated attributes are mapped into the same space through an attribute embedding matrix. Attribute-feedback retrieval is modeled by adding the features of the query image and the query attribute in the joint embedding space, so that the retrieved images have higher similarity to the added attribute (see the second sketch below). Experiments show that the proposed visual-semantic embedding method achieves higher accuracy on attribute-feedback image retrieval.

An image retrieval method based on spatial awareness of clothing attributes is proposed. To extract the local feature associated with a clothing attribute, the relationship between image locations and clothing attributes is modeled in the visual-semantic joint embedding space. An embedded attribute map is generated from the attribute's spatial representation, indicating how likely the attribute is to appear at each spatial location, and the salient region of an attribute is localized by thresholding this map. The global feature of the clothing image is extracted by the convolutional neural network, the localized region is mapped onto the global feature map, and the corresponding local feature is extracted. Finally, the local and global features are concatenated and used for clothing image retrieval (see the third sketch below). Experiments show that the proposed method achieves higher accuracy than several state-of-the-art approaches.
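The first sketch below illustrates, under stated assumptions, the pose-guided local feature pooling described above: a CNN backbone produces a global feature map, joint coordinates from an external pose estimator are mapped onto that map, and local features around each joint are pooled and concatenated with the global feature. The backbone choice, stride, and window size are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch (not the thesis code) of pose-guided local feature pooling.
# Assumes an external pose estimator supplies joint coordinates in pixel space.
import torch
import torchvision

backbone = torchvision.models.resnet50(weights=None)   # pretrained weights would be used in practice
# Keep everything up to the last convolutional block -> spatial feature map.
features = torch.nn.Sequential(*list(backbone.children())[:-2])

def pose_aware_feature(image, joints, stride=32, crop=3):
    """image: (1, 3, H, W) tensor; joints: list of (x, y) pixel coordinates
    for landmarks such as the collar or sleeve ends (assumed given)."""
    fmap = features(image)                       # (1, C, H/stride, W/stride)
    global_feat = fmap.mean(dim=(2, 3))          # global average pooling, (1, C)
    _, _, fh, fw = fmap.shape
    local_feats = []
    for (x, y) in joints:
        # Map image coordinates onto feature-map coordinates.
        fx = min(int(x // stride), fw - 1)
        fy = min(int(y // stride), fh - 1)
        x0, x1 = max(fx - crop // 2, 0), min(fx + crop // 2 + 1, fw)
        y0, y1 = max(fy - crop // 2, 0), min(fy + crop // 2 + 1, fh)
        local = fmap[:, :, y0:y1, x0:x1].mean(dim=(2, 3))   # pooled local feature, (1, C)
        local_feats.append(local)
    # Final representation: global feature concatenated with all local features.
    return torch.cat([global_feat] + local_feats, dim=1)
```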
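The second sketch shows one plausible reading of attribute-feedback retrieval in the joint embedding space: the query image and the query attribute are each projected by their embedding matrices, added, and the gallery is ranked by similarity. The matrix names W_img and W_attr and the cosine ranking are illustrative assumptions, not the thesis notation.

```python
# Minimal sketch of attribute-feedback retrieval in a joint visual-semantic
# embedding space; W_img and W_attr stand for the learned embedding matrices.
import numpy as np

def embed_image(img_feat, W_img):
    v = img_feat @ W_img                      # project CNN feature into the joint space
    return v / np.linalg.norm(v)

def embed_attribute(attr_onehot, W_attr):
    a = attr_onehot @ W_attr                  # project the attribute into the same space
    return a / np.linalg.norm(a)

def attribute_feedback_query(query_img_feat, query_attr, W_img, W_attr):
    # Add the query image feature and the query attribute feature, as described
    # in the abstract, so retrieved items lean toward the added attribute.
    q = embed_image(query_img_feat, W_img) + embed_attribute(query_attr, W_attr)
    return q / np.linalg.norm(q)

def retrieve(q, gallery_embeddings, topk=10):
    # Cosine-similarity ranking over a pre-embedded, L2-normalized gallery.
    sims = gallery_embeddings @ q
    return np.argsort(-sims)[:topk]
```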
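The third sketch illustrates attribute spatial localization under stated assumptions: each spatial position of the (already embedded) feature map is correlated with an attribute embedding to form an activation map, which is thresholded to mask the salient region before pooling. The normalization and threshold value are assumptions for illustration.

```python
# Minimal sketch of spatially-aware attribute localization and local pooling.
import torch

def attribute_activation_map(fmap, attr_emb):
    """fmap: (C, H, W) per-location features in the joint space;
    attr_emb: (C,) attribute embedding in the same space."""
    amap = torch.einsum("chw,c->hw", fmap, attr_emb)        # dot product at each location
    amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)
    return amap                                             # values in [0, 1]

def attribute_local_feature(fmap, attr_emb, threshold=0.5):
    amap = attribute_activation_map(fmap, attr_emb)
    mask = (amap >= threshold).float()                       # salient region for the attribute
    # Masked average pooling over the salient region only.
    local = (fmap * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)
    global_feat = fmap.mean(dim=(1, 2))
    # Representation: global feature concatenated with the attribute-local feature.
    return torch.cat([global_feat, local], dim=0)
```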
Keywords/Search Tags:Deep learning, Clothing image retrieval, Pose-aware, Attribute-feedback retrieval, Spatially-aware