In recent years, plaid fabrics have become increasingly popular in areas such as clothing, fashion, and home furnishings, and have become a frequently chosen design element. As a result of this trend, more and more plaid patterns are being designed, and companies are accumulating large amounts of plaid image data. Image retrieval technology can quickly and accurately retrieve the required products from among many plaid fabric images, so that existing process parameters can be called up directly for production. This improves production efficiency and enables digital, intelligent management, which has practical application value and important economic significance. Existing image retrieval methods cannot highlight the distinctive detailed features of plaid fabric images, such as narrow stripes, discretely distributed checks, and checks at different scales, so they suffer from low accuracy and poor results when applied directly to plaid fabric image retrieval. This paper uses a deep learning approach to implement plaid fabric image retrieval from the perspective of enhancing the feature representation of plaid fabric images. In summary, the main research work in this paper is as follows:

(1) To address the problem that existing image retrieval algorithms pay insufficient attention to the narrow stripes and discrete long-range information in plaid fabric images, this paper proposes DAMNet (Dual Attention Mechanism Network), a feature extraction network based on a dual attention mechanism, to learn representations of plaid fabric images. First, a strip (bar) attention mechanism is designed around the narrow-stripe structure of plaid fabric images: it connects and encodes the narrowly distributed image information in order to focus on, capture, and enhance the corresponding feature regions, enabling the network to effectively establish pixel-level long-range dependencies. Second, a channel attention mechanism is designed to weigh the channel importance of the features produced by the strip attention mechanism; it further optimises the features, adaptively recalibrates the channel feature responses, and selects salient, effective features to strengthen the feature representation capability. Comparative experiments demonstrate that the DAMNet model can effectively focus on the narrow-stripe features of plaid fabric images and achieves better retrieval results. A sketch of the two attention components follows.
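To make the mechanism concrete, the following is a minimal PyTorch sketch of how such a dual attention module could be structured. It is an illustrative assumption, not the paper's exact design: the strip attention follows the general strip-pooling idea (pooling along one spatial axis to capture long, narrow structures), the channel attention is an SE-style squeeze-and-excitation, and all module names, kernel sizes, and the reduction ratio are hypothetical.

```python
# Illustrative sketch only; module names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripAttention(nn.Module):
    """Pools along H and W separately to model long, narrow stripe structures."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Horizontal strip: average over the width, keep the height axis.
        sh = F.adaptive_avg_pool2d(x, (h, 1))          # (n, c, h, 1)
        sh = self.conv_h(sh).expand(-1, -1, h, w)
        # Vertical strip: average over the height, keep the width axis.
        sw = F.adaptive_avg_pool2d(x, (1, w))          # (n, c, 1, w)
        sw = self.conv_w(sw).expand(-1, -1, h, w)
        # Fuse both strip encodings into a spatial attention gate.
        gate = torch.sigmoid(self.fuse(F.relu(sh + sw)))
        return x * gate


class ChannelAttention(nn.Module):
    """SE-style recalibration of channel feature responses."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))          # squeeze -> (n, c)
        return x * weights.view(n, c, 1, 1)            # excite


class DualAttention(nn.Module):
    """Strip attention followed by channel attention, as described above."""

    def __init__(self, channels: int):
        super().__init__()
        self.strip = StripAttention(channels)
        self.channel = ChannelAttention(channels)

    def forward(self, x):
        return self.channel(self.strip(x))


# Example: refine a feature map from some backbone stage.
feat = torch.randn(2, 64, 56, 56)
print(DualAttention(64)(feat).shape)  # torch.Size([2, 64, 56, 56])
```

Applying the strip gate before the channel gate mirrors the order described above: long-range spatial structure is enhanced first, and the channels that carry it are then re-weighted.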
(2) A network model otherwise extracts plaid fabric image features at a single scale: the deeper features contain more semantic information, but the shape, texture, and other detailed features of the middle and lower layers are not fully used, so multilevel feature expression is lacking. To address this, this paper proposes MCMFFNet (Multi-scale Convolution and Multilayer Feature Fusion Network). The designed multi-scale convolution combines standard convolutions with dilated convolutions at different scales; the dilated convolutions enlarge the receptive field of the convolution kernel and encode the image information as block-like features that match the discrete distribution of plaid fabric images (see the first sketch below). The multi-layer features are then fused to reduce the shortcomings of single-level feature representation, making the advantages of each level complementary and the overall features more robust, while enriching the representation of plaid fabric images at different scales. In addition, to further improve the aggregation of features within the same class, the loss function of the network is improved by introducing center loss on top of the original Softmax loss, forming a mixed loss for training; this enlarges the inter-class distance while reducing the intra-class distance, making the features more compact (see the second sketch below). Comparative experiments demonstrate that the MCMFFNet model extracts more comprehensive features of plaid fabric images, achieves the best retrieval metrics, and outperforms DAMNet.
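As a concrete illustration of the multi-scale convolution, here is a minimal PyTorch sketch assuming an Inception-style layout: parallel standard and dilated 3x3 branches whose outputs are concatenated. The branch widths and dilation rates (1, 2, 4) are assumptions for illustration, not the paper's exact configuration; a simple concatenation-based multilayer fusion is shown at the end in the same spirit.

```python
# Illustrative sketch only; branch layout and dilation rates are assumptions.
import torch
import torch.nn as nn


class MultiScaleConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4
        # Standard 1x1 and 3x3 branches capture local detail.
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b2 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        # Dilated branches enlarge the receptive field without extra
        # parameters, covering block-like, discretely distributed checks.
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=2, dilation=2)
        self.b4 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=4, dilation=4)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        out = torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
        return torch.relu(self.bn(out))


x = torch.randn(2, 64, 56, 56)
print(MultiScaleConv(64, 128)(x).shape)  # torch.Size([2, 128, 56, 56])

# Multilayer fusion can then be as simple as pooling feature maps from
# several stages and concatenating the resulting vectors (shapes assumed).
f_low, f_mid, f_high = (torch.randn(2, c, s, s) for c, s in [(64, 56), (128, 28), (256, 14)])
descriptor = torch.cat([f.mean(dim=(2, 3)) for f in (f_low, f_mid, f_high)], dim=1)
print(descriptor.shape)  # torch.Size([2, 448])
```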
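The mixed loss can likewise be sketched in a few lines. The center-loss term below is the standard learnable-centers variant trained by gradient descent; the balance weight, feature dimension, and class count are assumed values rather than the paper's settings.

```python
# Illustrative sketch only; lam, feat_dim, and num_classes are assumptions.
import torch
import torch.nn as nn


class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        # Squared distance between each feature and its own class center.
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()


num_classes, feat_dim = 10, 128
classifier = nn.Linear(feat_dim, num_classes)
ce = nn.CrossEntropyLoss()
center = CenterLoss(num_classes, feat_dim)
lam = 0.01  # assumed balance weight between the two terms

feats = torch.randn(8, feat_dim, requires_grad=True)  # backbone output
labels = torch.randint(0, num_classes, (8,))

# Mixed loss: the Softmax (cross-entropy) term separates classes, while the
# center-loss term pulls same-class features together.
loss = ce(classifier(feats), labels) + lam * center(feats, labels)
loss.backward()
print(float(loss))
```

The combination realises the trade-off described above: cross-entropy alone enlarges inter-class separation but does not constrain intra-class spread, which the center-loss term supplies.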