
Research on the LRF-ELM Algorithm and Its Application to Object Material Classification

Posted on: 2019-03-31    Degree: Master    Type: Thesis
Country: China    Candidate: J Fang    Full Text: PDF
GTID: 2348330569479538    Subject: Control Science and Engineering
Abstract/Summary:
In recent years, the development of robotics has become one of the criteria for measuring a country's overall strength and creativity, and countries around the world have written robotics into their national technology development plans for the 21st century. A robot's perception of its surroundings is not limited to visual signals; tactile signals serve as an important supplement to vision. How to effectively fuse visual and tactile signals to improve a robot's perception and classification of its environment is therefore a major problem. There is still a gap between the level of multi-modal fusion research in China and that abroad, so multi-modal fusion remains a valuable research topic.

As an important application field of artificial intelligence, machine learning mainly uses the Convolutional Neural Network (CNN) to extract high-level representations of data. A CNN relies on gradient descent to continuously adjust its parameters during training, so it inherits the defects of the BP algorithm, such as long training time, overfitting, and convergence to local optima. Motivated by this, Huang et al. proposed the Extreme Learning Machine (ELM), which not only avoids the defects of BP neural networks but also improves classification performance. Building on ELM, Huang et al. further proposed the Local Receptive Fields Based Extreme Learning Machine (LRF-ELM). This model introduces the Local Receptive Fields (LRFs) of the CNN to implement local connections between the input layer and the hidden layer, which greatly reduces the number of network parameters. LRF-ELM has many advantages, such as fast training, low computational complexity, and good generalization, and it can be widely applied in natural language processing, computer vision, and other fields.

While inheriting the advantages of ELM, the LRF-ELM method also has shortcomings: it is only applicable to grayscale images and cannot extract features from images with complex texture variations. The LRF-ELM model therefore needs further improvement. In this paper, a series of improvements are made to the LRF-ELM algorithm, and a multi-modal fusion method is finally proposed. The fusion method is applied to object classification by robots, so that the material type of an object can be quickly identified by touching and observing its surface. The main research content and innovations of this paper are as follows.

(1) The traditional LRF-ELM method cannot fully utilize the color information of an image. This paper therefore proposes a three-channel algorithm based on LRF-ELM (LRF-ELM-3C). At the input layer, the R, G, and B vectors are first separated from the image and fed into the corresponding color channels for feature extraction, which effectively avoids external interference. The per-channel feature vectors are then fused to classify the image. LRF-ELM-3C effectively improves classification performance; a minimal sketch of this three-channel pipeline is given below.
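To make step (1) concrete, the following is a minimal NumPy sketch of the three-channel LRF-ELM idea: random orthogonalized local-receptive-field filters per color channel, square-root pooling, fusion of the per-channel features, and a closed-form ELM output layer. The filter size, pooling window, regularization constant C, and helper names such as extract_channel_features are illustrative assumptions, not the implementation used in the thesis.

```python
# Minimal sketch of the LRF-ELM-3C idea; sizes and names are assumptions.
import numpy as np

def random_lrf_filters(num_filters, size, rng):
    """Random, orthogonalized local-receptive-field filters (LRF-ELM style)."""
    w = rng.standard_normal((size * size, num_filters))
    q, _ = np.linalg.qr(w)               # orthogonalize the random filter bank
    return q[:, :num_filters].reshape(size, size, num_filters)

def extract_channel_features(channel, filters, pool=3):
    """Convolve one color channel with the random LRFs and square-root pool."""
    size, k = filters.shape[0], filters.shape[2]
    h, w = channel.shape
    fh, fw = h - size + 1, w - size + 1
    maps = np.zeros((fh, fw, k))
    for i in range(fh):
        for j in range(fw):
            patch = channel[i:i + size, j:j + size]
            maps[i, j] = np.tensordot(patch, filters, axes=([0, 1], [0, 1]))
    # Square-root pooling over non-overlapping pool x pool blocks.
    pooled = [np.sqrt((maps[i:i + pool, j:j + pool] ** 2).sum(axis=(0, 1)))
              for i in range(0, fh - pool + 1, pool)
              for j in range(0, fw - pool + 1, pool)]
    return np.concatenate(pooled)

def lrf_elm_3c_features(rgb_image, filter_banks):
    """Separate R, G, B, extract features per channel, then fuse (concatenate)."""
    feats = [extract_channel_features(rgb_image[..., c], filter_banks[c])
             for c in range(3)]
    return np.concatenate(feats)

def solve_output_weights(H, T, C=1.0):
    """Closed-form ELM output weights: beta = (H^T H + I/C)^-1 H^T T."""
    return np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ T)

# Example usage (hypothetical 32x32 RGB patch, three banks of eight 4x4 LRFs):
#   rng = np.random.default_rng(0)
#   banks = [random_lrf_filters(8, 4, rng) for _ in range(3)]
#   feat = lrf_elm_3c_features(np.zeros((32, 32, 3)), banks)
```

As in standard ELM, only the output weights are learned, so training reduces to one regularized least-squares solve over the fused feature matrix H.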
(2) Research on the LRF-ELM-3C algorithm shows that, although it makes full use of the color information of an image, the LRF scale of the network is fixed and cannot adapt to complex texture variations. This paper therefore proposes an Extreme Learning Machine with Multi-Scale Local Receptive Fields (MSLRF-ELM). MSLRF-ELM also separates the R, G, and B vectors of the image at the input layer; the convolution operation in the hidden layer then uses LRFs of multiple scales, and the pooling operation likewise uses multiple scales. The extracted feature vectors are finally classified.

(3) Based on the MSLRF-ELM algorithm, a multi-modal fusion algorithm, MM-MSLRF-ELM, is proposed. We use the visual images, haptic acceleration signals, and tactile sound signals of the TUM haptic dataset to conduct multi-modal experiments. First, the features of each modality are extracted by MSLRF-ELM; the fused feature vectors are then passed through MSLRF-ELM again in a hybrid layer to extract highly representative characteristics. The MM-MSLRF-ELM algorithm can be effectively applied to the classification of object materials by robots, as illustrated by the sketch that follows.
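To make the structure of steps (2) and (3) concrete, here is a minimal NumPy sketch of the multi-scale, multi-modal idea. The 1-D random-convolution stand-in, the scale set (3, 5, 7), the pooling window, the 256-unit hybrid layer, and names such as mm_mslrf_elm_features are illustrative assumptions, not the thesis's actual design.

```python
# Minimal sketch of the MM-MSLRF-ELM fusion structure; shapes and scales are assumptions.
import numpy as np

def multiscale_random_features(signal, scales=(3, 5, 7), filters_per_scale=8):
    """1-D stand-in for multi-scale LRF feature extraction on one modality:
    convolve the flattened signal with random filters of several widths,
    square-root pool each feature map, and concatenate across scales."""
    x = np.asarray(signal, dtype=float).ravel()
    feats = []
    for s in scales:
        # Seeded per scale so every sample is mapped with the same random filters.
        w = np.random.default_rng(s).standard_normal((filters_per_scale, s))
        maps = np.stack([np.convolve(x, wk, mode="valid") for wk in w])
        pool = max(1, maps.shape[1] // 10)            # coarse pooling window
        pooled = [np.sqrt((maps[:, i:i + pool] ** 2).sum(axis=1))
                  for i in range(0, maps.shape[1] - pool + 1, pool)]
        feats.append(np.concatenate(pooled))
    return np.concatenate(feats)

def mm_mslrf_elm_features(image, acceleration, sound):
    """Extract features from each modality separately, fuse them, and pass the
    fused vector through a second (hybrid-layer) random mapping."""
    fused = np.concatenate([multiscale_random_features(image),
                            multiscale_random_features(acceleration),
                            multiscale_random_features(sound)])
    # Hybrid-layer weights are also fixed across samples (seeded generator).
    hybrid_w = np.random.default_rng(99).standard_normal((256, fused.size))
    return np.tanh(hybrid_w @ fused)

# Stacking these vectors for all training samples gives the hidden-layer matrix H;
# the ELM output weights are then solved in closed form as in the previous sketch,
# and a new sample's material class is taken as the largest network output.
```

The design point this sketch reflects is that each modality keeps its own multi-scale feature extractor, and fusion happens before a shared hybrid layer, so no modality's features are discarded before the final classification.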
Keywords/Search Tags:Extreme learning machine, local receptive fields, multi-scale, multi-modal fusion, object material, classification