
Research On Cooperative HTM Based On Recurrent Learning Unit

Posted on: 2021-04-12
Degree: Master
Type: Thesis
Country: China
Candidate: T Q Liu
Full Text: PDF
GTID: 2428330629487246
Subject: Computer technology
Abstract/Summary:
Hierarchical Temporal Memory (HTM) is a machine learning algorithm proposed to model the layered structure of the neocortex and the organization of biological neurons. HTM represents data as Sparse Distributed Representations (SDRs), requires relatively few model parameters, and has gradually become a research focus. However, existing HTM achieves low accuracy when learning and predicting long sequential data, and it cannot exploit the computing power of multi-core CPUs and other embedded computing units. These factors hinder the performance of HTM and seriously limit its training efficiency. To address these problems, this thesis focuses on designing and developing a cooperative HTM based on a recurrent learning unit.

Firstly, based on an analysis of existing research on HTM, we summarize the main factors that restrict further improvement of model performance. In view of these problems, such as low accuracy of learning and prediction on long sequences or complex events and the lack of distributed cooperation during training, the structure of a cooperative HTM based on a recurrent learning unit is proposed, which provides the foundation for a novel, efficient HTM.

Secondly, to address the low accuracy of HTM in learning and predicting sequence data with long-range temporal correlation, a temporal pooler algorithm based on a recurrent learning unit (TPARLU) is proposed. The structure of the new recurrent-learning HTM is given first, and then the temporal pooler algorithm and its training algorithm are designed. TPARLU improves the existing HTM neuron structure by replacing the vanilla HTM neuron with a recurrent learning unit, so that each neuron can learn recurrently and thus better capture long-term temporal dependencies. The recurrent learning unit also exploits learning signals and recurrent feedback across multiple time steps in the sequence, using the HTM input together with this feedback to predict whether the unit is activated. A prototype, RUHTM, is implemented and its training efficiency is tested on different datasets. The results show that, compared with HTM, RUHTM improves training accuracy by 15.9%~31.9% on the NYC taxi dataset.

Thirdly, to address the lack of a concurrency mechanism and the separation of storage and computation in existing HTM training algorithms, a multilayer distributed cooperative HTM training algorithm (DCHTM) is proposed. First, the structure of the algorithm is designed: NVM storage devices train the recurrent learning units, while the simpler computing tasks of the HTM spatial pooler and temporal pooler are handled by the CPU or GPU, leveraging the respective computing advantages of NVM storage devices and the CPU. Then, a multilayer cooperative training strategy is proposed: the recurrent learning units are distributed across multiple instances of the NVM storage device simulator PMEM, and the processing capability embedded in the NVM devices cooperates with the host CPU. A training strategy for the recurrent learning unit based on differential parameter quantization is also proposed, which quantizes the different types of parameters required for training in a differentiated way, preserving the accuracy of HTM training. A DCHTM prototype is implemented and evaluated on two datasets. The experimental results show that, compared with HTM, DCHTM improves training efficiency by 8.1%~21.9% on the NYC taxi dataset.

Finally, based on Intel's open-source NVM storage device simulator PMEM and the open-source NuPIC HTM, a prototype of the cooperative HTM based on the recurrent learning unit is implemented and evaluated on the NAB dataset. The experimental results show that, compared with HTM, the prototype improves training efficiency by 14.9%~16.9% and accuracy by 14.7%~27.7%.
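The Sparse Distributed Representation mentioned above, HTM's core data encoding, can be illustrated with a minimal sketch. The encoder parameters below (size 400, 21 active bits) are illustrative defaults, not settings taken from the thesis:

```python
import numpy as np

def scalar_to_sdr(value, min_val=0.0, max_val=100.0, size=400, active_bits=21):
    """Encode a scalar as a sparse binary vector: a contiguous run of
    `active_bits` ones whose position depends on the value (a simple
    scalar encoder in the spirit of HTM's SDR input encoding)."""
    value = min(max(value, min_val), max_val)
    span = size - active_bits
    start = int(round((value - min_val) / (max_val - min_val) * span))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[start:start + active_bits] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.sum(a & b))

sdr_a = scalar_to_sdr(40.0)
sdr_b = scalar_to_sdr(42.0)
sdr_c = scalar_to_sdr(90.0)
# Nearby values share many active bits; distant values share none,
# which is what makes SDR comparisons noise-tolerant.
```

The key property is that each vector stays very sparse (about 5% active here) while semantically similar inputs produce overlapping bit patterns.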
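The abstract describes a recurrent learning unit that combines the current HTM input with recurrent feedback from earlier time steps to decide whether it activates. A toy sketch of that gating idea follows; the threshold, decay factor, and update rule are invented for illustration and are not the thesis's actual design:

```python
import numpy as np

class RecurrentUnitSketch:
    """Toy stand-in for a recurrent learning unit: it activates when a
    combination of the current input drive and decayed feedback from
    previous time steps crosses a threshold."""

    def __init__(self, n_inputs, threshold=0.5, decay=0.6, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, size=n_inputs)  # synapse strengths
        self.threshold = threshold
        self.decay = decay
        self.feedback = 0.0  # recurrent state carried across time steps

    def step(self, x):
        """x: binary input vector (e.g. one column's input bits).
        Returns True if the unit activates at this time step."""
        drive = float(self.w @ x) / max(1, int(x.sum()))  # normalized overlap
        signal = drive + self.decay * self.feedback
        active = signal >= self.threshold
        # Feedback carries this step's evidence into future steps.
        self.feedback = signal if active else self.decay * self.feedback
        return active
```

Because the feedback term decays over several steps rather than resetting each step, the unit responds to temporal context and not only to the instantaneous input, which is the intuition behind capturing long-term dependencies.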
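The differential parameter quantization strategy, quantizing different classes of parameters at different precisions, might be sketched as follows. The bit widths and the split into "gate" versus "feedforward" parameter groups are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def quantize(weights, bits):
    """Uniform quantization of a float array to `bits` bits, then
    dequantization back to floats (simulating low-precision storage)."""
    lo, hi = float(weights.min()), float(weights.max())
    if hi == lo:
        return weights.copy()
    levels = (1 << bits) - 1
    q = np.round((weights - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

# Differential quantization: a sensitive parameter group keeps more
# bits than a less sensitive one (the grouping here is hypothetical).
rng = np.random.default_rng(0)
params = {"gate": rng.normal(size=100), "feedforward": rng.normal(size=100)}
bit_widths = {"gate": 8, "feedforward": 4}  # illustrative choice
quantized = {k: quantize(v, bit_widths[k]) for k, v in params.items()}

gate_err = float(np.abs(params["gate"] - quantized["gate"]).mean())
ff_err = float(np.abs(params["feedforward"] - quantized["feedforward"]).mean())
# Higher bit width -> smaller quantization error on the sensitive group.
```

Spending precision where it matters most is how such a scheme can shrink parameter storage on the NVM devices while preserving training accuracy.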
Keywords/Search Tags: HTM, Temporal pooler algorithm, Recurrent learning unit, Multilayer distributed collaboration