Physiological signals reflect the physiological and psychological state of human beings from many aspects, which is of great significance for judging emotions and diagnosing diseases. Music is a non-verbal auditory art form, and many biological signals in human activity, such as breathing and the heartbeat, contain music-like rhythms. Converting physiological signals into music therefore has the potential to reveal hidden physiological-state information, adding an extra dimension to the analysis. This study uses deep learning and sonification methods to analyze physiological signals and to generate music that reflects physiological state.

The data used in this paper come from a lie-detection experiment. The dataset contains three physiological signals, namely ECG, skin, and expression signals, and each signal group corresponds to one of three states: telling the truth, lying, and resting. Two methods are used to analyze this multimodal physiological-signal dataset.

The first method is direct classification with a deep neural network: the preprocessed physiological signals are fed into a convolutional neural network (CNN) for classification. An LSTM, which is better suited to time-series signals, is then added, and the combined CNN-LSTM model achieves better results. Several experiments compare various methods for preventing overfitting as well as the influence of batch size on the results; with suitable methods and parameters selected, the final classification accuracy reaches 86.1%. Compared with traditional machine-learning methods and with an LSTM network using hand-crafted features, the proposed model has a clear advantage in classification performance, which shows that it learns the differences between the physiological states in this dataset better.
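To make the CNN-LSTM setup concrete, the sketch below builds such a classifier in Keras. The window length, channel count, layer sizes, and dropout rate are illustrative assumptions, not the exact architecture used in the paper.

```python
# Minimal sketch of a CNN + LSTM classifier for windowed physiological
# signals. WINDOW_LEN, NUM_CHANNELS, and all layer sizes are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 3    # telling the truth, lying, resting
WINDOW_LEN = 1000  # assumed samples per window
NUM_CHANNELS = 3   # assumed channels: ECG, skin, expression

model = keras.Sequential([
    layers.Input(shape=(WINDOW_LEN, NUM_CHANNELS)),
    # Convolutional layers extract local waveform features.
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    # The LSTM models temporal dependencies across the extracted features.
    layers.LSTM(64),
    # Dropout stands in for the overfitting countermeasures the abstract
    # reports comparing.
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random placeholder data; batch_size is one of the
# hyperparameters the abstract reports tuning.
x = np.random.randn(64, WINDOW_LEN, NUM_CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=64)
model.fit(x, y, batch_size=16, epochs=1)
```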
The second method uses sonification to generate music containing physiological information: deep learning is applied to music generation from physiological signals. First, a parameter-mapping method generates music that carries the scale-free properties of the physiological signals (a minimal sketch of such a mapping is given at the end of this section); an improved entanglement model is then applied to refine the music, solving the problem that the generation process was difficult to control. Through these improvements, music containing physiological information is obtained. Four experiments were then conducted to verify that physiological signals of different states can be distinguished through the physiological music: (1) a comparison of the staff notation shows obvious differences between the music generated in different states; (2) in a listening experiment, subjects distinguished the music corresponding to different states with an accuracy of 80.4%; (3) classifying the generated music with the data-mining software WEKA reaches an accuracy of 88.89%; (4) comparing the model before and after the improvement shows that the improved model learns the chord information in the music dataset better. Finally, the results are compared with physiological music generated in previous work, and the music generated in this paper performs better in every experiment.

The physiological music generated by this method can be used to distinguish different physiological states by direct listening; the musical effect is good, the audibility is strong, and the music is scale-free. The method can be applied not only to the dataset in this paper but also, by analogy, to other physiological-signal datasets. Physiological music can assist the study of physiological signals and can be used in insomnia treatment, pain relief, lie detection, and similar applications, giving it great practical value.
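To illustrate the parameter-mapping step referenced above, the following sketch maps a 1-D physiological signal to MIDI-style notes. The pentatonic pitch set, the amplitude-to-pitch rule, and the variability-to-duration rule are all illustrative assumptions, not the mapping actually used in the paper.

```python
# Minimal sketch of parameter mapping from a physiological signal to notes.
# Pitch set and mapping rules are hypothetical illustrations.
import numpy as np

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72]  # MIDI pitches, C4..C5

def signal_to_notes(signal, window=50):
    """Map a 1-D signal to a list of (midi_pitch, duration_in_beats)."""
    notes = []
    for i in range(0, len(signal) - window, window):
        seg = signal[i:i + window]
        # Pitch: normalized mean amplitude indexes into the scale.
        norm = (seg.mean() - signal.min()) / (np.ptp(signal) + 1e-9)
        pitch = C_MAJOR_PENTATONIC[int(norm * (len(C_MAJOR_PENTATONIC) - 1))]
        # Duration: less variable segments get longer notes
        # (an illustrative rule, not the paper's).
        duration = 0.25 if seg.std() > signal.std() else 0.5
        notes.append((pitch, duration))
    return notes

# Example: a synthetic ECG-like signal at roughly 1.2 beats per second.
t = np.linspace(0, 10, 2000)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(signal_to_notes(ecg_like)[:5])
```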