The molecular structures of different substances vary, leading to different Raman characteristic peaks, so Raman spectroscopy can be used as a "fingerprint" spectrum for molecular recognition. Raman spectroscopy offers high resolution, high sensitivity, and the capacity to detect biological samples. With these characteristics, Raman spectroscopy has received widespread attention in the field of blood glucose detection. However, current research faces the following issues: the Raman signals of substances in liquids are weak because of the influence of water molecules; in addition, blood has a low glucose concentration and a complex composition, so the glucose signal is interfered with by other substances, making it difficult to apply Raman spectroscopy to blood glucose concentration detection. To solve these problems, this article studies quantitative algorithms for Raman spectroscopy signals in mixtures, providing support for spectral signal processing in Raman blood glucose measurement. This article is an exploratory analysis of Raman blood glucose detection methods, and its main content is as follows:

(1) To solve the problems of poor quality and low stability of the spectral signals collected by a spatial heterodyne Raman spectrometer, exploratory experiments were conducted and the experimental procedures were optimized to obtain high-quality spectral data. The system was calibrated and optimized to obtain high-quality Raman interferograms. Experiments were then carried out to explore the optimal Raman spectral data collection methods for improving the stability and signal-to-noise ratio of the spectral signals. Finally, Raman spectroscopy experiments on mixtures were designed and carried out, using the spatial heterodyne Raman spectrometer to measure the Raman signals of solutions and plasma with different glucose concentrations and to obtain the experimental data for this study.

(2) The methods for improving the quality of Raman
spectroscopy signals based on the Lorentz function were studied, in order to remove the noise and baseline introduced during data collection, which degrade the quality of the spectral signals. First, Lorentz functions were used to simulate spectra, and the spectral signal of water measured by the spatial heterodyne Raman spectrometer was added to mimic real measurement conditions. Then, different denoising and baseline-removal algorithms were applied, and the signal-to-noise ratios and root mean square errors of the signals processed by the different methods were compared to select appropriate Raman spectral preprocessing algorithms. The experimental results show that the best baseline-removal method is polynomial fitting, which achieves a signal-to-noise ratio of 1.0723 dB and a root mean square error of 2.4020 on the simulated data. The wavelet transform achieves the best denoising effect, with a signal-to-noise ratio of 2.3056 dB and a root mean square error of 2.0841 on the simulated data.

(3) Raman spectroscopy quantitative regression models based on machine learning and parameter optimization were constructed to address the complex parameter tuning and low accuracy of traditional quantitative regression algorithms. To improve the accuracy of the regression models, search strategies and several swarm intelligence algorithms were used to optimize the parameters of the machine learning algorithms. After parameter optimization, the accuracy of glucose concentration estimation in plasma was effectively improved. The support vector regression model based on the particle swarm optimization algorithm achieved the best regression performance, with an R² value of 0.8396 and a root mean square error of 1.8192 mmol/L on the test set.

(4) By utilizing the powerful feature extraction ability of convolutional neural networks, blood glucose concentration estimation models based on convolutional neural networks and Raman spectroscopy were
constructed. When this model was used to quantitatively analyze the glucose concentration in plasma, the accuracy was low, so the convolutional neural networks were optimized. First, wavelet decomposition was used to extract features from different frequency bands of the Raman signals, and these features were fed as inputs to the convolutional neural network to train the regression model. Second, convolutional neural networks were used to extract features from the Raman signals, and the extracted features were used as inputs to an estimation model based on support vector regression. Finally, a feature stitching operation was introduced: two convolutional branches with different kernel sizes and strides were set up to obtain multiscale features of the signal, and the resulting features were concatenated and passed through a fully connected layer to produce the output. Through these optimizations, the regression accuracy of the model was improved. Among them, the plasma glucose concentration estimation model based on convolutional feature stitching achieved the best results, with an R² value of 0.7857 and a root mean square error of 1.8794 mmol/L; this method improves the R² value on the test set by about 7%. The quantitative regression models based on convolutional neural networks achieved acceptable results, but owing to limitations in data volume and other factors, their accuracy remains to be improved. This article studies quantitative algorithms for Raman spectroscopy signals in mixtures, improving the accuracy of Raman spectral signal processing when the signal-to-noise ratio is low. It provides a feasibility analysis of Raman blood glucose measurement and is of significance for future work on Raman blood glucose measurement.
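The preprocessing pipeline described in (2) — polynomial baseline removal followed by wavelet denoising of a Lorentzian line shape — can be sketched as follows. This is an illustrative numpy-only sketch, not the thesis code: the polynomial order, the single-level Haar transform (standing in for the multi-level wavelet transform used in the thesis), and the threshold value are all assumptions.

```python
import numpy as np

def remove_baseline(spectrum, order=4):
    """Fit a low-order polynomial to the spectrum and subtract it as the
    baseline (order 4 is an assumed value; the thesis compares methods)."""
    x = np.arange(len(spectrum))
    baseline = np.polyval(np.polyfit(x, spectrum, order), x)
    return spectrum - baseline

def haar_denoise(signal, threshold=0.1):
    """One-level Haar wavelet soft-threshold denoising (a simplified
    stand-in for the multi-level wavelet transform in the thesis)."""
    n = len(signal) // 2 * 2
    s = signal[:n]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-frequency coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-frequency coefficients
    # Soft-threshold the detail coefficients to suppress noise
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Synthetic Lorentzian peak on a sloping baseline with additive noise,
# mimicking the simulated spectra described in the abstract.
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 512)
lorentz = 1.0 / (1.0 + ((x - 1.0) / 0.5) ** 2)   # Lorentzian line shape
raw = lorentz + 0.02 * x + 0.3 + rng.normal(0.0, 0.05, x.size)
clean = haar_denoise(remove_baseline(raw))
```

In practice the thesis compares several baseline and denoising algorithms by SNR and RMSE on such simulated data before choosing polynomial fitting and the wavelet transform.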
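The parameter-search component of (3) — a particle swarm optimizer driving hyperparameter selection — can be illustrated with a minimal PSO loop. This is a generic textbook sketch, not the thesis implementation: the inertia and acceleration coefficients are common defaults, and the quadratic objective stands in for the cross-validation error surface of the support vector regression model that the thesis actually optimizes.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer. In the thesis setting, f would be
    the validation error of an SVR model as a function of its parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(f, 1, pos)
    g = pbest[pbest_val.argmin()].copy()
    g_val = float(pbest_val.min())
    w, c1, c2 = 0.7, 1.5, 1.5     # assumed textbook coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(f, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < g_val:
            g_val = float(vals.min())
            g = pos[vals.argmin()].copy()
    return g, g_val

# Stand-in objective: a quadratic bowl with minimum at (2, -1); a real run
# would evaluate SVR cross-validation error at each candidate parameter pair.
best, val = pso_minimize(lambda p: float(np.sum((p - np.array([2.0, -1.0])) ** 2)),
                         bounds=[(-5, 5), (-5, 5)])
```

The swarm converges toward the objective's minimum; swapping the lambda for a model-evaluation function yields the hyperparameter search described in the abstract.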
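The feature stitching architecture of (4) — two convolutional branches with different kernel sizes and strides whose outputs are concatenated before a fully connected layer — can be sketched as a bare forward pass. This is a structural illustration only: the kernel sizes, strides, and random placeholder weights are assumptions, and a real model would be trained end to end.

```python
import numpy as np

def conv1d(x, kernel, stride):
    """Valid 1-D convolution with stride (single channel, no framework)."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

def two_branch_forward(spectrum, rng):
    """Forward pass of a two-branch multiscale extractor: a short-kernel
    branch and a long-kernel branch are "stitched" (concatenated) and fed
    to a fully connected layer producing a scalar concentration estimate.
    All weights are random placeholders."""
    k_small = rng.normal(size=5)     # small receptive field, stride 1
    k_large = rng.normal(size=21)    # large receptive field, stride 4
    f1 = np.maximum(conv1d(spectrum, k_small, 1), 0.0)   # ReLU
    f2 = np.maximum(conv1d(spectrum, k_large, 4), 0.0)
    features = np.concatenate([f1, f2])                  # feature stitching
    w = rng.normal(size=features.size) / np.sqrt(features.size)
    return float(features @ w)       # fully connected layer, scalar output

rng = np.random.default_rng(1)
spectrum = np.abs(rng.normal(size=256))   # placeholder Raman spectrum
y = two_branch_forward(spectrum, rng)
```

The two branches see the signal at different scales, so the concatenated feature vector carries both narrow-peak and broad-band information, which is the motivation for the stitching design in the abstract.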