Manual acupuncture (MA) manipulation is a treatment method of Traditional Chinese Medicine (TCM). The teaching of MA manipulation still relies largely on oral transmission, and there is no unified approach for quantifying and modeling it. Because the hand movements in MA operations follow complex patterns with small amplitudes, and the relative movement between the needle-holding fingers is difficult to characterize accurately, a multimodal approach is needed to construct MA manipulation recognition methods. Although the traditional MA manipulation parameter meter can quantify the tactile characteristics of the fingers, it is not portable, nor can it capture the motion pattern of the acupuncture hand from a full visual perspective. Likewise, traditional methods based on three-dimensional motion tracking not only require multiple cameras to be set up simultaneously for gesture tracking, but also obtain only the spatial information on the relative positions of the needle-holding fingers from a visual perspective. Such methods cannot capture tactile details such as the pressure, relative movement, and whole-hand location coordinates of the needle-holding fingers during MA operations, making it difficult to build a quantitative description and recognition method for MA manipulation that considers both visual and tactile characteristics.

To address these issues, a flexible tactile finger-cot sensor with an array of dual-channel PVDF films was developed; it comprises an accelerometer, which captures the spatial position of the physician's hand, and two PVDF films, which collect the pressure on the needle-holding fingers. This thesis also presents preprocessing methods for the hand spatial-position signal and the tactile piezoelectric signal. In particular, it provides two data augmentation methods for the piezoelectric signals collected by the tactile finger cot, with a view to increasing the diversity of training samples. Subsequently, a method is presented for extracting features from the acquired array dual-channel PVDF film tactile piezoelectric signals. First, considering the cyclic nature of MA manipulation, the acquired dual-channel PVDF film tactile piezoelectric signal is processed by dividing a single sliding window into several clip windows, combining these separate windows into an action window embedded in a complete MA manipulation cycle, and finally obtaining the tactile piezoelectric pressure features of an entire MA manipulation cycle.

In addition, this thesis constructs a multimodal deep-learning recognition model that integrates visual and tactile features. 1. Within this model, a "visual feature extraction attention block" is defined on the basis of an attention mechanism to integrate the visual features of MA manipulation; the block explicitly models the interdependencies between network channels and includes a calibration mechanism for channel feature responses. These attention blocks are then stacked to form the visual feature extraction network of the full needling-technique recognition network. 2. To enrich tactile feature diversity, a "random block" is designed to increase the variety of tactile features and strengthen the generalization ability of the entire MA manipulation recognition model. 3. Based on the idea of tensor-based multimodal feature fusion, a mechanism for fusing the visual and tactile features of MA manipulation is constructed, completing the deep-learning network for full MA manipulation recognition. Finally, for the classification of the four basic MA manipulations (reinforcing by twirling and rotating (RFTR), reducing by twirling and rotating (RDTR), reinforcing by lifting and thrusting (RFLT), and reducing by lifting and thrusting (RDLT)), this thesis
invited twenty experts from three hospitals to validate the feasibility and validity of the proposed MA manipulation recognition method, with satisfactory results.
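The cycle-based feature extraction described above, which divides a single sliding window of the piezoelectric signal into clip windows and recombines them into an action window spanning one manipulation cycle, can be sketched as follows. The function names, window length, and stride here are illustrative assumptions, not the parameters actually used in the thesis:

```python
import numpy as np

def clip_windows(signal, clip_len, stride):
    """Divide one sliding window of the piezoelectric signal into
    shorter, possibly overlapping clip windows."""
    starts = range(0, len(signal) - clip_len + 1, stride)
    return np.stack([signal[s:s + clip_len] for s in starts])

def action_window(signal, clip_len, stride):
    """Recombine the clip windows into a single action-window feature
    vector intended to cover one complete MA manipulation cycle."""
    return clip_windows(signal, clip_len, stride).reshape(-1)

# Stand-in for one PVDF channel sampled over one manipulation cycle.
sig = np.sin(np.linspace(0, 2 * np.pi, 100))
clips = clip_windows(sig, clip_len=20, stride=10)   # shape (9, 20)
feats = action_window(sig, clip_len=20, stride=10)  # shape (180,)
```

The overlap between adjacent clip windows (stride smaller than clip length) preserves transitions between sub-movements, which matters for periodic actions such as twirling or lifting-thrusting.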